AI To Combat “Fake News”


Fake news. Most of us have fallen victim to it once or twice, and it perfectly illustrates how everyday technology platforms like Facebook can come with unintended consequences. But can AI combat disinformation on social media?

Over two thirds of Europeans encounter fake news at least once a week. That frequency, combined with its potential to influence the way we think, and ultimately vote, has brought the problem under the spotlight.

There are a considerable number of initiatives aimed at countering disinformation worldwide.

According to the latest figures published by the Duke Reporters’ Lab, there are more than 300 fact-checking projects active in more than 50 countries.

To date, fact-checking has been mostly based on manual human intervention to verify the authenticity of information, but as the volume of disinformation continues to grow, manual fact-checking is proving to be ineffective and inefficient in evaluating every piece of information that appears online.

Is this where AI steps in?

The first talk of automating online fact-checking took place around a decade ago, but the last few years have seen waves of funding directed at automated fact-checking initiatives that could help practitioners identify, verify and correct social media content. AI, big data and machine learning (ML) have emerged as potent tools to track news stories and identify fake news items.

Is AI the answer?

Even Mark Zuckerberg has sounded a cautious note about AI’s capability to meet the complex, contextual, and inherently human challenge of correctly understanding every missive a social user might send. The real challenge is moderating content on platforms used by billions of people, which is technically difficult because it requires building AI that can read and understand news in 200+ languages across the globe. Beyond this, Facebook realised that manual fact-checking on its own wouldn’t solve the fake news problem.

As AI can be leveraged to find words or even patterns of words that can throw light on fake news stories, Zuckerberg’s organisation eventually turned to AI to arrest the problem.

Why? Because AI can learn behaviours through pattern recognition, and it can do so at scale.

By harnessing this capability, platforms can identify fake news more quickly, taking cues from articles that people flagged as inaccurate in the past.

By identifying which stories are fake and which are real, AI can also highlight specific patterns and behaviours that manifest themselves in the inception of fake news.
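To make that idea concrete, here is a minimal sketch in Python: a text classifier trained on articles that human fact-checkers previously flagged, which then scores new stories by how closely they resemble flagged content. The toy dataset, labels and model choice below are hypothetical illustrations, not any platform’s actual system.

```python
# A minimal sketch: learn word patterns from previously flagged articles,
# then score unseen stories. The toy data and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Articles human fact-checkers have already labelled (1 = flagged as fake).
articles = [
    "Miracle cure doctors don't want you to know about",
    "Central bank announces quarterly interest rate decision",
    "Shocking secret the government is hiding from you",
    "Local council approves new cycling infrastructure budget",
]
labels = [1, 0, 1, 0]

# TF-IDF captures word patterns; logistic regression learns which
# patterns correlate with content flagged in the past.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

# Probability that a new story resembles previously flagged content.
new_story = "Shocking cure the government doesn't want you to know"
print(model.predict_proba([new_story])[0][1])
```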

Industry examples

A specific example of an AI solution that relies on this kind of method to combat social media disinformation is Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect fake news. The start-up says its deep learning algorithms are capable of learning patterns on complex, distributed data sets like social networks. Studies have shown that false news spreads faster than real news online, which is a pattern that Fabula initially used to help their AI spot misinformation. 

Fabula focuses on detecting differences in how content spreads on social media and using that to assign an authenticity score. The deep learning startup was acquired by Twitter, as the social media giant remains under increasing political pressure to get a handle on online disinformation and ensure that manipulative messages don’t, for example, get a free pass to fiddle with democratic processes.

Delphine Reynaud, Head of Strategy at Inpulsus, a strategic consultancy for influence(r) marketing, details this reality of fast-spreading fake news:

“So much of the algorithms in play at every social network have valued engagement over any “scoring” of authenticity, so it’s not surprising that disinformation - usually on divisive topics - will always drive a higher position in your newsfeed rather than real news. Real news can be boring - the algorithms don’t like boring.”

There are many more ways in which AI can be leveraged to fight fake news. Scoring web pages is a method pioneered by Google, which uses the accuracy of the facts a page presents as a ranking signal. The technique has grown in significance as it tries to understand a page’s content without relying on third-party signals. AI is also now at the core of ascertaining the semantic meaning of a web article in order to weigh facts in context. For instance, an NLP engine can sift through the subject of a story, its headline, main body text and geo-location, and find out very quickly whether other sites are reporting the same thing.
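As a rough illustration of that cross-source check, the sketch below extracts the key terms from a story and tests whether any other outlet’s headline overlaps enough to count as corroboration. The stopword list, example headlines and overlap threshold are all hypothetical choices.

```python
# A rough sketch of cross-source corroboration: pull key terms out of a
# story and check whether other outlets are reporting something similar.
import re

STOPWORDS = {"the", "a", "an", "in", "on", "of", "to", "and", "is", "for"}

def key_terms(text: str) -> set[str]:
    """Lower-case the text and keep only the informative words."""
    words = re.findall(r"[a-z']+", text.lower())
    return {w for w in words if w not in STOPWORDS and len(w) > 2}

def is_corroborated(story: str, others: list[str], threshold: float = 0.3) -> bool:
    """True if any other outlet's headline shares enough key terms."""
    terms = key_terms(story)
    return any(
        len(terms & key_terms(h)) / max(len(terms), 1) >= threshold
        for h in others
    )

story = "Mayor announces flood relief fund for riverside residents"
other_headlines = [
    "City mayor unveils relief fund after river flooding",
    "Transfer rumours: star striker linked with summer move",
]
print(is_corroborated(story, other_headlines))  # True: first headline overlaps
```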

Using the same method, AI can also spot the sensational words used to spread fake news further. These words often litter headlines, acting as a lure to attract more readers and spread a story faster and wider. Can you say clickbait? AI has been instrumental in discovering and flagging fake news headlines through keyword analytics, helping big platforms detect false stories and spot duplicates of stories that have already been debunked.
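A toy version of both ideas might look like the following: count the sensational phrases in a headline, then check whether it is a near-duplicate of a story that has already been debunked. The keyword list, the debunked example and the similarity threshold are hypothetical.

```python
# A toy sketch of keyword analytics for clickbait, plus near-duplicate
# matching against already-debunked stories. All data here is hypothetical.
from difflib import SequenceMatcher

SENSATIONAL = {"shocking", "miracle", "you won't believe", "secret", "exposed"}

def sensationalism_score(headline: str) -> int:
    """Count sensational phrases appearing in the headline."""
    lower = headline.lower()
    return sum(1 for phrase in SENSATIONAL if phrase in lower)

def matches_debunked(headline: str, debunked: list[str], threshold: float = 0.8) -> bool:
    """True if the headline is a near-duplicate of a debunked story."""
    return any(
        SequenceMatcher(None, headline.lower(), d.lower()).ratio() >= threshold
        for d in debunked
    )

debunked = ["You won't believe this miracle weight loss secret"]
headline = "You Won't Believe This Miracle Weight-Loss Secret!"
print(sensationalism_score(headline))        # 3
print(matches_debunked(headline, debunked))  # True
```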

Are there limitations?

How far does AI currently go in combating fake news on social media? The difficulty of combating disinformation is that fake news doesn’t always include facts that can be checked. 

This could involve a distorted or mis-captioned image, highly tendentious or biased reporting, or misleading stories that are not based on facts but rather use specious arguments to promote a particular cause. Another issue is false positives generated by satire or parody articles – how can AI respond to these issues?

This is where response-based identification comes into play. Instead of relying purely on the textual content of an article, AI can examine patterns of propagation as the news spreads through social channels. By looking at likes, comments, temporal patterns in the spread of stories, and the reputation of those who post and engage with the content, analysts can build a clear picture of how trustworthy that content is.
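To show the shape of that approach, here is a simplified sketch that blends a few propagation signals into a single trust score. The feature set, weights and example numbers are hypothetical, chosen only to illustrate the idea rather than reproduce any real system.

```python
# A simplified sketch of response-based scoring: instead of reading the
# article text, combine signals about *how* the story spreads.
from dataclasses import dataclass

@dataclass
class SpreadSignals:
    poster_reputation: float      # 0..1, track record of the sharing accounts
    share_velocity: float         # 0..1, how abruptly shares spike over time
    comment_dispute_ratio: float  # 0..1, fraction of replies disputing it

def trust_score(s: SpreadSignals) -> float:
    """Weighted blend of propagation signals; higher means more trustworthy.
    The weights are hypothetical, not a published model."""
    return round(
        0.5 * s.poster_reputation
        + 0.3 * (1.0 - s.share_velocity)          # sudden viral spikes are suspicious
        + 0.2 * (1.0 - s.comment_dispute_ratio),  # heavy pushback is a warning sign
        2,
    )

# A story shared by low-reputation accounts, spiking fast, widely disputed:
print(trust_score(SpreadSignals(0.2, 0.9, 0.7)))  # 0.19 -> low trust
```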

There is also the growing topic of inherent bias in any AI model. Where was it created, and by whom? Do the outputs of an AI model unknowingly reflect a particular viewpoint based on its initial structure?

But this is only the start. Even though tools such as these are crucial in the front-line attack against social media disinformation, publishers and other organisations need to back them up with robust intervention strategies to take down or limit the spread of this content as soon as it appears. Ultimately, these innovations are best used as screening tools for human fact-checkers at social media and news organisations, augmenting their capabilities and flagging information that doesn’t look quite right for verification. They aren’t designed to replace people, but to augment processes and help them fact-check faster and more reliably. Above all, they can be used to empower organisations to uncover the truth and keep everybody informed.

Do you want to find out more?

To find out more about the project and our services, get in touch with the team.