Algorithms can be useful in detecting fake news, stopping its spread and countering misinformation


by Laks V.S. Lakshmanan, University of British Columbia

23 June 2023

Fake news is a complex problem and can span text, images and video.

For written articles in particular, there are several ways of generating fake news. A fake news article could be produced by selectively editing facts, including people’s names, dates or statistics. An article could also be completely fabricated with made-up events or people.

Fake news articles can also be machine-generated, as advances in artificial intelligence make it particularly easy to produce misinformation.

Damaging effects

Questions like: “Was there voter fraud during the 2020 U.S. elections?” or “Is climate change a hoax?” can be fact-checked by analyzing available data. Although such questions can be answered with a clear true or false, they continue to attract misinformation.

Misinformation and disinformation — or fake news — can have damaging effects on a large number of people in a short time. Although fake news existed long before recent technological advances, social media have exacerbated the problem.

A 2018 Twitter study showed that false news stories were more commonly retweeted by humans than by bots, and were 70 per cent more likely to be retweeted than true stories. The same study found that true stories took approximately six times longer to reach a group of 1,500 people and that, while true stories rarely reached more than 1,000 people, popular false news could spread to as many as 100,000.

The 2020 U.S. presidential election, COVID-19 vaccines and climate change have all been the subject of misinformation campaigns with grave consequences. Misinformation surrounding COVID-19 is estimated to cost between US$50 million and US$300 million daily. The cost of political misinformation can include civil disorder, violence and the erosion of public trust in democratic institutions.

Detecting misinformation

Misinformation can be detected by a combination of algorithms, machine-learning models and human fact-checkers. An important question is who is responsible for controlling, if not stopping, the spread of misinformation once it is detected. Only social media companies are really in a position to exercise control over the spread of information through their networks.

A particularly simple but effective means of generating misinformation is to selectively edit news articles. For example, consider the headline “Ukrainian director and playwright arrested and accused of ‘justifying terrorism.’” It was produced by replacing “Russian” with “Ukrainian” in a sentence from a real news article.

A multi-faceted approach is needed to detect misinformation online in order to control its growth and spread.

Communications in social media can be modelled as networks, with users forming the points and communications forming the links between them; a retweet or like of a post reflects a connection between two points. In this network model, spreaders of misinformation tend to form much more densely connected core-periphery structures than users spreading the truth.

My research group has developed efficient algorithms for detecting dense structures from communication networks. This information can be analyzed further for detecting instances of misinformation campaigns.
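To give a flavour of what such dense structures look like, here is a minimal sketch using the standard k-core decomposition from the networkx library. The toy retweet edges are invented for illustration, and the k-core is only a stand-in for the group's actual algorithms.

```python
import networkx as nx

# Hypothetical retweet interactions: (retweeter, original poster).
edges = [("u1", "u2"), ("u2", "u3"), ("u1", "u3"),
         ("u3", "u4"), ("u4", "u1"), ("u5", "u1")]

G = nx.Graph(edges)

# The k-core with the largest k is a standard proxy for a densely
# connected core; in a misinformation setting, its members would be
# candidates for further content analysis.
core_numbers = nx.core_number(G)
densest = nx.k_core(G, k=max(core_numbers.values()))
print("Candidate dense core:", sorted(densest.nodes))
```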

Since these algorithms rely on communication structure alone, content analysis conducted by algorithms and humans is needed to confirm instances of misinformation.

Detecting manipulated articles takes careful analysis. Our research used a neural network-based approach that combines textual information with an external knowledge base to detect such tampering.
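As a toy illustration of the underlying idea (not the neural model itself), an article's entities can be cross-checked against an external knowledge base of recorded facts. The event key and the single stored fact below are invented for the example.

```python
# Toy illustration only: cross-checking an entity in an article against
# an external knowledge base. The approach described above instead uses
# a neural network over the text and the knowledge base.
knowledge_base = {
    # (event, attribute) -> recorded value; entry invented for the example
    ("director arrested for 'justifying terrorism'", "nationality"): "Russian",
}

def entity_matches(event: str, attribute: str, value_in_article: str) -> bool:
    """Return False when the article contradicts a recorded fact."""
    recorded = knowledge_base.get((event, attribute))
    return recorded is None or recorded == value_in_article

# The edited headline swaps "Russian" for "Ukrainian", so the check fails.
print(entity_matches("director arrested for 'justifying terrorism'",
                     "nationality", "Ukrainian"))  # False
```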

Stopping the spread

Detecting misinformation is just half the battle — decisive action is required to stop its spread. Strategies for combating the spread of misinformation in social networks include both intervention by internet platforms and launching counter-campaigns to neutralize fake news campaigns.

Intervention can take hard forms, like suspending a user’s account, or softer measures like labelling a post as suspicious.

Algorithms and AI-powered networks are not 100 per cent reliable. There is a cost both to mistakenly intervening on a true item and to failing to intervene on a fake one.

To that end, we designed a smart intervention policy that automatically decides whether to intervene on an item based on its predicted truthiness and predicted popularity.
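A minimal sketch of such a cost-based rule appears below. The cost constants and the simple expected-cost comparison are illustrative assumptions, not the actual policy.

```python
# A minimal sketch of a cost-based intervention rule. The constants and
# the linear expected-cost trade-off are assumptions for illustration.
def should_intervene(p_fake: float, predicted_reach: int,
                     harm_per_exposure: float = 1.0,
                     cost_of_wrong_block: float = 500.0) -> bool:
    """Intervene when the expected harm of letting a possibly fake item
    spread exceeds the expected cost of mistakenly suppressing a true one."""
    expected_harm = p_fake * predicted_reach * harm_per_exposure
    expected_cost = (1.0 - p_fake) * cost_of_wrong_block
    return expected_harm > expected_cost

# An item judged 70% likely to be fake and predicted to reach 10,000 users:
print(should_intervene(0.7, 10_000))  # True: expected harm dominates
```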

Countering fake news

Counter-campaigns that aim to minimize, if not neutralize, the effects of misinformation campaigns must factor in the major differences between truth and fake news in how quickly and extensively each spreads.

Besides these differences, reactions to stories can vary with the user, the topic and the length of the post. Our approach takes all these factors into account and devises an efficient counter-campaign strategy that effectively mitigates the propagation of misinformation.
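As a rough illustration (not the method described above), one simple heuristic is to seed a counter-campaign with the most-connected users, since a correction must reach at least as widely and quickly as the fake story it answers. The synthetic network and the budget below are invented for the example.

```python
# One simple seeding heuristic for a counter-campaign: recruit the
# highest-degree users on the network to share the correction.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)  # synthetic social network
budget = 5  # number of users recruited to share the correction

seeds = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:budget]
print("Counter-campaign seeds:", [node for node, _ in seeds])
```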

Recent advances in generative AI, particularly those powered by large language models such as ChatGPT, make it easier than ever to create articles at great speed and in significant volume, raising the challenge of detecting misinformation and countering its spread at scale and in real time. Our current research continues to address this ongoing challenge, which has enormous societal impact.

Laks V.S. Lakshmanan, Professor of Computer Science, University of British Columbia

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.