 

Algorithms can be useful in detecting fake news, stopping its spread and countering misinformation

23 June 2023




Laks V.S. Lakshmanan, University of British Columbia

Fake news is a complex problem and can span text, images and video.

For written articles in particular, there are several ways of generating fake news. A fake news article could be produced by selectively editing facts, including people’s names, dates or statistics. An article could also be completely fabricated with made-up events or people.

Fake news articles can also be machine-generated as advances in artificial intelligence make it particularly easy to generate misinformation.

Damaging effects

Questions like “Was there voter fraud during the 2020 U.S. elections?” or “Is climate change a hoax?” can be fact-checked by analyzing available data and answered with a clear true or false. Yet questions like these remain prime targets for misinformation.

Misinformation and disinformation — or fake news — can have damaging effects on a large number of people in a short time. Although the notion of fake news existed well before recent technological advances, social media have exacerbated the problem.

A 2018 Twitter study showed that false news stories were retweeted more often by humans than by bots, and were 70 per cent more likely to be retweeted than true stories. The same study found that true stories took approximately six times longer than false ones to reach a group of 1,500 people and that, while true stories rarely reached more than 1,000 people, popular false stories could spread to as many as 100,000.

The 2020 U.S. presidential election, COVID-19 vaccines and climate change have all been the subject of misinformation campaigns with grave consequences. It is estimated that misinformation surrounding COVID-19 costs between US$50 million and US$300 million each day. The cost of political misinformation can be civil disorder, violence or even the erosion of public trust in democratic institutions.

Detecting misinformation

Misinformation can be detected through a combination of algorithms, machine-learning models and human review. An important question is who is responsible for controlling, if not stopping, the spread of misinformation once it is detected. Only social media companies are really in a position to exercise control over the spread of information through their networks.

A particularly simple but effective means of generating misinformation is to selectively edit news articles. Consider, for example, the headline “Ukrainian director and playwright arrested and accused of ‘justifying terrorism.’” It was produced by replacing “Russian” with “Ukrainian” in a sentence from a real news article.

A multi-faceted approach is needed to detect misinformation online in order to control its growth and spread.

Communications in social media can be modelled as networks, with the users forming points in the network model and the communications forming links between them; a retweet or like of a post reflects a connection between two points. In this network model, spreaders of misinformation tend to form much more densely connected core-periphery structures than users spreading truth.

My research group has developed efficient algorithms for detecting dense structures from communication networks. This information can be analyzed further for detecting instances of misinformation campaigns.
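As a rough illustration of this idea (a generic sketch using the networkx library, not the specific algorithms from our work), one can build a graph from retweet interactions and extract its k-core: the largest subgraph in which every account interacts with at least k others. Unusually dense cores are candidates for closer review. The edge list and threshold below are made-up example values.

# Illustrative sketch: find densely connected groups of accounts in a
# retweet/like network using a k-core decomposition (networkx).
import networkx as nx

# Each edge means "user A retweeted or liked a post by user B".
edges = [
    ("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"),
    ("d", "a"), ("b", "d"), ("e", "a"), ("f", "g"),
]
G = nx.Graph(edges)

# Keep only the subgraph in which every remaining account has at least
# k connections inside the group; such dense cores warrant a closer look.
k = 3
dense_core = nx.k_core(G, k=k)
print(f"{k}-core accounts:", sorted(dense_core.nodes()))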

Since these algorithms rely on communication structure alone, content analysis conducted by algorithms and humans is needed to confirm instances of misinformation.

Detecting manipulated articles takes careful analysis. Our research used a neural network-based approach that combines textual information with an external knowledge base to detect such tampering.
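The grounding idea can be illustrated with a toy check (a deliberate simplification, not the neural model from our research): compare an entity mentioned in a claim against a small external fact store and flag any mismatch. The mini knowledge base, the example headline and the string-matching rule below are all hypothetical.

# Toy illustration of grounding a claim against an external knowledge base;
# the fact store, example headline and matching rule are hypothetical.
knowledge_base = {
    # event description -> nationality recorded by trusted sources
    "director and playwright arrested for 'justifying terrorism'": "Russian",
}

def flags_entity_swap(claim: str, event: str) -> bool:
    # Return True if the claim contradicts the nationality stored for the event.
    recorded = knowledge_base.get(event)
    if recorded is None:
        return False  # nothing to check against
    # A real system would use entity linking and learned text representations;
    # here we simply look for a conflicting nationality word in the claim.
    conflicting = {"Russian", "Ukrainian"} - {recorded}
    return any(word in claim for word in conflicting)

claim = "Ukrainian director and playwright arrested and accused of 'justifying terrorism'"
print(flags_entity_swap(claim, "director and playwright arrested for 'justifying terrorism'"))  # True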

Stopping the spread

Detecting misinformation is just half the battle — decisive action is required to stop its spread. Strategies for combating the spread of misinformation in social networks include both intervention by internet platforms and launching counter-campaigns to neutralize fake news campaigns.

Intervention can take hard forms, like suspending a user’s account, or softer measures like labelling a post as suspicious.

Algorithms and AI models are not 100 per cent reliable. There is a cost to mistakenly intervening on a true item, just as there is a cost to not intervening on a fake one.

To that end, we designed a smart intervention policy that automatically decides whether to intervene on an item based on its predicted truthfulness and predicted popularity.
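One generic way to frame such a decision (a sketch of the general idea, not the exact policy from our work) is to compare expected costs: intervene when the expected harm of letting a likely-fake item spread outweighs the expected cost of mistakenly suppressing a true one. The predicted values and cost constants below are hypothetical.

# Illustrative expected-cost rule for deciding whether to intervene on a post;
# the predicted probability, reach and cost constants are hypothetical.
def should_intervene(p_fake: float, predicted_reach: int,
                     harm_per_view: float = 1.0,
                     cost_of_wrong_intervention: float = 500.0) -> bool:
    # Intervene when the expected harm of letting misinformation spread
    # exceeds the expected cost of suppressing a true post by mistake.
    expected_harm = p_fake * predicted_reach * harm_per_view
    expected_false_alarm = (1.0 - p_fake) * cost_of_wrong_intervention
    return expected_harm > expected_false_alarm

# A post judged 70 per cent likely to be fake, predicted to reach 10,000 users:
print(should_intervene(p_fake=0.7, predicted_reach=10_000))  # True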

Countering fake news

Counter-campaigns aimed at minimizing, if not neutralizing, the effects of misinformation campaigns must account for the major differences in how quickly and how widely true and false stories spread.

Besides these differences, reactions to stories vary with the user, the topic and the length of the post. Our approach takes all of these factors into account to devise an efficient counter-campaign strategy that mitigates the propagation of misinformation.
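As a generic illustration of the seeding problem behind a counter-campaign (a standard greedy heuristic, not the strategy from our research, and one that ignores the user, topic and post-length effects noted above), one can pick the handful of accounts whose posts are expected to carry a correction to the most users. The network, spreading probability and budget below are hypothetical.

# Generic sketch of choosing accounts to seed with a correction so that it
# reaches many users; a greedy heuristic, not the approach described above.
import random
import networkx as nx

def expected_spread(G, seeds, prob=0.1, trials=200):
    # Estimate how many users a message starting from `seeds` reaches,
    # assuming each follower link passes it on with probability `prob`.
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in G.successors(u):
                    if v not in active and random.random() < prob:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_seeds(G, budget, **kw):
    # Greedily add the account that most increases the expected reach.
    seeds = []
    for _ in range(budget):
        best = max((n for n in G if n not in seeds),
                   key=lambda n: expected_spread(G, seeds + [n], **kw))
        seeds.append(best)
    return seeds

# Hypothetical "who-sees-whom" network: an edge u -> v means v sees u's posts.
G = nx.gnp_random_graph(50, 0.08, seed=1, directed=True)
print(greedy_seeds(G, budget=3))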

Recent advances in generative AI, particularly those powered by large language models such as ChatGPT, make it easier than ever to create articles at great speed and in significant volume, raising the challenge of detecting misinformation and countering its spread at scale and in real time. Our current research continues to address this ongoing challenge, which has enormous societal impact.

Laks V.S. Lakshmanan, Professor of Computer Science, University of British Columbia

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.



