AIhub.org
 

Generative AI is already being used in journalism – here’s how people feel about it


21 February 2025




By T.J. Thomson, RMIT University; Michelle Riedlinger, Queensland University of Technology; Phoebe Matich, Queensland University of Technology, and Ryan J. Thomas, Washington State University

Generative artificial intelligence (AI) has taken off at lightning speed in the past couple of years, creating disruption in many industries. Newsrooms are no exception.

A new report published this week finds that news audiences and journalists alike are concerned about how news organisations are – and could be – using generative AI such as chatbots, image, audio and video generators, and similar tools.

The report draws on three years of interviews and focus group research into generative AI and journalism in Australia and six other countries (United States, United Kingdom, Norway, Switzerland, Germany and France).

Only 25% of our news audience participants were confident they had encountered generative AI in journalism. About 50% were unsure or suspected they had.

This suggests a potential lack of transparency from news organisations when they use generative AI. It could also reflect a lack of trust between news outlets and audiences.

Who or what makes your news – and how – matters for a host of reasons.

Some outlets tend to use more or fewer sources than others, for example, or to rely on certain kinds of sources – such as politicians or experts – more heavily.

Some outlets under-represent or misrepresent parts of the community. This is sometimes because the news outlet’s staff themselves aren’t representative of their audience.

Carelessly using AI to produce or edit journalism can reproduce some of these inequalities.

Our report identifies dozens of ways journalists and news organisations can use generative AI. It also summarises how comfortable news audiences are with each.

Overall, the news audiences we spoke to felt most comfortable with journalists using AI for behind-the-scenes tasks – such as transcribing an interview or suggesting ideas for how to cover a topic – rather than for editing and creating content.

But comfort is highly dependent on context. Audiences were quite comfortable with some editing and creating tasks when the perceived risks were lower.

The problem – and opportunity

Generative AI can be used in just about every part of journalism.

For example, a photographer could cover an event. Then, a generative AI tool could select what it “thinks” are the best images, edit the images to optimise them, and add keywords to each.

Computer software can try to recognise objects in images and add keywords, leading to potentially more efficient image-processing workflows. Elise Racine / Better Images of AI / Moon over Fields, CC BY

These might seem like relatively harmless applications. But what if the AI identifies something or someone incorrectly, and these keywords lead to mis-identifications in the photo captions? What if the criteria humans think make “good” images are different to what a computer might think? These criteria may also change over time or in different contexts.

AI can also make things up completely. Images can appear photorealistic but show things that never happened. Videos can be entirely generated with AI, or edited with AI to change their context.

Generative AI is also frequently used for writing headlines or summarising articles. These sound like helpful applications for time-poor individuals, but some news outlets are using AI to rip off others’ content.

AI-generated news alerts have also gotten the facts wrong. As an example, Apple recently suspended its automatically generated news notification feature. It did this after the feature falsely claimed US murder suspect Luigi Mangione had killed himself, with the source attributed as the BBC.

What do people think about journalists using AI?

Our research found news audiences seem to be more comfortable with journalists using AI for certain tasks when they themselves have used it for similar purposes.

For example, the people interviewed were largely comfortable with journalists using AI to blur parts of an image. Our participants said they used similar tools on video conferencing apps or when using the “portrait” mode on smartphones.

Likewise, when you insert an image into popular word processing or presentation software, it might automatically create a written description of the image for people with vision impairments. Those who’d previously encountered such AI descriptions of images felt more comfortable with journalists using AI to add keywords to media.

Popular word processing and presentation software can automatically generate alt-text descriptions for images inserted into documents or presentations. T.J. Thomson

The most frequent way our participants encountered generative AI in journalism was when journalists reported on AI content that had gone viral.

For example, when an AI-generated image purported to show Princes William and Harry embracing at King Charles’s coronation, news outlets reported on this false image.

Our news audience participants also saw notices that AI had been used to write, edit or translate news articles. They saw AI-generated images accompanying some of these. This is a popular approach at The Daily Telegraph, which uses AI-generated images to illustrate many of its opinion columns.

The Daily Telegraph frequently turns to generative AI to illustrate its opinion columns, sometimes generating more photorealistic illustrations and sometimes less photorealistic ones. T.J. Thomson

Overall, our participants felt most comfortable with journalists using AI for brainstorming or for enriching already created media. This was followed by using AI for editing and creating. But comfort depends heavily on the specific use.

Most of our participants were comfortable with turning to AI to create icons for an infographic. But they were quite uncomfortable with the idea of an AI avatar presenting the news, for example.

On the editing front, a majority of our participants were comfortable with using AI to animate historical images, like this one. AI can be used to “enliven” an otherwise static image in the hopes of attracting viewer interest and engagement.

A historical photograph from the State Library of Western Australia’s collection has been animated with AI (a tool called Runway) to introduce motion to the still image. T.J. Thomson.

Your role as an audience member

If you’re unsure if or how journalists are using AI, look for a policy or explainer from the news outlet on the topic. If you can’t find one, consider asking the outlet to develop and publish a policy.

Consider supporting media outlets that use AI to complement and support – rather than replace – human labour.

Before making decisions, consider the past trustworthiness of the journalist or outlet in question, and what the evidence says.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University; Michelle Riedlinger, Associate Professor in Digital Media, Queensland University of Technology; Phoebe Matich, Postdoctoral Research Fellow, Generative Authenticity in Journalism and Human Rights Media, ADM+S Centre, Queensland University of Technology, and Ryan J. Thomas, Associate Professor, Washington State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.
