AI UK: discussing the role and impact of science journalism


by Lucy Smith
24 March 2023




Hosted by the Alan Turing Institute, AI UK is a two-day conference that showcases artificial intelligence and data science research, development, and policy in the UK. This year, the event took place on 21 and 22 March, and the theme was the use of data science and AI to solve real-world challenges.

Given AIhub’s mission to connect the AI community to the general public, and to report the breadth of AI research without the hype, the panel session on science journalism particularly piqued my interest. Chaired by science journalist Anjana Ahuja, “Impacting technology through quality science journalism” drew on the opinions of Research Fellow Mhairi Aitken, science reporter Melissa Heikkilä, and writer, presenter and comedian Timandra Harkness.

The speakers talked about the role that journalism has to play in understanding AI systems and their implementation in wider society. They shone a light on some of the failings in AI reporting, and considered how we can go about improving this.

When done well, journalism can help the public understand what AI is, and what it isn't, and explain what the technology is capable of today and what it might be capable of in the future. Unfortunately, much of the copy in the mainstream media is sensationalised, creating unrealistic expectations and distracting from many of the pressing issues that need to be addressed right now.

The panel considered the role of journalism in holding to account those who push AI. The concentration of power in the hands of a few companies, and the lack of transparency around how their products work and how they are developed, is something that more journalists should be investigating in detail. It was noted that we, as a society, are remarkably reverent towards companies producing AI technology compared with those producing other systems and products; the level of scrutiny is much lower than in other industries or sectors. It is important to have journalists who understand the implications of the technology and who are prepared to do the necessary investigative work to hold companies accountable.

There was a general concern that big tech companies are controlling the narrative. Many articles report on the latest shiny toy, taking at face value the hyped-up press releases from the companies themselves rather than investigating the systems and their implications in depth. A fundamental problem is the tendency to focus on the AI system and its capabilities, rather than on the decisions of those in power; this shifts responsibility from a person to an algorithm. Journalists should be asking questions such as: in what context are these systems being deployed? Who decided that it was acceptable to use an algorithm for this particular decision-making task? Why did they make that decision?

As an example, the speakers pointed to coverage of the recent releases of large language models. Aside from gushing at some of the outputs, most of the talk was around how students were going to use them to cheat in exams. There was very little content about the systems themselves: how they were developed, where the training data came from, how it was labelled, and whether the process was ethical. There was also a narrative of inevitability, that these systems are in the wild and there is nothing we can do about it. It is a key role of journalists to challenge that narrative.

The panel offered some thoughts on how AI coverage could be improved. One suggestion was that journalists approach AI reporting more like political reporting, holding a political lens to new technology developments and their potential consequences. The reporting should also be expanded beyond the technology to include political and ethical issues. This makes the topic relevant to everybody and opens up the conversation to a larger audience. In addition, journalism that considers a broad range of perspectives and backgrounds leads to more rounded coverage.

Another point was that journalists need to be more inquisitive and not be afraid to ask difficult questions. There seems to be a reluctance to cover topics involving more technical concepts, but not being able to code shouldn't be a barrier to asking probing questions.

From a research perspective, practitioners should be more proactive in engaging with the media. As well as being available to talk to the press, researchers can write blogs and comment pieces to get their voice out there. Public engagement and education surrounding the field are two other areas to which researchers can contribute.




Lucy Smith is Senior Managing Editor for AIhub.

