AI UK: discussing the role and impact of science journalism


by Lucy Smith
24 March 2023




Hosted by the Alan Turing Institute, AI UK is a two-day conference that showcases artificial intelligence and data science research, development, and policy in the UK. This year, the event took place on 21 and 22 March, and the theme was the use of data science and AI to solve real-world challenges.

Given AIhub’s mission to connect the AI community to the general public, and to report the breadth of AI research without the hype, the panel session on science journalism particularly piqued my interest. Chaired by science journalist Anjana Ahuja, “Impacting technology through quality science journalism” drew on the opinions of Research Fellow Mhairi Aitken, science reporter Melissa Heikkilä, and writer, presenter and comedian Timandra Harkness.

The speakers talked about the role that journalism has to play in understanding AI systems and their implementation in wider society. They shone a light on some of the failings in AI reporting, and considered how we can go about improving this.

When done well, journalism can help the public understand what AI is and what it isn’t, explaining what the technology is capable of today and what it might be capable of in the future. Unfortunately, much of the copy in the mainstream media is sensationalised, creating unrealistic expectations and detracting from many of the pressing issues that need to be addressed right now.

The panel considered the role of journalism in holding to account those who push AI. The concentration of power in the hands of a few companies, and the low level of transparency around the workings and development of their products, is something that more journalists should be investigating in detail. It was noted that we, as a society, tend to treat companies producing AI technology with far more reverence than producers of other systems and products; the level of scrutiny is much lower than in other industries or sectors. It is important to have journalists who understand the implications of the technology and who are prepared to do the necessary investigative work to hold companies accountable.

There was a general concern that big tech companies are controlling the narrative. Many articles report the latest shiny toy, taking at face value the hyped-up press releases from the companies themselves, rather than investigating the systems and their implications in depth. A fundamental problem is the tendency to focus on the AI system and its capabilities, rather than on the decisions of those in power; this shifts responsibility from a person to an algorithm. Journalists should be asking questions such as: In what context are these systems being deployed? Who decided that it was acceptable to use an algorithm for this particular decision-making task? Why did they make that decision?

As an example, the speakers pointed to coverage of the recent releases of large language models. Aside from gushing at some of the outputs, most of the talk was about how students were going to use them to cheat in exams. There was very little content about the systems themselves: how they were developed, where the training data came from and how it was labelled, and whether the process was ethical. There was also a narrative of inevitability, that these systems are in the wild and there’s nothing we can do about it. It is a key role of journalists to challenge that narrative.

The panel offered some thoughts on how AI coverage could be improved. One suggestion was that journalists approach AI reporting more like political reporting, holding a political lens to new technology developments and their potential consequences. The reporting should also be expanded beyond the technology to include political and ethical issues. This makes the topic relevant to everybody and opens up the conversation to a larger audience. In addition, journalism that considers a broad range of perspectives and backgrounds leads to more rounded coverage.

Another point was that journalists need to be more inquisitive and not be afraid to ask telling questions. There seems to be a reluctance to cover topics involving more technical concepts, but not being able to code shouldn’t be a barrier to asking probing questions.

From a research perspective, practitioners should be more proactive in engaging with the media. As well as being available to talk to the press, researchers can write blogs and comment pieces to get their voice out there. Public engagement and education surrounding the field are two other areas to which researchers can contribute.




Lucy Smith is Senior Managing Editor for AIhub.

















 











