Hosted by the Alan Turing Institute, AI UK is a two-day conference that showcases artificial intelligence and data science research, development, and policy in the UK. This year, the event took place on 21 and 22 March, and the theme was the use of data science and AI to solve real-world challenges.
Given AIhub’s mission to connect the AI community to the general public, and to report on the breadth of AI research without the hype, the panel session on science journalism particularly piqued my interest. Chaired by science journalist Anjana Ahuja, “Impacting technology through quality science journalism” drew on the opinions of Research Fellow Mhairi Aitken, science reporter Melissa Heikkilä, and writer, presenter and comedian Timandra Harkness.
The speakers talked about the role that journalism has to play in understanding AI systems and their implementation in wider society. They shone a light on some of the failings in AI reporting, and considered how we can go about improving this.
When done well, journalism can help the public understand what AI is, and what it isn’t, and explain what the technology is capable of today, and what it might be capable of in the future. Unfortunately, much of the copy in the mainstream media is sensationalised, setting unrealistic expectations and distracting from many of the pressing issues that need to be addressed right now.
The panel considered the role of journalism in holding to account those who push AI. The concentration of power in the hands of a few companies, and the low level of transparency with regard to the workings and development process of their products, is something that more journalists should be investigating in detail. It was noted that we, as a society, are incredibly reverent towards companies producing AI technology; the level of scrutiny is much lower than in other industries or sectors. It is important to have journalists who understand the implications of the technology and who are prepared to do the necessary investigative work to hold companies accountable.
There was a general concern that big tech companies are controlling the narrative. Many articles report on the latest shiny toy, and take at face value the hyped-up press releases from the companies themselves, rather than investigating the systems and their implications in depth. A fundamental problem is the tendency to focus on the AI system and its capabilities, rather than on the decisions of those in power. This shifts the responsibility from a person to an algorithm. Journalists should be asking questions such as: In what context are these systems being deployed? Who decided that it was acceptable to use an algorithm for this particular decision-making task? Why did they make that decision?
As an example, the speakers pointed to coverage of the recent releases of large language models. Aside from gushing at some of the outputs, most of the talk centred on how students were going to use them to cheat in exams. There was very little content about the systems themselves: how they were developed, where the training data came from and how it was labelled, and whether the process was ethical. There was also a narrative of inevitability, that these systems are in the wild and there’s nothing we can do about it. It is a key role of journalists to challenge that narrative.
The panel offered some thoughts on how AI coverage could be improved. One suggestion was that journalists approach AI reporting more like political reporting, applying a political lens to new technology developments and their potential consequences. Reporting should also be expanded beyond the technology itself to include political and ethical issues. This makes the topic relevant to everybody and opens up the conversation to a larger audience. In addition, journalism that considers a broad range of perspectives and backgrounds leads to more rounded coverage.
Another point was that journalists need to be more inquisitive and not be afraid to ask telling questions. There seems to be a reluctance to cover topics involving more technical concepts, but not being able to code shouldn’t be a barrier to asking probing questions.
From a research perspective, practitioners should be more proactive in engaging with the media. As well as being available to talk to the press, researchers can write blogs and comment pieces to get their voice out there. Public engagement and education surrounding the field are two other areas to which researchers can contribute.