AI UK: discussing the role and impact of science journalism


by Lucy Smith
24 March 2023




Hosted by the Alan Turing Institute, AI UK is a two-day conference that showcases artificial intelligence and data science research, development, and policy in the UK. This year, the event took place on 21 and 22 March, and the theme was the use of data science and AI to solve real-world challenges.

Given AIhub’s mission to connect the AI community to the general public, and to report the breadth of AI research without the hype, the panel session on science journalism particularly piqued my interest. Chaired by science journalist Anjana Ahuja, “Impacting technology through quality science journalism” drew on the opinions of Research Fellow Mhairi Aitken, science reporter Melissa Heikkilä, and writer, presenter and comedian Timandra Harkness.

The speakers talked about the role that journalism has to play in understanding AI systems and their implementation in wider society. They shone a light on some of the failings in AI reporting, and considered how we can go about improving this.

When done well, journalism can help the public understand what AI is, and what it isn’t, and explain what the technology is capable of today, and what it might be capable of in the future. Unfortunately, much of the copy in the mainstream media is sensationalised, setting unrealistic expectations and detracting from many of the pressing issues that need to be addressed right now.

The panel considered the role of journalism in holding to account those who push AI. The concentration of power in the hands of a few companies, and the low level of transparency with regards to the workings and development process of products, is something that more journalists should be investigating in detail. It was noted that we, as a society, treat companies producing AI technology with remarkable reverence; the level of scrutiny is much lower than in other industries or sectors. It is important to have journalists who understand the implications of the technology and who are prepared to do the necessary investigative work to hold companies accountable.

There was a general concern that big tech companies are controlling the narrative. Many articles report on the latest shiny toy, and take at face value the hyped-up press releases from the companies themselves, rather than investigating the systems and their implications in depth. A fundamental problem is that there is a tendency to focus on the AI system, and its capabilities, rather than the decisions of those in power. This shifts the responsibility from a person to an algorithm. Journalists should be asking questions such as: In what context are these systems being deployed? Who decided that it was acceptable to use an algorithm for this particular decision-making task? Why did they make that decision?

As an example, the speakers pointed to coverage of the recent releases of large language models. Aside from gushing at some of the outputs, most of the talk was around how students were going to use them to cheat in exams. There was very little content about the systems themselves: how they were developed, where the training data came from, how it was labelled, and whether the process was ethical. There was also a narrative of inevitability, that these systems are in the wild and there’s nothing we can do about it. It is a key role of journalists to challenge that narrative.

The panel offered some thoughts on how AI coverage could be improved. One suggestion was that journalists approach AI reporting more like political reporting, holding a political lens to new technology developments and their potential consequences. The reporting should also be expanded beyond the technology to include political and ethical issues. This makes the topic relevant to everybody and opens up the conversation to a larger audience. In addition, journalism that considers a broad range of perspectives and backgrounds leads to more rounded coverage.

Another point was that journalists need to be more inquisitive and not be afraid to ask telling questions. There seems to be a reluctance to cover topics involving more technical concepts, but not being able to code shouldn’t be a barrier to asking probing questions.

From a research perspective, practitioners should be more proactive in engaging with the media. As well as being available to talk to the press, researchers can write blogs and comment pieces to get their voice out there. Public engagement and education surrounding the field are two other areas to which researchers can contribute.




Lucy Smith is Senior Managing Editor for AIhub.









