AIhub.org

AIhub coffee corner: AI images

07 April 2022




The AIhub coffee corner captures the musings of AI experts over a short conversation. This month, we delve into the topic of AI images.

The representation of AI in the media has long been a problem, with blue brains, white robots, and flying maths – usually completely unrelated to the content of the article – featuring heavily. Not too long ago, the team at Better Images of AI released a gallery of free-to-use images which they hope will increase public understanding of the different aspects of AI, and enable more meaningful conversations.

Joining the discussion this time are: Sabine Hauert (University of Bristol), Michael Littman (Brown University), Carles Sierra (CSIC), Anna Tahovska (Czech Technical University) and Oskar von Stryk (Technische Universität Darmstadt).

Sabine Hauert: There are lots of aspects we can consider when thinking about AI images. For example, how can we source or design better images for AI? How should AI be represented pictorially in articles, blogs etc? What’s the problem with images in AI? What do we need to consider when thinking about portraying AI in images?

Oskar von Stryk: Another question to consider is: what is the purpose of the image, and the context in which the image appears? I think this makes a big difference actually. Some things need to be contextualised, we need to consider the purpose of the article, and so on.

My other comment is that, in my experience with the media, around 50% of the time they report technically incorrectly, or at least partially so. This seems to be a kind of “law of nature”, an invariant. As a result, the only difference you care about is whether an article portrays a positive or a negative attitude towards the AI topics mentioned. I always say, “OK, I don’t care too much about the scientific incorrectness, as it seems quite unavoidable; if the mood is positive I can go with it”. So I think we need contextualisation to determine whether a picture is useful.

Carles Sierra: In terms of designing images, I was thinking about a similar concept to a hackathon but for a design school. Teams of designers, or individual designers, could propose images which represented different views or concepts within AI. It could be connected to an award. I would approach young people in design schools with concrete proposals, and have those as the object of the hackathon.

Sabine: Do you have an idea of the concepts we are missing?

Carles: I mean, we need to think about what kind of AI we are representing. Maybe solving a particular problem, or explaining a problem and some of the techniques that are being used for that. And then, after we give the designers a short explanation of that concept, we ask them to bring back some designs.

Sabine: With robotics it’s slightly easier because you can show a robot, or you can show a robot doing something. The AI one is a challenge because a lot of it is abstract. It could be that a lot of these images are slightly abstract. Would the media pick those up as something they use for their articles? Or, do we need more people in our AI images?

I was recently trying to find pictures for a report that we’re working on, and I was desperately looking for pictures of people using robots for applications. It’s really hard to get images that include both the people and the technology. You either have an abstract technology, or you have the application. You never really have that interface. So, maybe we need to stage this – photographers who spend a week taking photos of people working with the technology.

Oskar: What I actually like are comics – short cartoons which have two or three elements and a small conversation which points out something very clearly, or even drastically. I have collected a number of these. They can portray a point very well. Again, what’s the purpose? If it’s a journalist writing an article about an aspect of AI, then of course they look for a picture that’s attractive to a general audience, just to draw readers to the article, regardless of whether it is actually relevant to the content. For more scientifically oriented contexts, I like these cartoons which really highlight key issues.

Sabine: Schematics to explain the concepts then. Maybe we need some better schematics just to explain the basic concepts of AI.

Sabine: What are the challenges you face as a researcher? If a journalist needs a pretty picture of your own research to put at the top, what do you usually send them?

Oskar: Sometimes I have photographers come to my lab and we take nice pictures of the robots and people. The problem with robots is that people look at the hardware and don’t see the software which makes the intelligence. So, I always try to make the software more visible – usually by using big screens where we visualise the inside of the robot’s “brain”, for example. We show the localisation and how the environment is perceived, and so on.

Michael Littman: I was going to say graphs because that’s how I want to communicate. But, that’s not great…

Sabine: Maybe it’s not impossible to show a graph. We just need someone who’s an expert in data visualisation to make graphs look really pretty, so that they read almost like a picture. Maybe there are ways we can beautify figures so that they are acceptable as images in the media.

Anna Tahovska: In our institute we are lucky because we have a graphic designer employed here. We can put her in touch with the researchers, they can discuss the topic, and she can create graphics or photographs. It’s great for us because we run a lot of projects, and these have a lot of graphical elements. Also, there are a lot of articles we need images for, so it’s very beneficial to have this capability in-house.

Sabine: The New York Times does this with their articles. They have an artist who makes really abstract pictures for these articles, that can represent just a little bit of it, but it does the job. More artist engagement is a good idea.

Oskar: Actually, graphs can be interesting as well. For example, see the work of David Kriesel, a former member of a RoboCup team in the Humanoid League. He was the one who detected the famous Xerox scanner error, and he has been an invited speaker at the Chaos Computer Club. He does data analysis on lots of things; for example, he has looked at coronavirus data and analysed the German train company, Deutsche Bahn. His posts on LinkedIn are very highly rated, and his talks on YouTube about data analysis get many views. So I think if you combine data with interesting insights and conclusions, you can make it attractive to a large audience.

Sabine: Anything we should ban? Brains, the Terminator…

Oskar: When I talk to a general audience about robots, it’s a good sign if they think about industrial robots, but usually they think about the Terminator. And if it’s not terminating their life, they may fear it’s terminating their jobs.

Sabine: I have noticed robotics being used a lot as a portrayal of AI even when the topic has nothing to do with robotics. I always find that interesting because there is a bit of a separation between robotics and AI, depending on what field of AI you’re looking at. And yet, robots get used a lot as images. I guess because they’re a bit more visual.

Sabine: Any final thoughts on how we could source good images?

Carles: I agree with Anna. I think we should approach graphic designers and schools and give them a purpose – it could be a final year assignment to get a variety of images.

Oskar: Maybe we could get a list of key statements where there are typically misunderstandings around AI and robotics. We could explain the background to the designers, and they could come up with a graphical visualisation.



AIhub is dedicated to free high-quality information about AI.






©2024 - Association for the Understanding of Artificial Intelligence