Interview with Ali Boyle – talking AI, cognitive sciences and philosophy of mind

by Anil Ozdemir
25 May 2021




Ali Boyle is currently a Research Fellow in Kinds of Intelligence at the University of Cambridge and the University of Bonn. Her main research field is philosophy of mind and psychology, focusing particularly on nonhuman minds and the methods used to study them. She holds a PhD in Philosophy from the University of Cambridge. In this interview, we talk about artificial intelligence, cognitive sciences and philosophy of mind.

Can you briefly explain your research interests and their relation to AI?

My research focuses on theoretical questions about nonhuman minds: what are nonhuman minds like, and how can we learn about them? I was drawn to ask questions about nonhuman minds because, like all philosophers of mind, I’m interested in understanding the mind and mental phenomena. But traditionally, philosophy of mind has often focussed on human minds – which is obviously problematic, because most minds are not human! Focusing on human minds can lead to unduly anthropocentric, often over-intellectualist, accounts of mental phenomena.

Initially, I focussed on the minds of nonhuman animals. Comparative cognition is fascinating to a philosopher, because it gives rise to lots of entrenched debates with deep theoretical roots. For instance, I’m especially interested in whether animals have episodic memory. But the best evidence for episodic memory in humans is that they’re conscious of it and can tell us about it. This raises some interesting and difficult questions: How can we find out about episodic memory in animals, given that they don’t speak? Can we ever know anything about an animal’s conscious experience? And if animals did have episodic memory, how similar should we expect it to be to our own?

But this also led me to be interested in artificial agents, as they have, at least potentially, another kind of mind. Do artificial agents have the cognitive capacities we do? How could we tell if they did? There are clearly differences between artificial and biological agents – but which of these matter, when it comes to drawing comparisons between minds? By answering these questions, I think we stand to learn more about what minds are or could be like.

Can you tell us what methods or insights philosophy of mind can offer for cognitive sciences or AI?

I think there are a few ways in which philosophers can contribute to work in these areas.

Philosophy is an old and very wide-ranging discipline – and although philosophers do specialise, they can sometimes offer a new perspective on a problem by drawing on ideas from other areas of philosophy, or from its history. As well as this, philosophers often make use of thought experiments – imagined scenarios we can use to help us understand concepts or test the strength of theories. I think that the skills involved in constructing these thought experiments can be useful when it comes to experimental design in cognitive science and AI.

Perhaps most importantly, philosophers are trained to construct and analyse arguments, detect ambiguities and flawed patterns of reasoning, draw abstract distinctions and so on. These tools can be useful when it comes to understanding and resolving the kinds of theoretical debates that arise in cognitive science and AI. For instance, suppose we’re interested in whether animals can ‘read minds’ or whether GPT-3 understands language. These kinds of questions quickly take us into theoretical territory, and it’s here that philosophical tools can help.

What are, if any, potential contributions of AI research that can add to philosophy of mind?

I’m sure there are many ways in which AI could contribute to philosophy of mind – far more than I could say here!

One thing I’m especially interested in is what AI can tell us about the function of cognitive capacities. By building agents that have certain capacities and not others, we might learn about what each capacity contributes to a cognitive system, and about how different cognitive capacities interact.

I’m also interested in what AI might tell us about the methods used to detect cognitive capacities in animals, which are often subjected to criticism. My colleagues on the Animal-AI Olympics project have built a testbed in which artificial agents are tested using methods drawn from comparative psychology. Observing the behaviour of artificial agents in these testing scenarios could be a tool for understanding and evaluating these methods.

Can you explain to us what temporal cognition and episodic memory are?

I think of temporal cognition as including any cognitive capacity that involves an agent thinking about time, or deploying temporal concepts like ‘now’, ‘past’, ‘future’ or ‘-ago’. In that sense, temporal cognition could be quite a broad collection of things. But (following this paper) I think it’s useful to draw a distinction between temporal cognition, or thinking about time, and merely being sensitive to the passage of time. Here’s an example: a few hours after eating, my stomach typically begins to rumble and I’ll start thinking about food. So my stomach is sensitive to the passage of time, but this doesn’t involve temporal cognition: neither I nor my stomach is thinking about time in this scenario. On the other hand, if I suddenly realise that I haven’t eaten for a few hours and that I’ll soon be getting hangry, I am using temporal cognition.

Episodic memory is the kind of memory involved in remembering personally experienced past events. We often talk about this form of memory in terms of ‘reliving’ or ‘re-experiencing’ events – when you episodically remember something, you can ‘replay it in the mind’s eye’. Because episodic memory seems to provide agents with a cognitive connection to the past, it’s often taken to be a form of temporal cognition. I’ve argued that the relationship between the two is a bit more complicated: that episodic memory might be involved in temporal cognition in humans, but that there could be creatures with episodic memory who had no temporal concepts.

Do you think artificial agents have, or could potentially have, episodic memory?

There are a few ways we might think about a question like this. We might be speaking quite loosely, and asking whether there are any agents that have something that does at least some of the things episodic memory does. I think that there are – there are certainly some episodic-memory-inspired architectures, which enable agents to store records of past events.
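To make that idea concrete, here is a minimal sketch of the kind of thing such a buffer might do. It is not a reconstruction of any particular published architecture, and all the names in it are illustrative: the agent stores verbatim records of events keyed by a state embedding, and retrieves them by similarity to the current state.

```python
# A minimal sketch (no particular published architecture) of an
# episodic-memory-inspired buffer: events are stored verbatim, keyed by
# a state embedding, and recalled by similarity to the current state.

import numpy as np

class EpisodicBuffer:
    def __init__(self):
        self.keys = []    # state embeddings recorded when each event occurred
        self.events = []  # verbatim records of what the agent experienced

    def store(self, state_embedding, event):
        self.keys.append(np.asarray(state_embedding, dtype=float))
        self.events.append(event)

    def recall(self, query_embedding, k=1):
        """Return the k stored events whose keys lie nearest the query."""
        if not self.keys:
            return []
        query = np.asarray(query_embedding, dtype=float)
        distances = [np.linalg.norm(key - query) for key in self.keys]
        nearest = np.argsort(distances)[:k]
        return [self.events[i] for i in nearest]

# Usage: store an event, then cue recall with a similar state.
buffer = EpisodicBuffer()
buffer.store([0.9, 0.1], {"where": "kitchen", "what": "found food"})
print(buffer.recall([1.0, 0.0]))  # -> [{'where': 'kitchen', 'what': 'found food'}]
```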

But I think it’s useful to restrict the term ‘episodic memory’ to forms of memory that don’t just loosely resemble biological episodic memory, but also work in a similar way. On this way of thinking, I’m not convinced that any artificial agents have episodic memory. A central characteristic of biological episodic memory is that it’s constructive: it doesn’t store a faithful record of the past, but creates plausible reconstructions of the past based on bits of stored information – and I’m not aware of any artificial agents with this kind of constructive memory for the past. (I might be wrong about that – if I am, I’d love to know!) But I certainly think it’s possible that an artificial agent could one day have episodic memory in this sense.
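To sharpen the contrast with the verbatim buffer sketched above, here is a purely illustrative sketch of constructive recall, under the simplifying assumptions that events are dictionaries of details and that the ‘schema’ is hand-written rather than learned. Only fragments of each event are retained, and gaps are filled in at recall time from the schema, so recall yields a plausible reconstruction rather than a faithful record.

```python
# A purely illustrative sketch of *constructive* recall: only fragments
# of each event are retained, and missing details are filled in at recall
# time from a schema of what typically happens. The recalled event is a
# plausible reconstruction, not a faithful record. (The schema here is
# hand-written; in a real system it might be a learned generative model.)

import random

SCHEMA = {"where": "kitchen", "what": "ate a meal", "mood": "content"}

class ConstructiveMemory:
    def __init__(self, keep_prob=0.5):
        self.fragments = []       # lossy traces of past events
        self.keep_prob = keep_prob

    def store(self, event):
        # Encoding is lossy: each detail survives only with some probability.
        kept = {k: v for k, v in event.items() if random.random() < self.keep_prob}
        self.fragments.append(kept)

    def recall(self, index):
        # Reconstruction: start from the schema, overwrite with stored fragments.
        return {**SCHEMA, **self.fragments[index]}

# Usage: details lost at encoding are plausibly (but perhaps wrongly)
# filled in from the schema at recall.
memory = ConstructiveMemory()
memory.store({"where": "garden", "what": "found berries", "mood": "surprised"})
print(memory.recall(0))
```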

How far are we from making breakthroughs in artificial general intelligence, or from creating a self-aware machine? Are you optimistic?

My guess is that we’re a way off from AGI. I think that general intelligence in biological systems is underpinned by what’s sometimes called common sense, which enables us to approach a wide range of problems that we find trivial but that can be very challenging for AI. I don’t doubt that it’s possible to create artificial agents with common sense (my colleagues have written about this), but I think it remains a significant obstacle.

As for self-awareness, that’s another tricky one. There are already machines that can report on their own states – so in that limited sense self-aware machines already exist. But I think we generally mean something more substantial when we talk about whether machines are self-aware. One thing we might have in mind is whether they’re persons – whether they have reason and reflection, and the ability to think of themselves as ‘the same thinking thing in different times and places’, as Locke put it. Alternatively, we might just be asking whether they are conscious. I think that machines that are self-aware in either of these more robust senses are probably a way off. (But since creating self-aware machines has the potential to be ethically hazardous, I count that as an optimistic view!)

What are some key readings you recommend for those who are interested in understanding the way nonhuman minds work?

I keep coming back to Mary Midgley’s Beast and Man, which I read when I was just starting out in philosophy. It was first published in the 70s, so of course the scientific and philosophical context has changed a lot. But many of Midgley’s arguments about how we ought to think about animal minds are timeless. For something more contemporary, Kristin Andrews’ recent book How to Study Animal Minds is a great introduction to some of the theoretical issues surrounding the study of nonhuman minds – especially the worry that bias might infect our thinking about animals. And Michael Tye’s Tense Bees and Shell-Shocked Crabs is a nice introduction to arguments about consciousness in animals and artificial beings.




Anil Ozdemir is a postgraduate research associate in the Department of Computer Science at the University of Sheffield, UK.



