
New voices in AI: philosophy, cognitive science and AI with Dimitri Coelho Mollo

by Joe Daly
24 August 2022




Welcome to episode 8 of New voices in AI.

Dimitri Coelho Mollo shares his work on philosophy, cognitive science and AI.

You can find out more about his work on his site here.

All episodes of New voices in AI are available here

The music used is ‘Wholesome’ by Kevin MacLeod, licensed under Creative Commons.



transcript

Daly: Hello, and welcome to New voices in AI, the series from AIhub where we speak to students, early career researchers, and those with a new perspective on AI.

—

Daly: First of all, thank you so much for joining us today. Could you introduce yourself: sort of who you are and where you are?

Coelho Mollo: Right, of course. Thanks for having me. I’m Dimitri Coelho Mollo, and I’m an assistant professor in philosophy at Umeå University in Sweden. I work on philosophy of cognitive science, philosophy of AI, some philosophy of mind, and philosophy of biology. So I have a pretty wide range of interests.

Daly: It’s a pretty wide range, yeah! There’s a lot of philosophy, a lot of things happening there. Could you expand a little on what you’re working on currently?

Coelho Mollo: Right, cool. I think the shortest way to see the kind of thing I do is in terms of philosophy of science. Doing philosophy of science, you can try to figure out, on one side, how science works, but also, given a certain problem that exists in a science, how we might solve it, or help solve it, conceptually or theoretically. I do a little bit of both, targeted especially at AI and cognitive science.
The questions I work on are things like: what are the foundational notions, the basic notions, in cognitive science and AI? What role are they playing in those sciences? How can we better explain them, and come up with better ways of understanding them, so that we can make progress in AI and cognitive science? To give a more concrete example: in both AI and cognitive science, people talk a lot about computation, right? Computational processes, computational models, and so on. But what is actually meant by computation?

How can we make sense of the notion of computation in these fields of research? That is one of the questions I have worked on in the past, along with representation and intelligence. All these basic concepts that people throw around but often don’t pay too much attention to: these are the kinds of things I’m most interested in.

Daly: So when you describe something as foundational, it’s those basic concepts, as you say. What does that mean: breaking a concept down into, I guess, its smallest parts?

Coelho Mollo: Right, right. Though not necessarily just looking at how people use a concept and trying to make sense of that, because people use concepts they sometimes don’t fully grasp; they just pick them up from practising science and philosophy. So it’s not just descriptive. It’s also a matter of figuring out, given a particular scientific project, say explaining how a certain cognitive process works, where the explanations are given in terms of certain computations performed by a certain part of the brain, how best to describe those processes. Is using the term ‘computation’ useful or not? Does it lead to confusion? Are there better ways of understanding what’s going on? The same applies to intelligence. When we talk about intelligence in humans, in non-human animals, and in potential artificial systems, what do we mean by that? What do we require something to do, or be able to perform, such that it counts as intelligent? How can we say whether ravens are intelligent, or whether GPT-3, the large language model, is intelligent? It’s difficult to ask those questions and get meaningful answers unless we first figure out what exactly we are trying to capture when we talk about intelligence.

Daly: Yeah, it’s interesting when you talk about comparing human intelligence, animal intelligence and AI intelligence. It makes me wonder whether intelligence means different things in those different domains.

Coelho Mollo: Right, that’s a very good question, and it’s something of an open question. Up until recently there has been a tendency, in many fields though not all, to see intelligence as something of which we are the paradigm: we are the ground level of intelligence, and everything else is measured in comparison to us.

Daly: Like we’re the benchmark?

Coelho Mollo: Yeah, exactly: what we do is the benchmark for intelligence, and if other animals and AI systems don’t meet our capacities, then they are not intelligent. If you think about the kinds of capacities that, until recently, a lot of AI was interested in, they were the kinds of things we humans take to be markers of intelligence: playing Go, playing chess, proving mathematical theorems, and so on. Meanwhile, all the things we tend to think don’t require intelligence, like just moving around, looking about, picking things up, opening doors and so forth, we ended up giving little importance to when designing AI systems and measuring their capacities. This is changing now, but it’s interesting to see, in the history of AI and cognitive science, how notions of intelligence were, and still are, strongly shaped by our view of human intelligence, which we then try to impose on other things. I think that’s problematic, and part of my work has been dedicated to coming up with characterizations of intelligence that can unify what’s going on in these different fields.

Because, as you pointed out, there is a risk that what the comparative psychologist means when they describe what ravens or cockatoos are doing as intelligent is very different from what a cognitive psychologist means when they talk about intelligence and IQ tests and things like that. There is a lot of variation in what is meant, and one important project is to unify these, to find what is common between them, because in the end they are all trying to zero in on some phenomenon in the world that these different systems, human, non-human animal, and artificial, can all be instances of. So trying to unify these fields is one of the things I work on, and I think it’s very important to have a notion of intelligence that can capture, at least to a large extent, what the comparative psychologist, the AI researcher and the cognitive psychologist are interested in.

Daly: That’s a really fascinating, overarching topic. With all these elements being drawn in, how did you get into AI? How did you get started?

Coelho Mollo: Right. To a certain extent this kind of approach, in which you look at lots of things and try to bring them together, is, at least as I see it, one of the main roles of philosophy. One of the main contributions philosophy can make is exactly to provide, as you mentioned, these more abstract, more general frameworks that put into relation fields that may not communicate with one another, and that may think they are doing different things, and then to offer some conceptual clarification, some unification, help with theoretical advances, and so on. So I was interested in this interdisciplinary view of philosophy from the beginning. But I actually started out working mostly on philosophy of perception: trying to figure out what’s going on when we perceive the world, how we can make sense of the relationship between what we experience and what’s going on in the brain, and how we can make sense of the fact that we use the information we get from perception to guide our behaviour. From there I moved more and more towards philosophy of cognitive science, and then philosophy of cognitive science and AI. Cognitive science and AI are, generally speaking, sister disciplines: they were born together, and especially at the beginning there was a lot of interaction between people doing AI and people doing cognitive science. They share many of those foundational notions, like computation and representation. So they are scientific fields that share a lot, and I became more and more interested in how, and whether, AI can shed light on cognition, and the other way around.
One thing I find very interesting when it comes to AI is that it opens up possibilities for cognition and intelligence that may be very unlike those we see in biological organisms. We are talking about systems that are very different from biological systems in many, many ways, so they may offer us answers about which other possible solutions there are, in the space of solutions, to the kinds of problems we think require cognition and intelligence.
So yes, little by little, I moved from perception to cognitive science and then to AI, but always trying to keep all of that together.

Daly: Yeah, I think something we’ve heard before in the series is that there are often fields you wouldn’t immediately think of as being linked to AI, things like art, but similarly philosophy. Like you say, there’s actually quite a lot of overlap there, in that cognitive science way of looking at it. There’s definitely a lot they can learn from one another.

Coelho Mollo: Yeah, and there are some debates and questions in AI now that are actually philosophical questions, even very old ones. A lot of the discussion in AI right now is about whether we can have systems that do intelligent things based only on what they learn from experience, or whether we need some sort of innate set of representations or capacities, where ‘innate’, in the case of AI, means hard-coded. This rehearses, in a more empirical flavour, discussions that have been going on in philosophy for centuries, in this case between empiricists and rationalists, and similarly discussions that occupy, or could occupy, developmental psychologists: to what extent do the capacities we have as humans come just from learning from experience, and how much comes from things that are there from the get-go? These are all discussions that come from philosophy and then take on a life of their own in AI and related fields.

Daly: Yeah. We’ve talked a little about how you got into AI, so I guess my next question is: where do you see things going in the future, or where do you hope some of your research could be used in the future?

Coelho Mollo: Right, well, that’s always a difficult question to ask a philosopher, because we are usually working at a more abstract, bird’s-eye-view level. Not always, but often. What I think my research can contribute is, for example, shedding light on what we require something to be able to do for it to count as intelligent. More concretely, being clearer about that also allows us to be clearer about which kinds of methods we should use in measuring and studying AI systems. One thing I’m working on right now concerns the large language models: GPT-3, PaLM, and so on. How can we understand what they are doing?
Their outputs are very impressive, but in light of those outputs, what can we say about what’s going on? Are the outputs they produce meaningful? Are the models displaying some kind of reasoning capacity? How can we test whether this is the case, and how can we even make sense of what’s going on there? These systems are very unlike anything else we see in nature. Which concepts and which explanatory tools do we need to best make sense of what’s going on in these quite novel, quite unprecedented cases? These are the things I try to help with. They are more at the theoretical level, but they can also have direct consequences for more practical issues. For example, one thing I’ve done with several colleagues that connects with this was to create a benchmark tapping into the capacity of large language models to combine concepts. If you take two concepts, sometimes putting them together means something different. A classical example in philosophy is ‘pet fish’: if you think of your paradigmatic case of a pet, it’s probably going to be a dog or a cat, and your paradigmatic case of a fish is going to be, I don’t know, a tuna. But then when you think about a pet fish, you’re going to be talking about, you know…

Daly: Goldfish or something?
Coelho Mollo: Goldfish, yeah, exactly, things like that. In this case we have a combination of two concepts that generates a different conceptual understanding, and with these colleagues we were interested in finding out whether large language models can also do that. So we came up with a benchmark for tapping into and measuring the capacity of models to do this, not only with everyday combined concepts like pet fish, but also with invented concepts, or concepts that refer to ways the world might be that are not actual, to see whether the model is able to figure out what the combined meaning is, given the contributions the parts make and the background knowledge required to make sense of them.
If you think again about pet fish, you need a lot of background knowledge to figure out that I probably mean a goldfish, and not a tuna, or a cat, or a catfish, or anything like that. You require a lot of knowledge about practices, about what’s typical and what’s not, and so forth.
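To make the shape of such a probe concrete, here is a minimal sketch in Python. To be clear, this is not the benchmark discussed in the interview: the items, the prompt format, the scoring rule, and the log_prob interface are all illustrative assumptions, standing in for whatever design and model access the actual work used.

# Hypothetical sketch of a concept-combination probe for a language model.
# Each item pairs a combined concept with a plausible referent and a
# distractor that fits only one of the component concepts.
PROBE_ITEMS = [
    {"concept": "pet fish", "plausible": "goldfish", "distractor": "tuna"},
    {"concept": "pet fish", "plausible": "goldfish", "distractor": "cat"},
]

def prefers_plausible(log_prob, item):
    """True if the model assigns higher probability to the plausible referent.

    log_prob(prompt, continuation) is assumed to return the model's
    log-probability of the continuation given the prompt; it stands in
    for whatever scoring interface a given model API exposes.
    """
    prompt = f"A typical example of a {item['concept']} is a"
    return (log_prob(prompt, " " + item["plausible"])
            > log_prob(prompt, " " + item["distractor"]))

def run_probe(log_prob):
    """Return the fraction of items on which the plausible referent wins."""
    hits = sum(prefers_plausible(log_prob, item) for item in PROBE_ITEMS)
    return hits / len(PROBE_ITEMS)

With generation-only access, one could instead sample completions and check them against the plausible referents, at the cost of noisier scores. Either way, the point is that the combined concept, not its parts taken separately, determines which answer counts as correct.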

Daly: I guess that background knowledge requires a sense of culture as well, in terms of what counts as a pet in one place and not in another, and things like that. I can imagine it’s not an easy issue to figure out. It actually links a little to one of the questions from our previous New voices in AI interviewee, relating to artificial general intelligence, and I can imagine that some of your work could help in unpicking what a generally intelligent AI is.
Our previous new voice in AI, Chris Emezue, asked about this: in the media we often see super-advanced robots, and not the Terminator kind specifically, but robots with general intelligence working alongside us. His question was: in the next 20 years or so, at our current rate of progress, do you think we might be able to reach that kind of level? And what are your thoughts on that in general?

Coelho Mollo: Right. My first thought is that it’s very difficult to make predictions in this area. If you look at the history of AI, there have already been lots of very optimistic predictions. In the 50s and 60s, people thought that within 10 years, or even fewer, we would have generally intelligent artificial systems, and of course that didn’t pan out. There have been plenty of wrong predictions in the field, and we still see some of that going on today, so I’m always a bit wary of making predictions. One thing I can say, though, is that I think we are still quite far from understanding what is needed for general intelligence. We have a quite good grasp on how to solve some tasks: those that require relatively simple reasoning over a limited amount of information, using good old-fashioned symbolic AI, and pattern-recognition tasks, like object recognition and language translation, for which we use deep learning and deep networks.
However, is any of that going to be sufficient for general intelligence? I don’t think so. Both give us part of the story of the kinds of capacities needed for intelligence, but not the full story. So we may need something fundamentally new, which might be new ways of combining the approaches we have, but might also be completely novel kinds of architectures and learning regimes. I think we just don’t know. What I’m more skeptical about is whether what we have right now is going to be enough, perhaps by just increasing scale and building bigger and bigger models.
I think there is still quite some progress that can be made by making tweaks, changing some features of the architectures, and increasing the size of these systems, but I do think we may need something quite novel to get to artificial general intelligence, and we don’t know yet what that might be. So 20 years seems too short a time for us to make that much progress; I would be surprised if we got there in 20 years. But again, I don’t want to make any firm predictions, precisely so as not to repeat the mistakes that have been made in the field in the past.

Daly: Yeah, I feel like I could ask when you think we will get there, but, like you say, there’s no good answer to that question.

Coelho Mollo: Yeah, exactly. We just don’t know, right? There are lots of things that we don’t even know that we don’t know: unknown unknowns. So it’s very difficult to make predictions, or to work out where the field should go, given the amount of ignorance we are still in. As you said, it’s very tricky to make those predictions.

Daly: Yeah. So, penultimate question: what would you like to ask the next interviewee in the series?

Coelho Mollo: A question for anyone in general, right? Since I don’t know who that will be.

Daly: Literally anything.

Coelho Mollo: I see, I see. Perhaps I would ask them how they see the relationships between the existing approaches to AI, and here I’m thinking especially of old-fashioned symbolic AI, neural networks, and embodied robotics, the kind of thing people like Rodney Brooks have been working on. I would be curious to hear opinions on whether these are completely independent approaches, just competitors that can’t really be related to each other, or whether they can be brought together in ways that might be promising.

Daly: I always say this, but people always pose really interesting questions for the next person. As always, I’d be very intrigued to see how people think about that, because that’s definitely an interesting one.

Daly: And very finally, where can we find out more about your work, online or otherwise?

Coelho Mollo: Right, so I have a website. My research is there, along with conference presentations and the public philosophy initiatives I participate in. If you just Google my name you can find it, with links to papers and things like that.

Daly: Brilliant! Cool, and as always we will have the links to that on the website as well. Thank you so much for joining us today; it was a really fascinating discussion. I’m going to be thinking about basic concepts and how to actually understand anything.

Coelho Mollo: It was a pleasure. Yeah, it was great talking to you.

Daly: And finally, thank you for joining us today for this episode. If you would like to find out more about the series, or anything mentioned, do join us on AIhub, and until next time, goodbye for now.








Joe Daly, Engagement Manager for AIhub



