AIhub coffee corner: AI and consciousness

04 May 2022




The AIhub coffee corner captures the musings of AI experts over a short conversation. This month, we get stuck into AI and consciousness.

This topic has long been discussed, and especially so recently, with one tweet in particular sparking a debate online.

Joining the discussion this time are: Tom Dietterich (Oregon State University), Stephen Hanson (Rutgers University), Sabine Hauert (University of Bristol), Holger Hoos (Leiden University), Sarit Kraus (Bar-Ilan University) and Michael Littman (Brown University).

Stephen Hanson: So, the topic of consciousness has come up a lot recently in discussions on the Connectionists mailing list. There’s a lot of consciousness talk around Yoshua Bengio’s story – this deep learning 2.0. This area of cognitive science was pretty much wiped out in the first five years of NeurIPS [Conference on Neural Information Processing Systems]. Neuroscience and cognitive science were at first two of the most popular categories; then algorithms and architectures became the most popular category.

Tom Dietterich: This discussion about consciousness really shows that the community has a hunger to talk more about cognitive science issues, even though they are not doing research on it. It’s a separate conversation around how NeurIPS, or another conference, can bring cognitive science back into the fold. Many machine learning people have an interest but no knowledge.

Stephen: Certainly Yoshua is doing that. His plenary talks are a pretty clear clarion call back to cognitive science, and consciousness is one of his big issues. And it does appear that people are paying attention to this. I mean, implementing an LSTM or a recurrent neural network of your favorite kind probably isn’t the same thing as consciousness, but who knows.

Sabine Hauert: I think one of the issues, at least through the discussions online, was why are we putting the focus on consciousness? It slightly distracts from the capabilities of AI. I think there was the hype factor associated with it as well. Is the discussion worth having now or is it a sidetrack in terms of what we should be focussing on in AI?

Tom: I don’t know. I mean any term, whether it’s “knowledge” or “consciousness” or “awareness,” that implies agency has always been problematic in AI. Allen Newell, in his first presidential address back in 1980, took up that discussion of what knowledge is. All of these things have a flavor of being attributed by an outside observer rather than being things you can point to in the code. I mean, he was trying to argue for their physical reality. But, I didn’t answer your question!

Sabine: Any other thoughts on consciousness?

Michael Littman: Consciousness is a useful programming methodology where a system reflects on its own processing. If you’re trying to make decisions in the world, that’s actually a useful thing to do because your decision-making itself is relevant to the success of your decision-making. Are we seeing that in these algorithms at this point? I don’t know. I mean, certainly people are talking about this notion that everything is conscious to a degree. There’s not a bright line, like once you cross this line you’re conscious and before that line you’re not. So, trees are a little conscious, and notebook paper is a little conscious, and so, by that thinking, yeah, these networks are a little conscious too. But, can we say that they’re more or less conscious than a pencil?
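Michael’s framing of consciousness as a system reflecting on its own processing can be made concrete with a toy sketch. The Python below is purely illustrative, not something proposed by the panel; the functions and the deferral threshold are assumptions made up for this example. The agent’s “reflection” is reduced to monitoring the uncertainty of its own decision distribution and deferring when that uncertainty is too high.

```python
import math

def softmax(scores):
    """Convert raw action scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    """Shannon entropy of the decision distribution: high means uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def decide(scores, defer_threshold=1.0):
    """Pick an action, but first 'reflect' on the decision process itself:
    if the distribution over actions is too uncertain, defer instead of acting."""
    probs = softmax(scores)
    uncertainty = entropy(probs)
    if uncertainty > defer_threshold:
        return None, uncertainty  # the reflection step vetoes the decision
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best, uncertainty

# One action clearly dominates: the agent acts.
print(decide([4.0, 0.5, 0.2]))   # -> (0, ~0.23)
# Scores nearly tied: the agent notices its own uncertainty and defers.
print(decide([1.0, 1.1, 0.9]))   # -> (None, ~1.09)
```

The toy’s only point is that the agent’s decision process is itself an input to its decision, which is the functional role Michael describes.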

Tom: Wouldn’t you need a decision to be happening? In your account, you’re saying that there’s a functional role for consciousness and it’s to inform decisions, presumably in real time or very close to real time, which is why you need this continuous awareness.

Michael: Which is to say, what? You would say that a pencil is not conscious because a pencil is not making decisions, and therefore the network is not conscious because the network’s not making decisions. It’s just a thing. You can push on it and something happens, just like a pencil. You can push on it and brilliant things could come out, though that never seems to happen when I push on a pencil, so it’s somehow a little bit in the eye of the beholder, or the hand of the user.

Stephen: A lot of the context for this in the Connectionists discussion was GPT-3 or GPT-X or GLaM or GLUM or whichever one it is these days. Certainly people like Geoff Hinton believe that GPT-3 is some kind of remarkable thing that’s happened. I kind of suggested it was a big phrase structure blob that swallowed Wikipedia, and then if you poke it with the right similarity probe you get out something that looks similar and you go “oh, that’s neat”. But he quickly corrected me and said “I think you’re wrong, I think it’s much more than that”. A lot of people are basically arguing that GPT-3 can pass certain kinds of consciousness tests, and that’s where this gets interesting. I think it is more than a pencil, and I do believe that being aware of your decisions and verbalizing them is really, from a psychological point of view, the thing that underpins most theories of consciousness. Therefore, you have some explicit view of this. So, if you hit a really good tennis shot with your tennis racket, you can’t really verbalize that to people and you don’t have a conscious awareness of what you did. You just know how to do it. So there’s the sense in which you know how to do things and the sense in which you explain things. We do things all the time, and sometimes in a very highly skilled way, and we can’t really say what we did and why we did it.

Holger Hoos: I have one point that I think might be relevant, and that is: do we believe that animals have consciousness, or do we say that only humans have it? As soon as we ascribe even just a glimmer of consciousness to a dog, a cat, a horse, a dolphin, a blue whale, I’m afraid this whole line of reasoning about explanations and natural language completely goes out the window. One of the things that I do is play music, and this is an area where you have a really hard time, even harder than in tennis, explaining why certain things work and others don’t, and why the music has certain effects. Nevertheless, I can tell you my most conscious moments are when I am playing music, and I’m sure you’ll find lots of people who would say something similar. So, I think this obsession with language is a very one-sided view of this whole phenomenon. You can see how wrong one can go when equating natural language abilities or natural language interaction with intelligence, consciousness, or anything related to that, by going back more than 40 years to the days of ELIZA. A lot of people who used that system ascribed some sort of consciousness and intentionality to it, yet we know exactly what it is, we know that it’s much simpler than GPT-3, and it was meaningless to ascribe these properties to it. That is a lesson that we should not forget.

Stephen: I read a paper recently that said “my cat understands at least 25 words”. I think at least half of them have to do with food! The fact is, animals probably do have some consciousness. The question is not about the consciousness, it’s about the way memory systems work and the way you can extract information from them. My cat can’t talk to me and say “I really don’t like the cat food here, could you buy some different cat food please?” But he expresses it in other ways. One of my favorite examples is when he gets up on his hind legs and does this motion over the food, the burying motion a cat makes after a bowel movement. That’s actually a pretty clear example of a cat communicating, and he’s clearly conscious that I’m going to understand. So I agree with you, consciousness is a thin reed with regard to language. Again, we’re just back to the GPT thing, where it’s swallowing Wikipedia and encyclopaedias or something and then it talks to you. A lot of people are getting excited about it having conscious awareness. There’s an example that’s been floating around: if you say to it “draw me a picture of a hamster with a red hat”, the thing draws a picture of a hamster with a red hat. There’s a whole group at Google trying to make this thing draw a hamster with a red hat, for example. Then there are the reinforcement learning people, who believe they’re doing artificial general intelligence, which obviously is going to involve consciousness as well. There’s also a belief that reinforcement learning and decision making are sufficient if you have some kind of duality of the memory system, between something that’s skill based and something that is declarative. Personally, I don’t think so.

Michael: I think Steve and Holger’s disagreement was maybe that it’s not language, it’s communication. And I don’t think that GPT-3 is communicating. But one thing that really shocked me is that I remember as an undergrad learning about the Chinese room argument, which basically says “you can’t make a machine conscious because it’s just pushing symbols around”, and I was like “that’s ridiculous, of course if you push the right symbols around in the right way you’re conscious, that’s just silly”. But now I find myself thinking about GPT-3 in very much the opposite way. It’s like “no, of course it’s not conscious, it’s just moving symbols around”. I think the issue is specifically that it’s not trained to communicate, it’s just pushing symbols around to make predictions. It’s amazing how it does it, but that doesn’t make it conscious.
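To make “pushing symbols around to make predictions” concrete, here is a deliberately crude sketch: a bigram counter in plain Python, made up for illustration (GPT-3 is of course a vastly more sophisticated neural network, not a lookup table). It predicts the next word purely from co-occurrence statistics, with no representation of meaning and no intent to communicate.

```python
from collections import Counter, defaultdict

# A toy 'language model': count which word follows which in a tiny corpus,
# then predict the most frequent successor. Pure symbol manipulation;
# no representation of meaning, no intent to communicate.
corpus = "the cat sat on the mat the cat ate the food".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the symbol most often seen after `word` in training."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (its most frequent successor)
print(predict_next("cat"))   # -> 'sat' (ties broken by first occurrence)
```

Probing it returns something that looks plausible, echoing Stephen’s “similarity probe” description, but nothing in the system is trying to tell you anything.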

Sarit Kraus: My question is: OK, why do I care if it is conscious or not? Will it improve neural networks and their capabilities, or not? That’s what I think we should focus on.

Sabine: That view was expressed a lot online too, a sort of “who cares that the focus is consciousness?” sentiment.

Tom: I think a lot of the issues here are about the history of philosophy. There is a sense that consciousness was a requirement for being a moral agent, because you need to be aware of your actions and have done them deliberately, and therefore you’re responsible for them. And that leads you down this rabbit hole of “well, if the computer’s conscious then can we turn it off, or does it have rights?”, and this is of course an argument that’s made for animals. Marvin Minsky would say it’s another suitcase word: it has a whole bunch of different things in there, and I agree with Sarit. The question is, which of those are functionally important for intelligent behavior? Which of them are important for being an agent? The thing about GPT-3, and this has been obvious in AI since I was a graduate student, is that you can build a knowledgeable system that is not intelligent. Wikipedia is a very knowledgeable system, if you’re willing to interpret it, and that’s true for GPT-3. What’s interesting about GPT-3 is that it has evidently learned extremely good internal representations, and it’s been combined with vision, and that’s the win, not really its language ability. Now people are looking at robotics and sound and so on. The result will be a representation that integrates language, vision, sound, and physical interaction. That should be very powerful, but it is not related to consciousness.
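Tom’s closing point can be sketched schematically. The toy below uses untrained random projections in Python with NumPy; real systems learn such encoders end to end from paired data, and nothing here reflects any particular architecture. It only shows the shape of the idea: encode each modality to a vector, then fuse the vectors into one shared representation.

```python
import numpy as np

# Schematic sketch of a representation integrating several modalities:
# each modality is encoded to a vector, and the shared representation is
# a nonlinear projection of their concatenation. The encoders here are
# untrained random projections, purely for illustration.
rng = np.random.default_rng(0)
DIM = 8  # size of the shared representation (arbitrary)

text_proj  = rng.normal(size=(DIM, 16))   # stand-in text encoder
image_proj = rng.normal(size=(DIM, 32))   # stand-in image encoder
audio_proj = rng.normal(size=(DIM, 24))   # stand-in audio encoder
fuse_proj  = rng.normal(size=(DIM, 3 * DIM))

text_vec  = np.tanh(text_proj  @ rng.normal(size=16))
image_vec = np.tanh(image_proj @ rng.normal(size=32))
audio_vec = np.tanh(audio_proj @ rng.normal(size=24))

joint = np.tanh(fuse_proj @ np.concatenate([text_vec, image_vec, audio_vec]))
print(joint.shape)  # (8,): one vector integrating all three modalities
```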


