AIhub.org
 

What is AI? Stephen Hanson in conversation with Michael Jordan

07 September 2021



Michael Jordan and Steve Hanson

In the first instalment of this new video series, Stephen José Hanson talks to Michael I. Jordan about AI as an engineering discipline, what people call AI, so-called autonomous cars, and more.

To provide some background to this discussion: in 2018, Jordan published an essay on Medium entitled Artificial intelligence — the revolution hasn’t happened yet, in which he argues that we need to tone down the hype surrounding AI and develop the field as a human-centric engineering discipline. He adds further commentary on this topic in an interview published this year in IEEE Spectrum (Stop calling everything AI).

Hanson wrote a rebuttal to the Medium article, AI: Nope, the revolution is here and this time it is the real thing, and the pair discuss the theme in more detail in the video below. A full transcript of the discussion follows.



This interchange was recorded on June 15th 2021.


Transcript

HANSON: Hi Michael, good to see you! So let’s get into this. Let me just state what I think you said, and you tell me where I’m wrong, if I am. It appears to me that you’re basically saying that AI should arise as an engineering discipline, one that starts from a well-defined science, the way chemical engineering grew out of chemistry. That would allow insights from the science to migrate into an engineering domain with principles of design, control, risk management, and many other good statistical quality-control ideas, which would make AI valuable and useful, something whose utility you could actually calculate, as opposed to counting the number of hidden units it has…

JORDAN: Just to slow you down a little bit there: historically, I think the good points of reference are things like the development of chemical engineering or electrical engineering, where there was an existing science and understanding, and there was an appetite to build real-world systems that had huge implications for human life. Chemical factories didn’t exist initially, and when they started to exist, I don’t think the science was all worked out, then simply applied, and it just happened. People tried to build things, they learned by doing, and sometimes it worked and sometimes it failed. So I don’t think it’s “science vs engineering”, but I do think that good engineering at a large scale and scope is a good metaphor, if not even stronger, for what’s happening now. Electrical engineering had the scope of wiring the world, bringing power everywhere, into cities and homes, and so on. Chemical engineering had the scope to build large-scale entities to manufacture and distribute products. The counterpoint is the argument that we’re building some type of human intelligence, replacing a human with this punctate object, the AI (computer), and that that kind of “intelligence” is the goal. I think a better starting place is that we’re building planetary-scale, factory-like systems with flows, values and decisions, and the question is how to make those good, how to make them successful, you know, safe and useful and so on and so forth. I believe that is a better starting place for what we’re doing in our field. And if, at the end of the day, you wanted to say that some overall network is intelligent or adaptive or whatever, that’s fine with me. Basically, the goal is not to define intelligence, but to make intelligence work in the real world.

HANSON: Right, OK, OK, so that was where I was going, I think. The contrast, though, is with what is actually happening, that is, with what people are now calling “AI”. Unfortunately, I think you’re very courageous, and maybe somewhat invulnerable to attacks, but obviously yours is the minority position in a giant academic and corporate ecology. You can just name all of our best corporations, Google, Facebook, etc., and even the little ones like Target are trying to figure out how to use deep learning to do something, whether it’s some kind of pattern recognition or, and actually I would put this in a slightly larger scope, what might be called “mapping”: mapping this to that. Of course, with that type of mapping there may be some kind of encoding and decoding, so basically there are a lot of processes involved. And in saying that, we need to acknowledge that we have no control over the designs or the architectures or what’s happening in general; it is simply ad hoc, and I think you would agree that this is what is happening at Google now. For example, I remember going to one of the most recent in-person NeurIPS conferences, I think it was 2017, and I walked up to a poster that the Google Brain people were presenting. They were basically using bags of words and doing translation, and the translation rates were phenomenal! It was really amazing. So I asked them: “How are you dealing with syntax, do you have a framework for that?” And one of them said, “We don’t know much about that.” And I said, “What, you don’t know about nouns and verbs and adverbs? Are you working with a linguist?” “Not really,” they replied. OK, so this is of course very frightening, right? Because now it’s essentially ignorance, and just ignoring the past is now part of the modeling process!

JORDAN: Well, Steve, I think this won’t be very successful in the real world.

HANSON: But it has been! People use it in all sorts of situations rather successfully.

JORDAN: I personally think that the companies we are naming here are all using the same technology, but frankly, I am enamoured of the companies whose business models try to create new markets. So, you know, maybe not Target, but certainly Amazon, Alibaba and others, in that they’re doing things that affect people and they move things around the world. And I think a place like Amazon far outstrips Google and Facebook in terms of innovation, being a little controversial here. I mean, Google did an amazing thing in the 90s with the creation of a search box. It was way better than anybody else’s, and it changed humanity.

HANSON: Fourier analysis…

JORDAN: Right. So they did that, and I don’t think it’s gotten a whole lot better. Frankly, with all that technology, a lot of the time they just point to Wikipedia. And Wikipedia is an example of technology that is at least as good as anything anyone has ever done. What innovations did they produce after that? It’s hard for me to point to much. They built an advertising model, in order to get paid, and they created Android, with, in my view, limited innovation; and they did MS Office better, in a distributed way.

HANSON: Yes I use that all the time…

JORDAN: I could keep going on, but I just don’t see these companies using AI, except as a buzzword and for “show-off” sorts of stuff, to do much for humanity. For innovations in the real world, when I look at Amazon, look, they’re innovating: they did clouds, they did Alexa.

HANSON: Well, I spend most of my time yelling at Alexa.

JORDAN: Yes, but they brought the platform into homes; whether it works well or not, it’s in the home. The innovation is that they’re bringing real things into the world and providing real value to real people. I think it is critical that they are building networks between humans on the left and humans on the right, producers and consumers; this is what changes things.

HANSON: Right, but you are talking about micro-economics, saying that these companies are creating new markets, and I get that.

JORDAN: Right right, economics, economics.

HANSON: Supply chains. What is a market? AI is part of this, of course, and again these companies, I agree, are creating new markets.

JORDAN: Think about Uber: it wasn’t thought out all that well, and they had to invent as they went along, but they did innovate, and they did a lot of AI, mapping, right? They don’t do the advertising and they don’t get the credit.

HANSON: So let’s stop there for a second. Tell me a little bit about why you do, or don’t, think DeepMind is doing useful things.

JORDAN: Well I’m not aware of the scope of all the things DeepMind is doing.

HANSON: Well, we can just focus on their reinforcement learning and their games.

JORDAN: OK, reinforcement learning. First, these algorithms already existed, and what they did was run them on massive hardware and really do massive search in game space. And so I think of these as a parlour trick. Beautiful math underneath it, but it’s just a parlour trick, and I think many of them are aware of this. It might be indicative of the fact that we will be able to do cool things. But to argue from that that we can discover the secrets of intelligence, and then solve all the other hard problems, the cure for cancer, etc., that’s a leap.
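
[To make “massive search in game space” concrete, here is a minimal sketch of plain minimax on a toy game. The moves/apply_move/evaluate hooks are illustrative assumptions of this sketch, not any particular system’s API; AlphaGo-style systems pair a far more elaborate Monte Carlo tree search with learned value and policy networks, run on massive hardware.]

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Best achievable score from `state` for the maximizing player.

    `moves`, `apply_move` and `evaluate` are game-specific callbacks
    supplied by the caller; minimax itself is just exhaustive search
    over the game tree, alternating between the two players.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state, maximizing)
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal)
    return max(scores) if maximizing else min(scores)

# Toy game: a pile of 5 stones, take 1 or 2 per turn; whoever cannot
# move loses. Depth 10 exceeds the game length, so `evaluate` is only
# ever reached at the terminal state n == 0, where the side to move lost.
moves = lambda n: [m for m in (1, 2) if m <= n]
apply_move = lambda n, m: n - m
evaluate = lambda n, maximizing: -1 if maximizing else 1

print(minimax(5, 10, True, moves, apply_move, evaluate))  # 1: first player wins
```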

HANSON: Well, well, wait a minute. They recently showed protein folding as an application.

JORDAN: Well, protein folding is a good example; in fact, that is the best example, one that will live on as a real contribution to human knowledge.

HANSON: Not like just playing an Atari game with raw pixel input.

JORDAN: Well, AlphaGo was impressive, and the Korean champion was pretty devastated by the game losses, to an intelligence from the West, big game-tree search stuff. But at the same time, it’s not like this was a brand new intelligence, as it is still talked about today. Folding, though, is a very hard problem, and it is great stuff; but you know they didn’t really solve the entire folding problem…

HANSON: Of course, that is correct. This was an improvement on a benchmark in that community, but just a benchmark. They’re good at benchmarks.

JORDAN: But that’s good science, and if they did stuff like that I would be happy.

HANSON: But calling it AI still annoys you?

JORDAN: YES that’s not AI! It’s not. It’s optimization and biological knowledge brought together in a good engineering way.

HANSON: So let’s, let’s, ah…

JORDAN: I read a phrase at some point in some blog, where the DeepMind CEO said “let’s solve intelligence and then it will solve other problems”. And that is just so wrong on so many levels. Imagine if chemical engineers thought this way: we’ll wait until we solve intelligence and then let it build chemical factories. No, this doesn’t make sense. Maybe something like deep folding is the better model, that is, DeepMind could say “we will solve protein folding”, and then contribute in important ways to the biological sciences.

HANSON: Our friend Tom Dietterich is using machine learning, and even kinds of deep learning models, in pharmaceutical research, drug discovery; see, this again is an interesting mapping problem. But let’s go back to the heart of your argument here, about what AI is and what it isn’t. For me this takes us back to the 70s. You remember the seventies? I’m sure you do…

JORDAN: Hazily…

HANSON: …when PDP was in an early phase, and Dave Rumelhart and others, and yourself of course, were trying to figure out various kinds of paradigms to use. There was this tension that you brought up in your article between, let’s say, McCarthy and Wiener, between cybernetics and whatever AI was. But there’s something else, which our friend Terry Sejnowski worries a lot about: whether or not there is brain science that could inform or inspire the kinds of AI that tend to be more biological. He sees the 1970s and 80s, with Geoff Hinton, as a kind of reification of what was happening in our trying to understand something about (1) simple parallelism, (2) associationism, (3) locality, and you can go down the list; those algorithms were very distinct from whatever else was happening at the time. Now, two of my colleagues here at Rutgers, one of whom, Jerry Fodor, is no longer with us, and Zenon Pylyshyn, spent a lot of time trashing PDP and connectionist kinds of views, partly because they thought the brain as a metaphor was like quantum mechanics or something, and this was absurd. They were sure that the brain was too complex and we were never going to understand it; what we need to understand are the feelings, the thoughts, the intentions, the reasoning, the beliefs, the stuff that makes the mind. And the mind, of course, from the psychologist’s point of view, is separate from the brain (sort of channelling Descartes). But if you take the constraints that come from your intentions, beliefs, etc., and the various sorts of behaviors you observe, from which those states of mind can be inferred, then what we’re looking at are potential constraints on what brain function or brain structure should look like. They wrote a book on that, and Zenon in particular was quite argumentative about this. He was also pushing expert systems in the seventies, which were a spectacular failure, further compounded by Adm. Bobby Inman, who had argued for a federal response to what the Japanese were calling the fifth generation of AI at the time. Remember, he created this MCC thing in Austin, Texas; it was like a hundred million dollars put into it, partly for machine translation, partly for logic systems, and partly for Doug Lenat, you remember him?, who was building knowledge systems.

JORDAN: Oh yes of course I recall Lenat.

HANSON: Well, I was one of the people who went down there too, because Bell Labs had paid a million dollars. All these corporations could pay MCC a million dollars, and then they would be able to send four people. I was one of the lucky people who went down there to listen to Doug explain to us how one day there would be a Cloud, he got that right, and you would be able to plug into it and artificial intelligence would come out of it; it would just transform whatever you were doing into AI, and it would all be happening by 2005.

JORDAN: Yes, this is just science fiction of course, and I am already familiar with the PDP etc. story.

HANSON: Sure, sure, but let me make one point here: this tension between mind and brain has been around for a long time, and someone might take your critiques of deep learning to be part of that tension. But I don’t think they are; I don’t think they have any connection to this. It seems to me you’re coming from an orthogonal view that doesn’t have anything to do with this old debate about mind and brain and all of that.

JORDAN: I think you’re right. That’s right.

HANSON: And I think you would probably also agree that you could study the brain as a neuroscientist and think about, for instance, the various skills and kinds of problem solving (implicit, explicit), the distinctions you had made; certainly you would agree that you could look at the brain and make such distinctions.

JORDAN: Yes, I learned that history too, but I’ve moved away from that as well, and of course it’s perfectly fine to work on psychological or neuroscientific theory. But I guess I would rather focus on someone like Doug Engelbart, the computer scientist in that era who tried to create artefacts that augmented human abilities and human intelligence, artefacts that again would be a boon to humanity. People like Engelbart weren’t working on AI per se, or on defining intelligence, so this was a sort of push-back against Minsky and McCarthy trying to put AI, or human thought, in a computer. PDP and the brain and so on were more of a side show; what was really important in the 70s and 80s was the creation of these artefacts and devices, like the search engine, things that eventually changed the world and augmented human capabilities on many levels. And there was Wiener’s cybernetics, which I think is similar to the intelligent-infrastructure (II) view I have. And so there is this third thing, like Engelbart: hey, we’ve got computing, let’s make it bigger, safer, better, and I think that is still going on. If we think it through, we should try to build large-scale systems, like transportation and financial systems, that make everyone’s life better. This also opens you up to a larger view of learning: learning is not just what the brain does. Economies learn, for example, and although that is not brain science or psychological science, economies have their own forces and dynamics and structure; they can be adaptive, and that can last for decades or centuries.

HANSON: Sure, sure. But that’s a very complex kind of solution…

JORDAN: When I moved away from neuroscience and psychology, I didn’t look back and say that was crap. It’s just that there is a larger picture that should be brought together with these other views, and that is more cybernetics and Engelbart. But the part about deep learning and the brain is that, you know, I was next door to David Rumelhart, who was developing backpropagation at the time. After looking at control systems, I could see he had reinvented the chain rule, and in effect the co-state equations. That machinery was invented in control theory in the 60s, with no reference to, or interest in, the brain; it was just linear algebra and linearized systems for making better control systems. Specifically, Dave was sort of embarrassed at some point that he had to have signals going backwards, and of course the brain doesn’t do that. But he wanted to continue working on it, because it had so much promise. Lo and behold, 30-40 years later, the associative memory didn’t pan out, and the big deal was backpropagation and gradient descent, which had nothing to do with brain science.
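
[What “reinventing the chain rule” amounts to can be shown in a few lines. A minimal sketch, assuming a two-layer network with a tanh hidden layer, squared-error loss and a single training pair; none of these choices are Rumelhart’s exact setup. Note the error signal literally travels backwards through the forward weights.]

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # input
t = rng.normal(size=2)            # target
W1 = 0.1 * rng.normal(size=(3, 4))
W2 = 0.1 * rng.normal(size=(2, 3))

for step in range(100):
    # Forward pass.
    h = np.tanh(W1 @ x)           # hidden activations
    y = W2 @ h                    # linear output
    loss = 0.5 * np.sum((y - t) ** 2)

    # Backward pass: each line is one application of the chain rule.
    dy = y - t                    # dL/dy
    dW2 = np.outer(dy, h)         # dL/dW2
    dh = W2.T @ dy                # the "signal going backwards"
    dW1 = np.outer(dh * (1 - h ** 2), x)  # through tanh's derivative

    # Gradient descent on both weight matrices.
    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2

print(f"final loss: {loss:.6f}")  # shrinks steadily towards zero
```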

HANSON: Well, I knew Dave Rumelhart pretty well too, and he and I would discuss a lot of the theoretical aspects of backprop and other algorithms. He was certainly not focused on control theory and optimization, but rather on: how do I transmit information in a parallel system and do some kind of mapping between inputs and outputs?

JORDAN: No, no, Steve, I agree; of course Dave was a psychologist, a neuroscientist, and that was his focus. But what I am saying is that backprop remains his big achievement to this day.

HANSON: Right, but like you say, there is more than one way to invent the wheel, and the wheel gets used by lots of people. The point is that wherever the control theorists were coming from, Dave was coming from a totally different direction, and there was convergence on a method that turns out to be very useful in lots of contexts. Now, due to Geoff and other people, it has been scaled up in a ridiculous way, to phenomenal effect, and I know you’d agree that it is phenomenal. I think you’d also agree that you have the same concerns I have, like: what the hell are they doing? It doesn’t make any sense. They have more parameters than anything, the regularization of the thing sucks, and how does it work?! They don’t know, and nobody knows.

JORDAN: It bothers me less than I think it bothers folks like you. Consider: if you took nearest neighbour and fed it lots of data, it would pick up a lot of rough statistics and patterns that we wouldn’t understand, and whatever it did, we wouldn’t understand that either. Maybe it should be studied, maybe not; it’s an artefact at some level, it finds features, it builds an interesting map and so on. But of more interest is: how do we put systems like this into society, where they interact with humans, where there are strategic issues, and how do we understand their economic value? And then there’s the semantic stuff you were talking about: how do we get these systems to actually start to reify the meaning in the world, rather than just mimic humans in different contexts, which is really all they are doing? Focusing on how these layered systems do what they do is of no interest to me, but I am glad people are working on it.
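
[Jordan’s nearest-neighbour point in a runnable form: a minimal sketch with invented synthetic data. The memorizer below classifies well given enough examples, yet contains nothing one could point to as an “understanding” of the underlying rule.]

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 2))            # "lots of data"
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # hidden rule: sign agreement

def predict(q, X, y):
    """1-nearest-neighbour: no parameters, no model, just memory."""
    return y[np.argmin(np.sum((X - q) ** 2, axis=1))]

queries = rng.normal(size=(200, 2))
truth = (queries[:, 0] * queries[:, 1] > 0).astype(int)
acc = np.mean([predict(q, X, y) == t for q, t in zip(queries, truth)])
print(f"accuracy: {acc:.2f}")  # high, yet nothing here "understands" the rule
```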

HANSON: Yes, I understand, and of course I am not trying to convince you to work on this, trust me. But what I am trying to get at is what the overall goals of AI are, and what it actually is right now. We can’t deny that whatever is happening with deep learning and these architectures is something important.

JORDAN: Yup.

HANSON: And I think there are a lot of emergent properties that come out of these systems which we cannot predict from covariance matrices or a bunch of probability distributions; there is something else going on. I’m not saying it’s magic! I’m not saying it can’t be reduced to a bunch of differential equations or probability distributions, of course it can, but there is something very biological in the way they bootstrap themselves. I am, unfortunately, a brain scientist now; I have two scanners and look at tens of thousands of brains a year. And brain structure is still kinda mysterious.

JORDAN: Kinda!?

HANSON: Yes, kinda. I mean, we have some good idea of what the back half of the brain is doing in terms of neural transmission and function, but the front half, like the prefrontal cortex, is basically spaghetti!

JORDAN: So what has been figured out about those backward connections to the visual cortex?

HANSON: Yes, well, that’s right. The feedback is coming from parietal and prefrontal areas and other areas, so it looks like a recurrent network tracing back and forth, alternating in sequence almost like some sort of refinement system. Something is taking the raw visual crap that the retina has torn into bits, which ends up in V1 in columns that tell you which eye the information came from (ocular dominance) and projects to little swirly angles. How does that become a picture of a face? That should really be the question: how do you get from six cortical layers to anything intelligent? Here’s a real mismatch between DL and the brain: there are six layers, not six million, but SIX!

JORDAN: And the wires underneath those layers are amazingly densely connected.

HANSON: That’s right, that’s right, and many folks are trying to map this like the genome, the connectome, but their maps are so complex that it’s hard to see how meaningful circuit structure will be found.

JORDAN: Just to state as clearly as I can, in a format like this, I believe this is all wonderful, and a major part of the era. I think it might take decades or centuries to get that Newtonian kind of understanding and it will change the world, but there are other things in our era equally important.

HANSON: But we are in a phase transition with regard to what AI is. So I think there are two things that can happen here. One is that the DeepMind folks, terribly smart people working on these kinds of things, come up with some simple principles that allow us to control this, and it reduces down to something we are all familiar with.

JORDAN: You went a little quick there. I would love it if some mathematical equation were found that one could put in a box and it would start working. But I need something more concrete here: do you think that if the mathematical equations are found, you could put them in a box and you would have AI?

HANSON: No, no, I don’t think so. I think what we’re looking at is a large dynamical system that is over-parameterized, somehow regularizes itself, and then behaves in predictable ways.

JORDAN: But now we are looking forward, not backward; it’s not the search for some magical equation that can be AI.

HANSON: Right, so Doug Lenat had a line that he used often in these kinds of contexts: “I am looking for basically 70 lines of code, which once found will be AI”.

JORDAN: Ok, right. No no.

HANSON: So the hyperbole doesn’t stop at the beach. It goes all the way back.

JORDAN: Yeah, yeah that’s right.

HANSON: And we never did find the 70 lines of code, and we aren’t going to find the 70-trillion-weight thing that all of a sudden becomes intelligent; we know that’s going to slide off the cliff at some point. Now, does it hit the ground and break apart again, so that we essentially have a third backlash against neural networks, really against the unbridled, thoughtless “let’s do a learning rule, change some weights, and see what happens” kind of research? Or will there be some mathematicians who somehow rationalize what is happening?

JORDAN: Hmm.

HANSON: Now I have a small footnote here. I’ve been going to the Institute for Advanced Study (IAS, Princeton), and they decided to have a new focus on deep learning, in particular the mathematical rationalization of deep learning. In fact, they hired an Italian fellow, a mathematician, you may know him. He had written some papers on deep learning and put a group together, mostly postdocs. So they started hosting workshops, receptions and speakers on the mathematics of deep learning, and I went to some of them. The first one had a number of excellent mathematicians who would say, “I don’t see how this works, but here’s gradient descent and how it works, and some of the problems.” The first five hours basically could have been a PDP meeting from the 80s! Well, they ended up doing this for a few years, and then it seemed to collapse and wasn’t going anywhere. But maybe there are people working on this still.

JORDAN: So, you’re right, I don’t think about this that much. For example, I think about variational inequalities, Chebyshev equilibria and gradient descent/ascent, where multiple agents can find regimes in which they compete or cooperate. I don’t care if that is what a brain might do, or what a collection of people might do, or whatever; I care that it is a way to organize a system so that good things happen in the system. So the fact that there are connections between gradient-based dynamics, dynamical systems, optimization and sampling, in the context of multiple agents and stable equilibria, is super exciting, and a lot of the mathematics comes in there. If there is an AI winter, that stuff is not going away; it is real. It’s part of our era to work on things like that.
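
[A minimal sketch of gradient descent/ascent for two competing agents, on the toy bilinear game f(x, y) = x·y: x is the minimizing player, y the maximizing player, and the equilibrium is (0, 0). The game, step size and the extragradient correction are illustrative textbook choices, not Jordan’s specific formulation; with plain simultaneous updates the players would spiral away from the equilibrium instead.]

```python
# Two agents on f(x, y) = x * y: x descends, y ascends.

def grad(x, y):
    # df/dx = y, df/dy = x
    return y, x

x, y, lr = 1.0, 1.0, 0.3

for step in range(200):
    # Extragradient: probe half a step ahead, then update from the probe.
    gx, gy = grad(x, y)
    x_mid, y_mid = x - lr * gx, y + lr * gy
    gx, gy = grad(x_mid, y_mid)
    x, y = x - lr * gx, y + lr * gy

print(f"x = {x:.4f}, y = {y:.4f}")  # both ~0: the stable equilibrium
```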

HANSON: So, so, let’s…

JORDAN: Again, I am not opposed to AI as an aspiration, even though I lost the terminology battle and the PR will pick that up. But I do find it weird that Yurii Nesterov, who has done miraculous work in optimization, and Brad Efron, who has done miraculous work on regularization, get called AI!? Are they going to suffer an AI winter? I don’t think so.

HANSON: So as we know, ignorance is bliss.

JORDAN: So I don’t have any problem with AI per se, but a word I don’t like is “AUTONOMY”.

HANSON: Oh nice! OK, why not?

JORDAN: Intelligence we can argue about; no one really knows. I think there is an economic model for it, and you may think the way to study it is babies, or monkeys; that’s all good. Learning is maybe a bit hard to pin down, but easier than intelligence. But “autonomy” comes out of the mouths of too many people, this idea that we are supposed to build autonomous systems. It’s “look Ma, no hands”: I was so smart, I was able to build a system that can do everything itself, and that is intelligent. And that leads to the idea that only a few people get to work on it; it can’t be the whole world, it has to be a few people, like a few people at DeepMind, or whatever. The problem is you don’t want a bunch of autonomous systems running around. We want things that communicate and cooperate. I think in one article I talked about the air traffic control system: we could build a bunch of autonomous planes that decided what they wanted to do, and hope that they would interact well, but that would be totally wrong and would probably lead to chaos.

HANSON: So let me challenge you a bit on this: suppose we thought of the traffic system as a market, and we had so-called autonomous cars.

JORDAN: Not autonomous!

HANSON: I said so-called autonomous.

JORDAN: Ok, so-called autonomous.

HANSON: People talk that way; I bet we could bring Elon Musk here and he would say that. You could then have a marketplace where cars negotiate and do simple packet-passing kinds of stuff: “You don’t hit me and I won’t hit you.” Which is impossible now. As far as I can tell, the more Millennials and Gen-Z there are on the road, the more dangerous it is for Boomers like me driving, as they come in too fast and have no risk management at all. Hey, you could die in this thing!

JORDAN: The “autonomous” car people understand that if you got rid of all the humans, you could build a perfectly good system tomorrow.

HANSON: That’s right, so I guess they wouldn’t be autonomous, as they would be communicating in this network, a big grid that is basically trying to (1) increase flow, (2) decrease accidents, and (3) make some revenue for whoever owns the grid/traffic system. So there is a sense in which the micro-economics would solve the traffic/car problems. But the one thing that prevents this is that young boys want cars, maybe young girls too, but certainly young boys. It’s a powerful thing, tied to a sense of entitlement and identity, and it has all the bad features that social media has.

JORDAN: But a car is a real-world phenomenon, where social media is not: you can copy as many bits as you want, which is why Trump’s voice could be amplified, but a car has a limited radius of danger.

HANSON: That’s right

JORDAN: But look, I don’t believe that micro-economics can solve everything. The framework is that there is scarcity: I want to go down the street and you want to go down the street, and there is not enough room for both of us. So scarcity is the issue, and the question is how we deal with it; economics is the field in which we study that scarcity. How do you deal with it? Well, it involves utility and context. Utility is not absolute, it depends, and as soon as we say “it depends”, you’re back in the world of a market: based on needs, we have pricing and economic tools to decide what we do next given other people’s actions. And if you think there will be bad actors, well, there are bad actors in all kinds of markets, and their actions are tamped down by the market; you realize the bad actor can only have a limited effect. We can get the market to incentivize the bad actors to do the right thing, or be eliminated somehow. If we don’t start using this type of language, we will be stuck talking about autonomous systems without any way to deal with them.
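
[A minimal sketch of the pricing idea Jordan describes: scarcity resolved by a price. The congested road, the linear demand curve and the numbers are invented for illustration; the update is the textbook tâtonnement rule, raise the price while demand exceeds the scarce supply.]

```python
CAPACITY = 100.0  # the scarce resource: road space for 100 cars

def demand(price):
    # Hypothetical demand curve: 300 drivers at a toll of 0, fewer as it rises.
    return max(0.0, 300.0 - 40.0 * price)

price, rate = 0.0, 0.01
for step in range(200):
    excess = demand(price) - CAPACITY   # positive means congestion
    price += rate * excess              # raise the toll under scarcity

print(f"clearing toll: {price:.2f}, demand: {demand(price):.1f}")  # 5.00, 100.0
```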

HANSON: I agree, but where there is economics there are politics, capitalism, communism and socialism. There’s a floor and a ceiling.

JORDAN: Let’s be careful here: where there’s religion, or theatre, there is politics. I think economics, because it’s powerful, tends to get connected to money, politics, etc.

HANSON: No, no, no, I wasn’t going there.

JORDAN: Yes, you weren’t going there, but people will.

HANSON: Sure, but that’s not my point, and that’s frankly why it’s bad to randomly cross politics with other systems. But here’s the thing: in a capitalist system there is a floor and a ceiling. One person can be a billionaire and someone else can be homeless. In a perfect market these bounds arise naturally, and the question, both in economics and in AI, I think, is: should they be modulated? And if so, how?

JORDAN: So let’s be clear: a market is not one kind of thing. Let’s reset our minds here a bit. You could have a kidney-matching market, I need a kidney and they are out there, and you could have regulations and setpoints, and you should.

HANSON: So, right, I get it: this is not a philosophical thing, and you can introduce good engineering principles into market control.

JORDAN: Right.

HANSON: So let me change topics again for the last few minutes. Our old friend Rich Sutton just published a paper in the AI journal which again seems to be in conflict with your views on AI. The title was, ah… “REWARD IS ENOUGH FOR GENERAL AI”. Seems argumentative, but this is the old Sutton/Barto agent/actor system, with policy search and limited planning and goal setting, and then value search set up through an RNN, so it has memory, predict-ahead capabilities, etc. And it does seem to make sense, and is plausible as a system for an autonomous robot that could negotiate with us, say 20-30 years from now: the semi-autonomous robot can bring us a drink, or find and bring my walker so I can move around, etc., basically doing the kind of care-giver support that a nurse or other third party would have to be paid to provide. So the boomer society would basically hand over control to these autonomous systems that we need to negotiate with, creating this new market, in your terms.

JORDAN: Well, there are already external devices that exist now, like Google: I can’t remember your name, I can’t remember the capital of Afghanistan, and so the search engine extends my memory, very much in the Engelbart and Wiener tradition.

HANSON: Engelbart invented the “mouse”, right?

JORDAN: Right.

HANSON: So that is exactly what you have been talking about. And if you notice, children tend to interact with computers and other things by pointing and touching from five years old on; the mouse is a perfect and natural extension of something we already do.

JORDAN: Right, right. If you look at Amazon, were they smart? Yes, but they started with a business plan and worked back from there. And Engelbart started with what the human needs are. I start with the human need, rather than with an autonomous robot that may or may not solve problems, where things might emerge. I’m just not comfortable with autonomous robots that might go “haywire” and do who knows what.

HANSON: And Don Norman was one of the early proponents of this “human-centered design”.

JORDAN: Yes he was.

HANSON: And he proposed that devices, things needed to adapt to us, not the other way around.

JORDAN: Yes, that’s right.

HANSON: And he had a lot of great examples in his book. I think it’s insightful. But one of the things that is a concern, I remember Gerry Tesauro when he was working on his backgammon system. I said, “Gerry, where did you get all the labelled examples?” “Well,” he said, “it’s just playing against me.” “So wait, are you just an excellent backgammon player?” And he said, “Yes, I am pretty good!” So I said, “But what happens when a human who is better than you comes along and can beat you, and therefore your backgammon neural network? Doesn’t neural-network gammon then become sort of boring?” So he went back to IBM and, not due to me per se, came up with a GAN kind of system in which he had two networks play each other for a month or so (this was the 80s), and he wrote to me one day and said, “It’s beating me now! And I don’t know how it’s doing it!” I think this is an important insight about emergence which I want to save in this discussion. Certainly there are strategic aspects of backgammon as a game, and the emergence here is special. I just don’t believe we can assume that getting the three brightest Nobel Prize winners, or the three brightest Turing Award winners, together and saying “you guys figure this stuff out” will work.

JORDAN: No, I agree, sounds like a bad idea.

HANSON: Right.

JORDAN: Another favorite example of mine that is related to this involves high-jumping. When I was a kid I would do high-jumping, and I got pretty good although I wasn’t a tall kid. You would run up to the line and barrel-roll, like everyone else, and thousands of people would do this. Then finally, in the wisdom of crowds, this guy Fosbury (of the Fosbury “flop”) simply went over backwards, and realized this was an improvement over the normal way to do the high jump. So within the wisdom of crowds there is probably also the wisdom of creativity in crowds.

HANSON: So I think this is a good place to stop because I think you and I generally agree on everything at this point! Ha! Probably not! But again Michael, thanks for chatting. We should do this again.

JORDAN: Right. Great talking. I enjoyed it.

Michael I. Jordan is Pehong Chen Distinguished Professor, Department of EECS, Department of Statistics, AMP Lab, Berkeley AI Research Lab, University of California, Berkeley. He has worked for over three decades in the computational, inferential, cognitive and biological sciences, first as a graduate student at UCSD and then as a faculty member at MIT and Berkeley. One of his recent roles is as a Faculty Partner and Co-Founder at AI@The House — a venture fund and accelerator in Berkeley. This fund aims to support not only AI activities, but also IA and II activities, and to do so in the context of a university environment that includes not only the engineering disciplines, but also the perspectives of the social sciences, the cognitive sciences and the humanities.

Stephen José Hanson is Full Professor of Psychology at Rutgers University, Director of the Rutgers Brain Imaging Center (RUBIC) and an executive member of the Rutgers Cognitive Science Center. He has been Department Chair of Psychology at Rutgers, Head of the Learning Systems Department at SIEMENS Corporate Research, and a research scientist in the Cognitive Science Laboratory at Princeton University. He has held positions at AT&T Bell Laboratories, BELLCORE (AI and Information Sciences Department), SIEMENS Research, Indiana University and Princeton University. He has studied and published on learning in humans, animals and machines. He was General Chair (1992) of the Neural Information Processing Systems conference and was elected to the NIPS Foundation board in 1993, where he remains on the Advisory Board. He was also a founding member of the McDonnell-Pew Cognitive Neuroscience Advisory Board, which over more than a decade helped launch the fields of cognitive neuroscience and computational neuroimaging.


Watch other interviews in this series:
What is AI? Stephen Hanson in conversation with Richard Sutton


