AIhub coffee corner: Can we avoid another “AI winter”?

13 March 2020




The AIhub coffee corner captures the musings of AI experts over a 30-minute conversation. This edition focusses on the state of the AI research landscape amid claims from some quarters that we are on the cusp of another “AI winter”. Our seven experts discuss the historical context to these claims, their feelings about the position of AI research now, and their expectations for the near future.

Debates on the impact of AI and its potential for deployment across a wide variety of applications have been taking place for many years. These two articles, which present opposing arguments, helped frame our coffee corner discussion:
  • Artificial Intelligence — The Revolution Hasn’t Happened Yet, by Michael Jordan.
  • AI: Nope, the revolution is here and this time it is the real thing, by Steve Hanson.

Involved in the discussion for this edition are: Tom Dietterich (Oregon State University), Steve Hanson (Rutgers University), Sabine Hauert (University of Bristol), Michael Littman (Brown University), Michela Milano (University of Bologna), Carles Sierra (CSIC) and Oskar von Stryk (Technische Universität Darmstadt).

Steve Hanson: I think that the two pieces [see above for links] frame the tension in the field. They might help people to think about what is hype and what isn’t hype, and what actually is happening. The historical context is important here. People’s memories typically go back two to five years, which I think is a problem.

Oskar von Stryk: Just commenting on the five years: even recently I’ve heard PhD students working in AI ask why they should look back more than five years.

Oskar: Talking about an “AI winter”, I think there are three possible interpretations:
1) It could be cooling down of overheated expectations,
2) It could be funding going down,
3) Or, it could be lack of progress.

Steve: I think it’s usually defined as number two – when funding dries up in all directions. Right now, if you look at DeepMind and deep learning in general, it’s going in many different directions – healthcare, military, it’s just everywhere. So that means, just by chance, some of it is going to fail. That’s going to create very bad branding and will make other kinds of things look suspect. But once government funding starts to dry up – that’s a serious biomarker.

Sabine Hauert: You were arguing that we’re not approaching an AI winter weren’t you?

Steve: Yes, I’m the optimist. And Geoff Hinton, obviously, is an optimist, although he doesn’t like just throwing many millions of hidden layers together to do something. There is still very little theory underlying all this deep learning – it’s very difficult to understand why something works.

Sabine: So, why is it different from the 1980s?

Steve: In the ’80s the issue was that there wasn’t enough data, the neural networks were ridiculously small, and the problems were more like hobbies. Often, serious problems weren’t being solved.

Tom Dietterich: I agree. I’ve had interesting conversations with a colleague in industry. He was saying that inside companies, the big risk is that the first system they build is one where the company already has long experience. He used the example of Google: when they applied deep learning to speech recognition, the engineers already had 20 years’ experience with speech signals, and they could just drop deep learning into an existing software ecosystem and it just killed the problem. Whereas when company management then asks them to go and work on, say, self-driving cars, for which they have no experience of the problem and no software infrastructure, it’s going to take a lot longer to make a useful contribution. He was worried about corporate funding cycles and whether companies could stick with it long enough to get some success.

Steve: And, in the meantime, accidents might happen with that kind of technology [self-driving cars].

Oskar: My impression is that many people think that deep learning is the hammer to solve all problems.

Tom: I would turn it around and say we have a new hammer; we don’t know what it’s good at, so we’ll try it on everything. That’s the positive version.

Oskar: I think we can already see some of the limitations; it’s not going to solve all problems. Many people think that AI is just deep learning and neural networks, but of course it’s much more. I think that many different approaches will gain much more attention in the coming years. Combining approaches will be essential if we are to crack problems such as self-driving cars.

Tom: An area that I think is very interesting is the interface between machine learning and software engineering, because machine learning tools are just one component of larger systems in the real world, and they pose new software engineering challenges. There was an interesting recent episode of TWIML (This Week In Machine Learning) in which they interviewed Jordan Edwards, one of the engineers who works on machine learning operations at Microsoft (Microsoft cloud operations). He was talking about all the issues that arise when you deploy a machine learning system at scale on real problems: how do you keep it working as the world changes, and so on? Google has published some interesting work in that area too. I think that most of the innovation in that space is happening in industry, naturally. Universities don’t have the expertise or the capacity to do research in that area, but they should be teaching about it.

Sabine: Is that because it doesn’t feel like a research challenge, but more an integration challenge?

Tom: Well, yes and no. It relates to all the problems of dataset shift and maintaining safety guarantees as surprises happen. So I think there is a technical side to it too, but like a lot of software engineering questions it’s much more amorphous, and it’s much harder to pull out academic puzzles to solve.
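To make the dataset shift problem Tom mentions concrete, here is a minimal sketch (in Python) of one common monitoring check: comparing the distribution of a single feature at training time with what the deployed model sees in production. The feature values, sample sizes and alert threshold below are invented purely for illustration; a real system would use logged features from the live service.

    # Minimal sketch of a dataset-shift check: a two-sample Kolmogorov-Smirnov
    # test comparing a feature's training-time distribution with live traffic.
    # All values here are synthetic and purely illustrative.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(seed=0)
    train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # seen at training time
    live_feature = rng.normal(loc=0.4, scale=1.0, size=5000)   # drifted production data

    result = ks_2samp(train_feature, live_feature)

    # A small p-value suggests the live distribution no longer matches the
    # training data: a signal to investigate, retrain, or fall back safely.
    if result.pvalue < 0.01:
        print(f"Possible dataset shift (KS={result.statistic:.3f}, p={result.pvalue:.2g})")
    else:
        print("No significant shift detected")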

Michela Milano: The European position is very enthusiastic. Of course, we are all aware of the hype, which is good on one side (because funding increases) but on the other side leads to unrealistic expectations. The European position is to try to have AI that covers many domains while respecting European principles and ethical guidelines. This is basically what the European Commission is trying to fund – AI systems that are in some way human-centred. I think this is a good direction.

Carles Sierra: Adding to that, I think one of the big issues we’ll have in Europe during the next year or two will be the start of AI regulation by the European Parliament. They are looking into what can be regulated and how. I think there is a big role for AI experts here in terms of advising. We need to keep an eye on any new laws and regulations and what they mean; in Europe this could be the big thing for the next few years. It will be very interesting to see how regulation is handled in different countries. For example, what happens in China will be totally different to what happens in Europe or in the United States.

Michael Littman: An AI winter would be a concern. I had been more worried; I’m a little less worried right now. The fact that AI technology is really in use on a large scale, I think, protects against the worst possible outcomes of an AI winter.

Sabine: So, you are all quite positive? I guess we are a biased crowd.

Carles: I guess you only regulate things that are really happening. You don’t regulate things that are irrelevant. In Europe there is an acceptance that AI is here and that it needs to be regulated, and so there is no expectation of a winter.

Steve: One likely place where things will really start happening is healthcare. The MDs will push back against this dramatically, but in terms of detecting cancers and other conditions there seems to be good evidence that deep learning is a strong candidate. The question, though, is how it would be accepted. There is natural regulation in the resistance to technology uptake: there is a built-in resistance to things that are novel, complex and not well understood, until people start to accept them.

Sabine: So, could an AI winter be caused not by what we’d assume – the technological advances not being there yet – but instead be driven by us: by regulation, by resistance, by other factors?

Steve: I doubt that. I think Tom’s right about the corporate world. I lived in the corporate world for many decades, and that’s where you find huge shifts. There are people who have to be responsive to outcomes much more quickly than in academia. And if that funding shifts, or an overhyped device that gets into the marketplace fails in a dramatic way, then this will cause a backlash. It’s one thing to beat the best Go players in the world; it’s another to change our financial or healthcare systems. These are dramatic changes. Therefore, systems really have to work if they get into the mainstream.


Further related resources:
  • This TWIML podcast episode, Are We Being Honest About How Difficult AI Really Is?, an interview with David Ferrucci (Founder, CEO, and Chief Scientist at Elemental Cognition), gives another viewpoint on AI and its challenges.
  • This recent article by Luciano Floridi discusses the cyclical nature of the field and lessons that can be learnt from previous AI winters.
  • Henry Kautz gave a very interesting talk entitled “The Third AI Summer” at AAAI-20. You can watch it here.

