AIhub coffee corner: AI thanksgiving

25 November 2021


The AIhub coffee corner captures the musings of AI experts over a 30-minute conversation. This month, we take a look at all the things we are thankful for in the AI community.

Joining the discussion this time are: Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Holger Hoos (Leiden University), Sarit Kraus (Bar-Ilan University), Michael Littman (Brown University) and Carles Sierra (Artificial Intelligence Research Institute of the Spanish National Research Council).

Holger Hoos: I think one can be really grateful that progress in AI has come at a point where we really need it. I think we’ve maneuvered ourselves as humankind into a situation where the limitations of our own natural intelligence make it very likely that we’re going to drive ourselves into a wall. Issues such as climate change are simply too complex for us to figure out, even if we bring lots of smart people together and give them lots of resources. There’s too much evolutionary baggage that makes us pretty good at certain things and pretty bad at others. And, unfortunately, the things that we’re pretty bad at are things that are coming to get us now. So, this is about the best time we could imagine for some enhancements, in the form of AI methods and tools, to become available. Although there are still lots of risks, problems and challenges, I think we’re going to have to realise, and I think we will 50-100 years from now, that without those enhancements to our natural intelligence we would’ve been in very serious trouble.

Sabine Hauert: I’m thankful for activists. I personally don’t feel like an activist, except for the need for science communication in this area! But, there is a big push to make AI good for society and I think the decision by Facebook to not do facial recognition is an example that the activists’ work is important and I’m grateful that they do that work.

Tom Dietterich: I would call out the openness of the community, and our commitment to open publication – the free journals have been a huge thing, I think. And, generally, the AI organisations that I’ve been involved with have all been eager to promote our young researchers and to not have the organisations controlled by people my age. They really try to empower the next generation. Of course, there’s always more we can do there, but I think that’s part of our culture, and it’s a really healthy part.

Also, I’m very grateful to Facebook AI Research for PyTorch and to Google for TensorFlow and Keras and all the work that they’ve done there. Again, the open source ethos of our community has been really powerful in making it possible for people worldwide to experiment with these technologies. There are obviously downsides as well, but in general it’s been really empowering for students. Now, I talk to high-school students who’ve built neural networks and trained them and it’s amazing.

Sarit Kraus: In the past, it wasn’t a great honour to say you were an AI researcher. People would ask what you did and say “Oh AI – it doesn’t work”. Now, it’s the other way around. Today, I talked to a government representative about my research and he said “well, do you do AI?”. I said “of course”. But he said “you didn’t say that, you talked about optimisation problems and prediction problems”. It was very funny that he thought that AI is now a prestigious subject to be associated with. We should be grateful for that.

On the other hand, I tell my students that when AI stories are on the front pages of the mainstream newspapers, I get worried, because that means that people are overreacting. It’s worrying that the expectations are too high. On the one hand that’s good, because it attracts a lot of researchers, and there is so much excellent research. On the other hand, the worry is that the expectations will not be satisfied and people will be disappointed. So, we need to find a balance there. But, on the whole, it’s great that everybody would like to do AI research.

Michael Littman: Throughout the history of the field of AI, people have been really inspired by the idea, and then when things start to look really good, AI gets applied to problems in the real world. To date, anytime there was an encounter between AI and the real world, the real world won, and AI was injured. We had this sequence of AI winters when the field had to retreat back into the lab. As Sarit pointed out, it wasn’t a happy place to be, even though a lot of us were still really interested in the kinds of problems that the field was studying. This time, AI has come in contact with reality and AI is surviving that. Unfortunately, reality is getting bumped around a little bit, and bruised, and that’s unfortunate and we need to do something about that. I think if AI can get through this, reality is going to get through it, too. Both are going to get stronger as a result of them coming together. I’m excited about that. We’re not there yet, but I think we’re on an encouraging trajectory.

Carles Sierra: I’m grateful to two groups of people. One is politicians, which is not a very common thing to say! Nowadays, everybody is supporting AI because it’s very easy to support AI. However, as Michael said, there have been winters in our field, and during those winters there was still funding support, and our research organisations still supported researchers in the field, even though it was diminished. That support made continuous research possible since the 1960s without massive disruption. We should be grateful for that support in difficult times.

The other people we should be grateful to are our scientific fathers and mothers, in the sense that they created a collaborative culture. If you look at other fields of research, they’re much more subject to competition and fights, and there isn’t a nice feeling of belonging to a community. I think that in AI we have a much more interesting, inclusive, collaborative culture. This is due to our predecessors in the field, who were very open minded in accepting all sorts of ideas, and they made the field exciting and inclusive, encouraging everyone to belong to it.

Holger: Just to add to Carles’ last point, I think that’s a really interesting one. I too cherish this inclusive nature that we see. I feel that inclusivity is increasing rather than decreasing, although there is more to do there. I do wonder whether the founders of the field came to this more inclusive and friendlier nature of the field by going through pretty hard times themselves. At the very beginning, things didn’t always look so friendly; quite a few of the forefathers and foremothers of AI had to live through intense frustration and were in fact sometimes marginalised. This happened until late in the 90s. We see now this resurgence of neural networks. We should not forget that this is an area that had been marginalised by mainstream AI for decades, probably to the detriment of the field as a whole, as much as to the detriment of the people who stuck with it and brought it forward.

These days, it’s good that we’ve become more inclusive of all sorts of diversity, including, as Sarit pointed out, a growing awareness that AI is not just one thing – this one hype that we are all running after. It is a very diverse field in terms of the participants and also in terms of the topics and methodology that are brought to the table. We can all sit here together and say, “you do more optimisation, and I do more machine learning, and you do more reasoning and I do more knowledge representation”, but we are all doing AI. That is something we should be grateful for, because it makes the community much richer and also makes it more likely that we’re going to succeed in solving pressing real-world problems using AI.

Sabine: As a final thanks, we’re grateful to everyone who has supported and contributed to AIhub and, of course, to all of our readers!


AIhub Editor is dedicated to free high-quality information about AI.



©2021 - Association for the Understanding of Artificial Intelligence

