AIhub coffee corner: AI thanksgiving


25 November 2021




The AIhub coffee corner captures the musings of AI experts over a 30-minute conversation. This month, we take a look at all the things we are thankful for in the AI community.

Joining the discussion this time are: Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Holger Hoos (Leiden University), Sarit Kraus (Bar-Ilan University), Michael Littman (Brown University) and Carles Sierra (Artificial Intelligence Research Institute of the Spanish National Research Council).

Holger Hoos: I think one can be really grateful that progress in AI has come at a point where we really need it. I think we’ve maneuvered ourselves as humankind into a situation where the limitations of our own natural intelligence make it very likely that we’re going to drive ourselves against the wall. Issues such as climate change are simply too complex for us to figure out, even if you bring lots of smart people together and give them lots of resources. There’s too much evolutionary baggage that makes us pretty good at certain things and pretty bad at others. And, unfortunately, the things that we’re pretty bad at are things that are coming to get us now. So, this is about the best time we could imagine for some enhancements, in the form of AI methods and tools, to become available. Although there are still lots of risks, problems and challenges, I think we’re going to have to realise, and I think we will 50-100 years from now, that without those enhancements to our natural intelligence we would’ve been in very serious trouble.

Sabine Hauert: I’m thankful for activists. I personally don’t feel like an activist, except for the need for science communication in this area! But, there is a big push to make AI good for society and I think the decision by Facebook to not do facial recognition is an example that the activists’ work is important and I’m grateful that they do that work.

Tom Dietterich: I would call out the openness of the community, and our commitment to open publication – the free journals have been a huge thing, I think. And, generally, the AI organisations that I’ve been involved with have all been eager to promote our young researchers and to not have the organisations controlled by people my age. They really try to empower the next generation. Of course, there’s always more we can do there, but I think that’s part of our culture, and it’s a really healthy part.

Also, I’m very grateful to Facebook AI Research for PyTorch and to Google for TensorFlow and Keras and all the work that they’ve done there. Again, the open source ethos of our community has been really powerful in making it possible for people worldwide to experiment with these technologies. There are obviously downsides as well, but in general it’s been really empowering for students. Now, I talk to high-school students who’ve built neural networks and trained them and it’s amazing.

Sarit Kraus: In the past, it wasn’t a great honour to say you were an AI researcher. People would ask what you did and say “Oh AI – it doesn’t work”. Now, it’s the other way around. Today, I talked to a government representative about my research and he said “well, do you do AI?”. I said “of course”. But he said “you didn’t say that, you talked about optimisation problems and prediction problems”. It was very funny that he thought that AI is now a prestigious subject to be associated with. We should be grateful for that.

On the other hand, I tell my students that when AI stories are on the front pages of the mainstream newspapers, I get worried, because that means that people are overreacting. It’s worrying that the expectation is too high. On one hand that’s good because it attracts a lot of researchers, and there is so much excellent research. On the other hand, the worry is that the expectations will not be satisfied and people will be disappointed. So, we need to find a balance there. But, on the whole it’s great that everybody would like to do AI research.

Michael Littman: In the past, through the history of the field of AI, people have been really inspired by the idea, and then when things start to look really good, AI gets applied to problems in the real world. To date, anytime there was an encounter between AI and the real world, the real world won, and AI was injured. We had this sequence of AI winters when the field had to retreat back into the lab. As Sarit pointed out, it wasn’t a happy place to be, even though a lot of us were still really interested in the kinds of problems that the field was studying. This time, AI has come in contact with reality and AI is surviving that. Unfortunately, reality is getting bumped around a little bit, and bruised, and that’s unfortunate and we need to do something about that. I think if AI can get through this, reality is going to get through it, too. Both are going to get stronger as a result of them coming together. I’m excited about that. We’re not there yet, but I think we’re on an encouraging trajectory.

Carles Sierra: I’m grateful to two groups of people. One is politicians, which is not a very common thing to say! Nowadays, everybody is supporting AI because it’s very easy to support AI. However, as Michael said, there have been winters in our field, and during those winters there was still funding, and our research organisations continued to support researchers in the field, even though that support was diminished. That support made possible a continuum of research since the 1960s without massive disruption. We should be grateful for that support in difficult times.

The other group we should be grateful to is our scientific fathers and mothers, in the sense that they created a collaborative culture. If you look at other fields of research, they’re much more subject to competition and fights, and there isn’t a nice feeling of belonging to a community. I think that in AI we have a much more interesting, inclusive, collaborative culture. This is due to our predecessors in the field, who were very open minded in accepting all sorts of ideas, and they made the field exciting and inclusive, encouraging everyone to belong to it.

Holger: Just to add to Carles’ last point, I think that’s a really interesting one. I too cherish this inclusive nature that we see. I feel that inclusivity is increasing rather than decreasing, although there is more to do there. I do wonder whether the founders of the field came to this more inclusive and friendlier nature of the field by going through pretty hard times themselves. At the very beginning, things didn’t always look so friendly; quite a few of the forefathers and foremothers of AI had to live through intense frustration and were in fact sometimes marginalised. This happened until late in the 1990s. We now see this resurgence of neural networks. We should not forget that this is an area that had been marginalised by mainstream AI for decades, probably to the detriment of the field as a whole, as much as to the detriment of the people who stuck with it and brought it forward. These days, it’s good that we’ve become more inclusive of all sorts of diversity, including, as Sarit pointed out, a growing awareness that AI is not just one thing – this one hype that we are all running after. It is a very diverse field in terms of the participants, and also in terms of the topics and methodology that are brought to the table. We can all sit here together and say, “you do more optimisation, and I do more machine learning, and you do more reasoning and I do more knowledge representation”, but we are all doing AI. That is something we should be grateful for, because it makes the community much richer and also makes it more likely that we’re going to succeed in solving pressing real-world problems using AI.

Sabine: As a final thanks, we’re grateful to everyone who has supported and contributed to AIhub and, of course, to all of our readers!




AIhub is dedicated to free high-quality information about AI.




            AIhub is supported by:












©2024 - Association for the Understanding of Artificial Intelligence
