AIhub coffee corner: AI risks, pause letters and the ensuing discourse

06 July 2023




The AIhub coffee corner captures the musings of AI experts over a short conversation. This month, in light of the recent prominent discussions relating to perceived AI risks, we consider the pause letters and risk statements, the debate around existential threats, and how this discourse could impact the field and public perceptions.

Joining the discussion this time are: Sanmay Das (George Mason University), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), Anna Tahovská (Czech Technical University), and Oskar von Stryk (Technische Universität Darmstadt).

Sabine Hauert: In today’s discussion we’re going to talk about potential AI risks and the recent discourse around existential threats. Does anyone have any hot reactions? How do you feel about the discourse of existential threat?

Tom Dietterich: I agree with Emily Bender and a lot of the critics that it’s a distraction and a diversion from thinking about the more immediate threats. Also, it’s very hard to do any assessment of existential threats. There’s climate change, synthetic biology, all these things are potential existential threats, and which ones are more likely? I don’t know. From my collaboration with ecologists, I know that every species goes extinct with probability one, so it’s a matter of when, not if. But we should have anticipated that when we passed the Turing test, we would immediately have a problem with deep fakes and all kinds of cybersecurity challenges. I’m just stunned that it didn’t occur to me until it happened. We need to think really hard as a community about what new threats are being enabled by this technology and the possible responses to them. However, I don’t think they’re existential; they’re more about cybersecurity, and about protecting our human brains and our decision making. It’s the social engineering attacks that seem to be the hardest to defend against. I think we can harden all of our APIs, that’s an ongoing struggle, but when it comes to protecting the human mind against these false attacks, that’s the challenge. I know someone who was the victim of a fake kidnapping scam. The callers claimed they had her daughter, and she could talk to her daughter, but it was a synthetic voice. It took a while to figure it out, about an hour I think, but what a terrifying hour.

Sabine: I think a lot of people in the community agree that these existential threats are perhaps far away, and a distraction, and yet the statement from the latest letter (“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”) was signed by a lot of the top people in the field. Are any of you really worried about this existential threat? I think we should also hear about that side.

Oskar von Stryk: I think the existential threat to humans is humans themselves. AI is a special kind of technology with new capabilities, but it’s always been a question of how humans use it, how they react to it, and so on. How humans use the technology is, for me, the ultimate question. We cannot say that technology itself does something; it’s always a question of how we use it, how we interact with it, so we shouldn’t forget this. Therefore, we need to educate humans, I think that’s the key. Also, humans need to be willing to take responsibility and to act sustainably. If most, or all, humans did this, there would be no problem. That’s my, perhaps too naïve, point of view.

Sanmay Das: As you said Sabine, a lot of famous people have signed this statement and I’ve been watching the whole discourse with interest. I don’t know if you saw Kyunghyun Cho’s interview recently – I thought he was sensible in that interview. He made a number of points, one of which is that, as a community of researchers, we’re always going to have a diversity of opinions, so that’s one thing we should keep in mind. The other thing to think about is the incentives behind the coalitions that are producing these letters, and how that could end up influencing the decision to either put out the letter or to be associated with the letter. I think that there’s value to a whole range of people in keeping this in the news and keeping people worried about the threat of existential risk, so that they can either keep regulation away, or keep interest alive with regards to their particular way of thinking. We need to think about the fact that there’s going to be some diversity of opinion among researchers – we should expect that to happen. Given that, it will be the case that some people think that certain things are existential risks. I guess personally, every time something like this happens it makes me rethink my priors. I’m very strongly in the camp that I don’t see it as an existential risk on the same scale as nuclear war, biotech, or climate change. However, if some people who I greatly respect come out and say things like that, it does make me question myself. I certainly couldn’t have anticipated, even seven or eight years ago, the speed with which language technology would evolve, and it’s really very impressive. The kinds of things that one can do all of a sudden are wildly cool. But the thing that I’m worried about is just how much the money and the incentives around this are distorting the public discourse.

Sabine: I worry profoundly about the public discourse. I feel like we’ve spent years trying to figure out how to explain AI and machine learning, trying to show the positive sides of it, how it can be used for good, and now to have all of that collapse in one go towards it being an existential threat completely skews not just the discourse for the public, but also for those who enter the field. I don’t know many people who would want to enter a field that is branded as being an existential threat. And so, I really worry that all the efforts to bring a broader community into the field are going to be hindered by this narrative. It’s been interesting recently because, although there has been a lot on the existential threat in the media, there have also been quite a few articles (on the BBC and in The Guardian, for example) urging us not to focus on the existential threat, and to think about understanding and acting on near-term things. I’ve seen a bit of a pushback, so it’s been good to see both sides of the story. However, I wonder how much damage has been done to the narrative.

Sarit Kraus: I agree with Oskar, it’s not the technology, it’s people. We need to realize that there is also a lot of good being done by these technologies, and they have opened up many opportunities to people that they didn’t have earlier. Last week I got a phone call from someone who wanted to know what ChatGPT was capable of. It was quite funny because they thought that in one afternoon they could do magic. I was trying to explain what you need to know to use it; it was very interesting. On the one hand, people are happy to use the technology for good things; on the other hand, people are worried about jobs being taken by the software. For example, English editing is one job which might not be needed anymore. I used to have an English editor go through my papers, but now I put them into ChatGPT and ask for help to improve my writing. However, you do need to be very careful because the software tends to add all sorts of ideas that I didn’t have in there to start with. Of course, people need to understand that it’s not factual, it’s not search, so don’t believe anything that ChatGPT tells you; you need to check everything. But that’s something people will learn over time. I was also wondering what other jobs might be taken. Maybe telephone operator, but I’m not sure about that. How about programming? Well, you can use ChatGPT to help you write programs, but it’s really far away from developing innovative algorithms, and it’s very bad at maths. One thing I’ve learned about artificial intelligence is that you can’t predict anything. So, we need to wait, be cautious, and try to use it for good causes.

Tom: Both of the statements that were signed by bunches of people were put forward by organizations whose primary source of funding is donors who are worried about existential threats. These were exactly like the fundraising letters that I get from politicians. Certainly, there is one group of people (the Future of Life Institute) for whom this was very much in their interest in terms of raising money. The second group of people seem to be from companies like OpenAI. I don’t know them personally so it’s hard to know their motives, but certainly many critics have said that, by focusing on these longer-term threats and possibly leading to regulations that raise barriers to entry for other companies, they would be securing their position as leaders in the field. So, they have an incentive to be doing that. And then you have the third group of people (and I would put Geoff Hinton, Yoshua Bengio, and many others, in this group) who I think are genuinely concerned. I feel like they might be being exploited by these other groups.

Sabine: I’m hearing a lot about regulation, and about what we do as a result of this threat. For example, last week the UK Secretary of State for Science, Innovation and Technology was at ICRA to speak about robotics. She was interested in the question of regulation. I mentioned that it’s important to consider how we do it in a way that doesn’t just put up barriers but keeps it flexible. So, I think we’re going to need to figure out how not to have a knee-jerk reaction to the statements that are coming out, and to do things properly. We don’t want to lock newcomers out of the field, we need to be asking questions that also consider the near-term issues, not just far-term potential threats, and we need to make sure that the funding goes to the right place.

Oskar: The European AI Act has now moved forward to its final stage. They have issued a call to various European standardization institutions to provide some of the standard templates that the act currently doesn’t specify. I just wanted to add to what Sarit said about using AI for good. We have so many societal challenges and sustainability challenges in which AI and robotics can help. Just look at the aging society and the problems that we have in care. We need to look at how we can use these technologies to solve these challenges. Because these challenges are really coming and we can’t avoid them, whereas the other ones [the claimed existential risks] are just potential risks, which depend on how we use the technology. So, I really like to focus on how we can use the technology to solve the challenges we are definitely facing. Another point I’d like to make is that we have machine learning, and we still only have machine learning; we don’t have machine understanding yet. Some people are confusing what ChatGPT (and other similar tools) can do with machine understanding because the outputs sound plausible. However, these tools don’t have understanding, and a lot of people are overestimating their capabilities. Even people who know what’s behind the technology and its limitations are sometimes stunned by a particular output. However, when you dive deeper, or ask a deeper question, all of a sudden you get a stupid output. Then the limitations of the technology are obvious. It’s the machine understanding which is missing, and we don’t know how this machine understanding will be achieved.

Sabine: And that lack of understanding in the model is also what makes it difficult to use currently in robotics and in embodied AI, and in the physical world in many cases.

Anna Tahovská: I just wanted to agree with Oskar. I think that, in general, the perception of an existential threat has always accompanied new technologies, for example the steam engine and electricity, and now it’s AI. So, it’s important for the general public to learn about it and understand more about the research and how it can help.

Sabine: I think it’s great that people are asking Sarit how to use it, because it’s not just us researchers solving all these challenges; people in general need to figure out how to use these tools to solve their own challenges.


