 

AIhub coffee corner: Regulation of AI


01 November 2023




The AIhub coffee corner captures the musings of AI experts over a short conversation. Three years ago, our trustees sat down to discuss AI and regulation. A lot has happened since then, both on the technological development front and on the policy front, so we thought it was time to tackle the topic again. [Note: This conversation took place before the announcement of the USA Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. You can read more about that here.]

Joining the conversation this time are: Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), Michael Littman (Brown University), and Carles Sierra (CSIC).

Sabine Hauert: Regulation of AI was a very hot topic a few months ago, and interest has definitely not died down. So, do we need to regulate AI?

Carles Sierra: In Europe there is a shared view that AI should be regulated. This work started about four years ago, when a high-level expert group made a number of recommendations about which kinds of applications should be prohibited or controlled by governments. That made its way into the AI Act, which was approved in June. In my view, yes, there are certain applications of AI that should be regulated, and governments cannot escape taking responsibility. Major corporations would probably prefer that this didn’t happen. However, I think that protecting our social values, our privacy, and our principles of justice is important enough to require that these applications be made safe. I think the intervention of public administrations is important, and we need regulation to guarantee these values for society.

Sabine: Do you think the AI Act is still fit for purpose given the developments with large language models (LLMs)?

Carles: The problem is that to fully put a law in place takes five to six years. Of course, technology evolves very quickly, so it is very difficult to keep up. The AI Act was in the final readings when ChatGPT (and other such models) exploded, and so there were some add-ons to the law at the last minute. There are things that are still valid. For instance, in education, if we want an AI assistant to assign marks to a student, or to assign students to schools, that should be very clearly monitored and scrutinized by the administration. I think that identifying these sectors, and the possible risks of applications, is a good approach. We will probably need to add certain LLM-specific elements into the law, but generally these types of developments have already been foreseen – the list of high-risk applications can grow, that’s the way the law was designed.

Sabine: Yes, at least it’s a strong starting point. What would you say is the role of scientists in the regulation pipeline?

Michael Littman: I think it’s really critical. In my role with NSF [National Science Foundation], we want to be in touch with researchers as policies are being made, to really understand whether they make any sense, and whether the rules actually solve the problem that people are trying to solve. This is something I’ve seen a lot: policymakers block a particular technology because they’re trying to block a particular use. Often you can block the use without also blocking the technology and its positive attributes. There are really clever ideas, in terms of cryptography and so forth, that can be used to protect privacy and that are different from saying “there can be no collection of this kind of data or no handling of this kind of data”. It’s great when the scientists are in the room providing those kinds of inputs.

Sarit Kraus: It’s very challenging to make regulations in AI. It’s very difficult to enforce regulation even in software, but, with AI, even more so. Machine learning, in particular, is a black box, so you don’t really know what data was used. You can observe some results, but you can’t observe the results of all cases. I think that, even if we can develop regulations, it’s not clear to me how we will enforce them.

Carles: One of the things that has come into law in Europe is the creation of supervision agencies, so every country will need to create an agency to monitor the applications of AI. In Spain, such an agency has already been created – it’s getting ready to start operating at the beginning of next year. Companies will need to have the applications they put on the market scrutinized by these agencies.

Sabine: So, they’ll be licensed or certified to be able to do that.

Carles: Yes, this is how things will work with the AI Act.

Sarit: AI is a technology; it’s software. My question is, are there any sub-areas of computer science where regulation is enforced? Why do we need more regulation specifically for AI?

Carles: Well, we’ve had the GDPR (General Data Protection Regulation). That regulates any software to protect the privacy of the user. So that is a precedent. However, it’s true, there are not many.

Sarit: And was it helpful?

Carles: Well, we can be cynical about that, because now many applications in Europe pop up with warnings about the data usage, and basically what people do is they accept anything. But, in a sense, there are some regulations. There is accountability whereby companies have to provide information if they are asked about their procedures. They need to be transparent. I guess from a legal perspective it’s probably better – citizens have a mechanism to make complaints.

Sabine: I worry about licensing and certification. How are the small companies going to do this? It is a barrier to a lot of small companies being able to enter the market, unless it’s done really well with a straightforward licensing process. That’s something that I could see big players doing really well at, figuring out how to license their technology or find the loopholes.

Sarit: I think for me it’s more problematic that a lot of the powerful AI is done in big companies. It would be better if they weren’t so dominant – that would have more impact than regulation.

Carles: In the latest draft of the regulation, I think they made some exceptions for open-source and open-science approaches, so open source is treated a bit more favourably in the law. I haven’t gone into the details, but I agree. The big companies said, “we don’t need any regulation, we will regulate ourselves”. They proposed creating a committee through which they promised to apply ethical principles in their applications. Many people saw that as playing the game – they don’t want the US government to regulate, so they can continue with business as usual.

Sabine: How can they possibly regulate themselves? I don’t understand how that’s even an option.

Carles: A year back there was a Bill of Rights that was being discussed in US Congress. I don’t know what happened with that.

Michael: To be clear, that was the Office of Science and Technology Policy, which is kind of the science wing of the White House. That’s a different branch of the US government from Congress. They ended up calling it the “Blueprint for an AI Bill of Rights” because it wasn’t really granting any rights. It’s still getting talked about a lot. The other thing that’s getting talked about is the AI Risk Management Framework put out by the National Institute of Standards and Technology (NIST). It is basically an outline of what you ought to be thinking about when you deploy AI systems in the real world, and how you should be monitoring and modifying them over time. Neither of these has been made into any kind of law of the land, although there are people pushing for that. For now, they are more like guidelines. There’s also a bunch of voluntary commitments that the White House got some of the tech companies to make, and that’s becoming a template for international discussions around AI. So, there could be a multi-country set of guidelines that comes out around that. It’s all being very actively talked about.

Sabine: Because of the fast-moving nature of it, there’s been discussion of sandboxes. For robots, for example, it would be useful to have a safe physical place where you could try them out almost without regulation, to learn what regulation you would need in order to deploy them. A suggestion is to have fast-moving regulation where you approve things on a case-by-case basis, learn from them, move on, and iterate the regulation, rather than having these laws that take many years to create. You have agility in the system. I don’t know of sandbox regulations that exist in other areas.

Carles: Thinking about the direction of regulation, normally you regulate verticals. You regulate logistics, you regulate health, for example. Regulating a technology is not very common, and, as Sarit was saying, there’s no generic regulation of software. But, we will have one for a particular kind of software [AI].

Michael: I find it useful to think of an analogy between AI and metal. They can both be used for lots of things, some of which are really beneficial and some of which are dangerous. Metal, for example, can be made into a weapon, but it can also be made into a medical stretcher. In spite of the fact that metal behaves similarly in all these different contexts, we don’t legislate metal horizontally.

Carles: It’s very challenging. I think in the European Parliament they have a mixture of both: the horizontal regulation of the technology, and then also the verticals. In health, what is high risk? In education, what is high risk? Just regulating horizontally doesn’t make much sense. The vertical part – what to regulate in health, or education, or transport – is a bit clearer for companies. So, if a company builds something to assign marks to a student, they know that their product will be scrutinized.

Sabine: That’s interesting. So, all the start-ups that jumped on the AI bandwagon to raise funds might now jump off it to avoid regulation, potentially, and call themselves something slightly different.

Sarit: This is a very interesting question. I mean, if someone builds software to grade tests, but it is not AI, does anybody regulate it? As you said, as we move from conventional software that performs some task to AI software, the boundary is very difficult to define.

Carles: It is. My guess is that when companies produce software for applications that are considered high risk by the AI Act, they will have to ask for certification from the national agencies. However, we will see what happens once the agencies are created, and what companies start doing.

Sabine: It does sound like a positive thing, right? We want to create these AI systems in a responsible way, and having some form of regulation makes a lot of sense. As part of this effort there have also been questions about who we are trying to protect with these regulations, and whether there has been enough public buy-in. Is it to protect people, is it to protect business interests, is it to protect governments and democracy, to avoid fake news?

Carles: Well, in principle it is to protect people and democracy. European parliaments have what they call a technology assessment department, a group of individuals who produce reports for parliamentarians that are as neutral as possible. Most politicians have no idea about technology; they are not engineers or scientists. Anyway, I was at a meeting with the European Parliament and the topics discussed were AI and democracy (the parliamentarians were worried about how generative AI could potentially damage democracy), health, education, and work. There are some fears among the political class about the massive social impact that AI could have, especially generative AI.

Sabine: That sounds grounded, though, which is good to hear. It sounds grounded in the sectors, and it sounds like they are considering the real applications that are near-term.

Carles: The Danish parliament has had such a technology assessment department for 40 years. It’s quite amazing. The Germans have had one for about 30 years. It was interesting to listen to the politicians and scientists discuss these topics.

Sabine: In many EU projects now there’s really a concerted effort to think about how the work they’re doing relates to the AI Act. I think that’s probably going to become part of the training of any AI practitioner. There’s much more of a push for responsible innovation across the board.

Carles: We have new degrees on AI popping up all over Europe. There are courses on AI ethics too, and I think in the future there will also be courses specific to AI and Law. If you’re going to become an AI engineer, you need to know the legal implications of what you do. I think this is part of the training of an engineer.





AIhub is dedicated to free high-quality information about AI.



