AIhub coffee corner: The role of regulation in AI

15 May 2020




The AIhub coffee corner captures the musings of AI experts over a 30-minute conversation. In light of the recent EU whitepaper on AI and the proposed US guidance for regulation of AI applications, our experts discuss how far regulation should go.

To provide some background, the following text from the executive summary of the EU whitepaper gives a flavour of the European direction:

Artificial Intelligence is developing fast. It will change our lives by improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans, and in many other ways that we can only begin to imagine. At the same time, Artificial Intelligence (AI) entails a number of potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes.

Against a background of fierce global competition, a solid European approach is needed, building on the European strategy for AI presented in April 2018. To address the opportunities and challenges of AI, the EU must act as one and define its own way, based on European values, to promote the development and deployment of AI.

The Commission is committed to enabling scientific breakthrough, to preserving the EU’s technological leadership and to ensuring that new technologies are at the service of all Europeans – improving their lives while respecting their rights. Thus, the Commission supports a regulatory and investment oriented approach with the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology.

European whitepaper: “On Artificial Intelligence – A European approach to excellence and trust”.

In terms of the US document, this text is from the introductory sections:

This draft Memorandum [...] provides guidance to all Federal agencies to inform the development of regulatory and nonregulatory approaches regarding technologies and industrial sectors that are empowered or enabled by artificial intelligence (AI) and consider ways to reduce barriers to the development and adoption of AI technologies.

When considering regulations or policies related to AI applications, agencies should continue to promote advancements in technology and innovation, while protecting American technology, economic and national security, privacy, civil liberties, and other American values, including the principles of freedom, human rights, the rule of law, and respect for intellectual property.

Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth. Where permitted by law, when deciding whether and how to regulate in an area that may affect AI applications, agencies should assess the effect of the potential regulation on AI innovation and growth.

US Memorandum: “Guidance for Regulation of Artificial Intelligence Applications”.

Joining the discussion this week are: Sanmay Das (Washington University in St Louis), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Carles Sierra (CSIC) and Oskar van Stryk (Technische Universität Darmstadt).

Lucy Smith: Does anyone have any comments on the EU whitepaper?

Sabine Hauert: I noticed that a lot of these documents [AI documents relating to regulation] start with the definition of AI, and then we get stuck in a rabbit hole where, before we get to any content, we spend a lot of time trying to figure out what AI actually is. It’s interesting to see the flavour of the European whitepaper – the focus is on AI for society and building it up in a responsible way. I haven’t read the US one. I’d be interested to see if that’s also the case: are they also keen on debunking this idea of an “AI race” and moving towards a more sustainable version of AI?

Oskar van Stryk: I have discussed the EU whitepaper with one of my colleagues, although at this point I only have some preliminary comments as I haven’t had time to read the full whitepaper yet. We see a number of positive aspects to the document, but also a number of risks. The general risk is that the people who are making the laws don’t understand the field and how it is evolving. For example, there is a danger of misguided regulation in the European Union: we have many small companies with very diverse products, which is a strength, but they cannot follow complex rules the way large companies can, and too much regulation will threaten these small companies. If you hamper the development of AI with so many rules in Europe, then the US and China will get ahead, and in the long run Europeans will have to purchase from US and Chinese companies which are not following the rules but are dominating the market.

Oskar: There was a detail mentioned where they wanted to regulate design principles somehow, but this is critical because we don’t know what the design principles will be in future years – the field is so highly dynamic. This is another issue which has probably not been understood. One of the most critical needs of the European Union is not regulation, but experts. We need to keep the experts in Europe and make it attractive for them to stay here. The question is: how can we get more experts, and so strengthen Europe’s position?

Carles Sierra: Adding to Oskar’s comments, one example of such a mismatch between our parliamentary representatives and the technology is the case of GDPR and blockchain. Some people say that GDPR and blockchain are completely at odds and cannot work together. This happens, among other things, because of how fast the technology moves: the GDPR appeared before blockchain bloomed as a decentralised technology for databases. The problem now is that in a blockchain the right to be forgotten is totally impossible, or at least extremely difficult, as it goes against the very principle of blockchain. So, the connection between members of parliament and the people developing the technology should improve.

Carles: The second thing I wanted to say is that the European Union has an open call for experts. Over the next couple of years the European parliament will go in depth into the regulation of AI. They are looking for experts – they are desperate for experts who will write reports and tell them what the consequences of potential regulations would be. I think that some of the main experts in AI should enrol as experts for the European parliament, so that it is given the correct feedback. I think we have a role to play there. [Click here to find out how to sign up.]

Oskar: I think this is important. We cannot complain about experts not being involved if we are not involved ourselves!

Carles: Correct! I think this is something that many of us should do, in order to thoroughly inform the politicians.

Sabine: Anything on the US side? So far the comments we’ve heard are that regulation might stifle innovation, and that it might be challenging in terms of the AI race (although people are pushing back on this “AI race” framing). Do you think that’s true? On the other hand, there are big tech companies that are begging for regulation so that they have a framework in which they can operate.

Tom Dietterich: I think it’s important to regulate behaviours and risks rather than specific technologies. We don’t want to get into a situation where we say, for example, “we’re never going to use neural nets because we can’t explain them”. First of all, what neural nets are will change, and they will become more explainable. Also, we don’t know what the possible applications are and we might rule out some good ones. I think that’s the usual approach in tort or liability law – to look at the behaviours rather than the technology itself. Skimming through the European document I didn’t see anything that looked too weird.

Tom: What we need is the appropriate level of confidence in the safety of the system. There may be applications where the behaviour that is needed is precisely explanatory behaviour, in which case we need to support explanation. It should just be part of the specification of the system.

Sanmay Das: One thing that is important to regulate, and that most legal frameworks will take into account, is decision making. If an AI system is being used as part of decision making, it should be subject to the same kinds of checks and balances as other decision-making systems. You shouldn’t just be able to say “I’m denying you a loan because the AI system told me to deny you a loan”. That shouldn’t be a sufficient explanation in those types of cases. I think there is a fundamental difference between regulating the technology and regulating what the technology is used for and used to do. This needs to be taken into account, and I think there is increasing public awareness of it.

Sanmay: One of the important things is to encourage more conversations between the people who understand the technology and the people who understand the legal frameworks within which it gets applied; there’s been a bit of a disconnect between them. There are a lot of law review papers these days talking about AI bias and how we deal with regulating different kinds of biased decision making, in areas ranging from housing to employment discrimination. I should say that it is mostly within the US context that I’ve been reading about these issues. A lot of this can be addressed within the existing legal system, but one needs to think about the new issues it raises, and I’m not sure we’ve explicated those well enough at this point in time. This is a conversation that needs to continue, and it’s important to involve expertise both from the technical side and from people who have expertise in law and regulation. I’m not convinced that that is actually happening.

Sabine: And this idea of an “AI race”, should we stop calling it that, or should we have our eyes open to the fact that there is a bit of a race going on?

Tom: I don’t know if race is the right term but there is certainly rapid technological development in military applications both in China and in the US. And we just saw Turkey launch a pretty large drone-based attack in Syria. I don’t know if they had a separate pilot for each drone but that was a new battlefield tactic that hadn’t been seen before. Of course the US has a fully autonomous ship that is already travelling around the world.

Tom: There’s the AI race depicted as a race to AGI [artificial general intelligence], and I don’t see that happening anywhere except inside the companies that brand themselves as AGI companies. Whereas I do see a race in industry and the military to apply today’s technology successfully in their particular areas.

Lucy: Have you seen any specific examples of regulation?

Tom: The one thing that springs to mind is the ban on face recognition in various cities.

Sabine: I think the UK is also calling for that.

Tom: I think there’s a push for a moratorium to give us more time to think about this stuff. We probably need that time.

Some related articles concerning aspects of regulation

Airlines take no chances with our safety. And neither should artificial intelligence
ALLAI react to the EU whitepaper
Why Google thinks we need to regulate AI

Previous AIhub articles covering regulation

USA releases American AI initiative annual report
European Commission releases white paper on artificial intelligence
USA releases proposed guidance for regulation of AI applications
AI Policy Matters – facial recognition, deepfakes, AI regulation and policy
EU artificial intelligence ethics checklist ready for testing as new policy recommendations are published








