AIhub.org
 

Researching EU regulation around AI

by
23 May 2022




Developments in the field of artificial intelligence (AI) are moving quickly. The EU is working hard to establish rules around AI and to determine which systems are welcome and which are not. But how does the EU do this when the biggest players, the US and China, often have different ethical views? Political economist Daniel Mügge and his team will investigate how the EU conducts its ‘AI diplomacy’ and sketch potential future scenarios.

“Our research is essentially about regulation around AI”, says political economist Daniel Mügge. “About how the EU approaches this issue with the major players globally and potential future scenarios.” We talked to Mügge about his new project, for which he has been awarded an NWO Vici grant.

AI is not contained within walls and borders

“Regulation around AI is necessary due to concerns over privacy and autonomy, potential discrimination of individuals and unwanted military applications, for example. But how rules can be established, and how effective they will be, depends on developments in the rest of the world. Because, clearly, AI is a technology that is not limited by borders: it’s embedded in social media and digital services, not contained within walls and borders. China and the US, the biggest players in the field of AI, have a major impact on the systems that enter the European market. In our research we analyse how the EU approaches these major players when establishing rules.”

Discussions around AI are underway with the US, agreements with China are difficult

“There are currently a lot of discussions underway with the US over the establishment of ethical frameworks around AI. These frameworks are designed to prevent discrimination, for example, and to limit the power of large companies like Google and Facebook. How successful this will be, however, is currently open to debate.

“And making agreements with China is even harder. The country is often seen in this regard as an example of how not to do things: a government that, through AI, has obtained too much power over its citizens and uses it to oppress people. At the same time, however, these agreements with China are necessary. Not only does China have a large AI sector and lots of money, it also has access to a huge amount of data. And that data is essential for machine learning systems. Imagine you are offered a promising AI system from China that detects cancer at an early stage. Before you use this system in a hospital, however, you want to know whether the data that feeds the detection algorithm has been collected ethically.

“You see different attitudes to AI regulation between EU countries too, for that matter. Traditionally the Germans are extremely cautious around privacy issues and the sharing of personal data and, as a result, are in favour of more stringent regulations. But a country like Estonia, for example, sees opportunities to develop itself as a forward-looking digital lab and benefits more from somewhat more relaxed regulations.”

Daniel Mügge

Pressing ahead with a European AI sector

“Besides this regulation of systems coming into the EU, Brussels is trying to establish a strong AI sector in Europe. Not only are European companies easier to regulate, but this European collaboration is essential if we are to integrate the huge amounts of data that are required to develop competing AI systems. Brussels is trying to press ahead with this because developments in technology are moving fast. This won’t be easy because policy-making at European level is not a quick process.”

Companies are both lobbyists and experts

“It is a challenge for Brussels to define clearly in legal terms what is and what is not covered by AI regulation. The big tech companies will play a huge role in this. Clearly they are lobbying against restrictive regulation. But, at the same time, Brussels badly needs their technical knowledge if it is to come up with effective regulations. Otherwise you could end up with regulations that are out of kilter with the technical reality and therefore don’t really help control the issues in the field of AI. In this way, these companies will help decide which systems are welcome on the European market and which are not – a dynamic that is clearly open to question.”

Outlining future scenarios

“In our research we will start by mapping what’s happening at the moment: to what extent is the EU dependent on the actions of others for effective regulation? Based on that, we will outline future scenarios, giving the advantages and disadvantages of specific choices. We take the view that you mustn’t formulate one approach for all AI systems, or one set of rules that the EU always uses. Depending on the specific AI application, it may be advisable, for example, to sometimes follow the American approach, to make global agreements in another area, and in the case of other problems, to develop our own EU vision and approach.”

The research will start in September 2022 and will run for 5 years.




University of Amsterdam




            AIhub is supported by:



©2024 - Association for the Understanding of Artificial Intelligence