AIhub.org
 

Researching EU regulation around AI


23 May 2022




Developments in the field of artificial intelligence (AI) are moving quickly. The EU is working hard to establish rules around AI and to determine which systems are welcome and which are not. But how does the EU do this when the biggest players, the US and China, often have different ethical views? Political economist Daniel Mügge and his team will conduct research into how the EU conducts its ‘AI diplomacy’ and will sketch potential future scenarios.

“Our research is essentially about regulation around AI”, says political economist Daniel Mügge. “About how the EU approaches this issue with the major players globally and potential future scenarios.” We talked to Mügge about his new project, for which he has been awarded an NWO Vici grant.

AI is not contained within walls and borders

“Regulation around AI is necessary due to concerns over privacy and autonomy, potential discrimination against individuals and unwanted military applications, for example. But how rules can be established, and how effective they will be, depends on developments in the rest of the world. Clearly, AI is a technology that is not limited by borders: it's in social media and digital services, not contained within walls and borders. China and the US, the biggest players in the field of AI, have a major impact on the systems that enter the European market. In our research we analyse how the EU approaches these major players when establishing rules.”

Discussions around AI are under way with the US; agreements with China are difficult

“There are currently a lot of discussions underway with the US over the establishment of ethical frameworks around AI. These frameworks are designed to prevent discrimination, for example, and to limit the power of large companies like Google and Facebook. How successful this will be, however, is currently open to debate.

“And making agreements with China is even harder. The country is often seen in this regard as an example of how not to do things: a government that, through AI, has obtained too much power over its citizens and uses it to oppress people. At the same time, however, these agreements with China are necessary. Not only does China have a large AI sector and lots of money, it also has access to a huge amount of data. And that data is essential for learning computer systems. Imagine you are offered a promising AI system from China that detects cancer at an early stage. Before you use this system in a hospital, however, you want to know whether the data that feeds the detection algorithm has been collected ethically.

“You see different attitudes to AI regulation between EU countries too, for that matter. Traditionally the Germans are extremely cautious around privacy issues and the sharing of personal data and, as a result, are in favour of more stringent regulations. But a country like Estonia, for example, sees opportunities to develop itself as a forward-looking digital lab and benefits more from somewhat more relaxed regulations.”

Daniel Mügge

Pressing ahead with a European AI sector

“Besides this regulation of systems coming into the EU, Brussels is trying to establish a strong AI sector in Europe. Not only are European companies easier to regulate, but this European collaboration is essential if we are to integrate the huge amounts of data that are required to develop competing AI systems. Brussels is trying to press ahead with this because developments in technology are moving fast. This won’t be easy because policy-making at European level is not a quick process.”

Companies are both lobbyists and experts

“It is a challenge for Brussels to define clearly in legal terms what is and what is not covered by AI regulation. The big tech companies will play a huge role in this. Clearly they are lobbying against restrictive regulation. But, at the same time, Brussels badly needs their technical knowledge if it is to come up with effective regulations. Otherwise you could end up with regulations that are out of kilter with the technical reality and therefore don’t really help control the issues in the field of AI. In this way, these companies will help decide which systems are welcome on the European market and which are not – a dynamic that is clearly open to question.”

Outlining future scenarios

“In our research we will start by mapping what’s happening at the moment: to what extent is the EU dependent on the actions of others for effective regulation? Based on that, we will outline future scenarios, giving the advantages and disadvantages of specific choices. We take the view that you mustn’t formulate one approach for all AI systems, or one set of rules that the EU always uses. Depending on the specific AI application, it may be advisable, for example, to sometimes follow the American approach, to make global agreements in another area, and in the case of other problems, to develop our own EU vision and approach.”

The research will start in September 2022 and will run for 5 years.




University of Amsterdam




            AIhub is supported by:


©2025.05 - Association for the Understanding of Artificial Intelligence