Developments in the field of artificial intelligence (AI) are moving quickly. The EU is working hard to establish rules around AI and to determine which systems are welcome and which are not. But how does the EU do this when the biggest players, the US and China, often have different ethical views? Political economist Daniel Mügge and his team will investigate how the EU conducts its ‘AI diplomacy’ and will sketch potential future scenarios.
“Our research is essentially about regulation around AI”, says political economist Daniel Mügge. “About how the EU approaches this issue with the major players globally and potential future scenarios.” We talked to Mügge about his new project, for which he has been awarded an NWO Vici grant.
“Regulation around AI is necessary due to concerns over privacy and autonomy, potential discrimination against individuals and unwanted military applications, for example. But how rules can be established, and how effective they will be, depends on developments in the rest of the world. Because, clearly, AI is a technology that knows no borders: it’s in social media and digital services, not contained within walls or national boundaries. China and the US, the biggest players in the field of AI, have a major impact on the systems that enter the European market. In our research we analyse how the EU approaches these major players when establishing rules.”
“There are currently a lot of discussions underway with the US over the establishment of ethical frameworks around AI. These frameworks are designed to prevent discrimination, for example, and to limit the power of large companies like Google and Facebook. How successful this will be, however, is currently open to debate.
“And making agreements with China is even harder. The country is often seen in this regard as an example of how not to do things: a government that, through AI, has obtained too much power over its citizens and uses it to oppress people. At the same time, however, agreements with China are necessary. Not only does China have a large AI sector and lots of money, it also has access to a huge amount of data. And that data is essential for self-learning computer systems. Imagine you are offered a promising AI system from China that detects cancer at an early stage. Before you use this system in a hospital, however, you want to know whether the data that feeds the detection algorithm has been collected ethically.
“You see different attitudes to AI regulation among EU countries too, for that matter. Traditionally the Germans are extremely cautious around privacy issues and the sharing of personal data and, as a result, are in favour of more stringent regulations. But a country like Estonia, for example, sees opportunities to develop itself as a forward-looking digital lab and benefits more from somewhat more relaxed regulations.”
“Besides this regulation of systems coming into the EU, Brussels is trying to establish a strong AI sector in Europe. Not only are European companies easier to regulate, but this European collaboration is essential if we are to integrate the huge amounts of data that are required to develop competitive AI systems. Brussels is trying to press ahead with this because developments in technology are moving fast. That won’t be easy, though, since policy-making at European level is not a quick process.”
“It is a challenge for Brussels to define clearly in legal terms what is and what is not covered by AI regulation. The big tech companies will play a huge role in this. Clearly they are lobbying against restrictive regulation. But, at the same time, Brussels badly needs their technical knowledge if it is to come up with effective regulations. Otherwise you could end up with regulations that are out of kilter with the technical reality and therefore don’t really help control the issues in the field of AI. In this way, these companies will help decide which systems are welcome on the European market and which are not – a dynamic that is clearly open to question.”
“In our research we will start by mapping what’s happening at the moment: to what extent is the EU dependent on the actions of others for effective regulation? Based on that, we will outline future scenarios, giving the advantages and disadvantages of specific choices. We take the view that you mustn’t formulate one approach for all AI systems, or one set of rules that the EU always uses. Depending on the specific AI application, it may be advisable, for example, to follow the American approach in one area, to seek global agreements in another, and, for yet other problems, to develop the EU’s own vision and approach.”
The research will start in September 2022 and will run for 5 years.