AIhub.org
 

The TAILOR roadmap for trustworthy AI


21 October 2022




“Legislation and standards must be based on knowledge. This is where we researchers can contribute, providing information about the current forefront of research, and making well-grounded decisions possible. It’s important that experts have the opportunity to influence questions of this type,” says Fredrik Heintz. Photo credit: Anna Nilsen

By Anders Törneholm

Artificial intelligence (AI) plays an ever-larger part in our everyday lives, and many researchers believe that what we have seen so far is only the beginning. However, AI must be trustworthy in all situations. Linköping University is coordinating TAILOR, an EU project that has drawn up a research-based roadmap intended to guide research funding bodies and decision-makers towards the trustworthy AI of the future.

“The development of artificial intelligence is in its infancy. When we look back in 50 years at what we are doing today, we will find it pretty primitive. In other words, most of the field remains to be discovered. That’s why it’s important to lay the foundation of trustworthy AI now,” says Fredrik Heintz, professor of artificial intelligence at LiU, and coordinator of the TAILOR project.

TAILOR is one of six research networks set up by the EU to strengthen research capacity and develop the AI of the future. TAILOR is laying the foundation of trustworthy AI by drawing up a framework, guidelines and a specification of the needs of the AI research community. “TAILOR” is an abbreviation of Foundations of Trustworthy AI – integrating, learning, optimisation and reasoning.

Three criteria

The roadmap now presented by TAILOR is the first step towards standardisation; the idea is that decision-makers and research funding bodies can gain insight into what is required to develop trustworthy AI. Fredrik Heintz believes it is important to show that many research problems must be solved before this can be achieved.

The researchers have defined three criteria for trustworthy AI: it must conform to laws and regulations, it must satisfy several ethical principles, and its implementation must be robust and safe. Fredrik Heintz points out that these criteria pose major challenges, in particular the implementation of the ethical principles.

“Take justice, for example. Does this mean an equal distribution of resources or that all actors receive the resources needed to bring them all to the same level? We are facing major long-term questions, and it will take time before they are answered. Remember – the definition of justice has been debated by philosophers and scholars for hundreds of years,” says Fredrik Heintz.

Human-centred

The project will focus on large comprehensive research questions, and will attempt to find standards that all who work with AI can adopt. But Fredrik Heintz is convinced that we can only achieve this if basic research into AI is given priority.

“People often regard AI as a technology issue, but what’s really important is whether we gain societal benefit from it. If we are to obtain AI that can be trusted and that functions well in society, we must make sure that it is centred on people,” says Fredrik Heintz.

Many of the legislative proposals within the EU and its member states are drafted by legal specialists. But Fredrik Heintz believes they lack expert knowledge of AI, which is a problem.

“Legislation and standards must be based on knowledge. This is where we researchers can contribute, providing information about the current forefront of research, and making well-grounded decisions possible. It’s important that experts have the opportunity to influence questions of this type,” says Fredrik Heintz.

The complete roadmap is available at: Strategic Research and Innovation Roadmap of trustworthy AI.

