The TAILOR roadmap for trustworthy AI


21 October 2022




Photo: Fredrik Heintz. “Legislation and standards must be based on knowledge. This is where we researchers can contribute, providing information about the current forefront of research, and making well-grounded decisions possible. It’s important that experts have the opportunity to influence questions of this type,” says Fredrik Heintz. Photo credit: Anna Nilsen

By Anders Törneholm

Artificial intelligence (AI) is playing an ever larger part in our everyday lives, and many researchers believe that what we have seen so far is only the beginning. However, AI must be trustworthy in all situations. Linköping University is coordinating TAILOR, an EU project that has drawn up a research-based roadmap intended to guide research funding bodies and decision-makers towards the trustworthy AI of the future.

“The development of artificial intelligence is in its infancy. In 50 years, when we look back at what we are doing today, we will find it pretty primitive. In other words, most of the field remains to be discovered. That’s why it’s important to lay the foundation of trustworthy AI now,” says Fredrik Heintz, professor of artificial intelligence at LiU and coordinator of the TAILOR project.

TAILOR is one of six research networks set up by the EU to strengthen research capacity and develop the AI of the future. TAILOR is laying the foundation of trustworthy AI by drawing up a framework, guidelines and a specification of the needs of the AI research community. “TAILOR” is an abbreviation of Foundations of Trustworthy AI – Integrating Reasoning, Learning and Optimisation.

Three criteria

The roadmap now presented by TAILOR is the first step towards standardisation; the idea is that decision-makers and research funding bodies can gain insight into what is required to develop trustworthy AI. Fredrik Heintz believes it is important to show that many research problems must be solved before this can be achieved.

The researchers have defined three criteria for trustworthy AI: it must conform to laws and regulations, it must satisfy several ethical principles, and its implementation must be robust and safe. Fredrik Heintz points out that these criteria pose major challenges, in particular the implementation of the ethical principles.

“Take justice, for example. Does this mean an equal distribution of resources or that all actors receive the resources needed to bring them all to the same level? We are facing major long-term questions, and it will take time before they are answered. Remember – the definition of justice has been debated by philosophers and scholars for hundreds of years,” says Fredrik Heintz.

Human-centred

The project will focus on large, comprehensive research questions, and will attempt to find standards that all who work with AI can adopt. But Fredrik Heintz is convinced that this can be achieved only if basic research into AI is given priority.

“People often regard AI as a technology issue, but what’s really important is whether we gain societal benefit from it. If we are to obtain AI that can be trusted and that functions well in society, we must make sure that it is centred on people,” says Fredrik Heintz.

Many of the legislative proposals within the EU and its member states are written by legal specialists. But Fredrik Heintz believes that these specialists often lack expert knowledge of AI, and that this is a problem.

“Legislation and standards must be based on knowledge. This is where we researchers can contribute, providing information about the current forefront of research, and making well-grounded decisions possible. It’s important that experts have the opportunity to influence questions of this type,” says Fredrik Heintz.

The complete roadmap is available at: Strategic Research and Innovation Roadmap of trustworthy AI.



