AIhub.org
 

The TAILOR roadmap for trustworthy AI


21 October 2022




“Legislation and standards must be based on knowledge. This is where we researchers can contribute, providing information about the current forefront of research, and making well-grounded decisions possible. It’s important that experts have the opportunity to influence questions of this type,” says Fredrik Heintz. Photo credit: Anna Nilsen

By Anders Törneholm

Artificial intelligence (AI) plays an ever-greater role in our everyday lives, and many researchers believe that what we have seen so far is only the beginning. However, AI must be trustworthy in all situations. Linköping University is coordinating TAILOR, an EU project that has drawn up a research-based roadmap intended to guide research funding bodies and decision-makers towards the trustworthy AI of the future.

“The development of artificial intelligence is in its infancy. When, in 50 years’ time, we look back at what we are doing today, we will find it pretty primitive. In other words, most of the field remains to be discovered. That’s why it’s important to lay the foundations of trustworthy AI now,” says Fredrik Heintz, professor of artificial intelligence at LiU and coordinator of the TAILOR project.

TAILOR is one of six research networks set up by the EU to strengthen research capacity and develop the AI of the future. TAILOR is laying the foundation of trustworthy AI by drawing up a framework, guidelines, and a specification of the needs of the AI research community. “TAILOR” is an abbreviation of Foundations of Trustworthy AI – integrating, learning, optimisation and reasoning.

Three criteria

The roadmap now presented by TAILOR is the first step towards standardisation, giving decision-makers and research funding bodies insight into what is required to develop trustworthy AI. Fredrik Heintz believes it is important to show that many research problems remain to be solved before this can be achieved.

The researchers have defined three criteria for trustworthy AI: it must conform to laws and regulations, it must satisfy several ethical principles, and its implementation must be robust and safe. Fredrik Heintz points out that these criteria pose major challenges, in particular the implementation of the ethical principles.

“Take justice, for example. Does this mean an equal distribution of resources or that all actors receive the resources needed to bring them all to the same level? We are facing major long-term questions, and it will take time before they are answered. Remember – the definition of justice has been debated by philosophers and scholars for hundreds of years,” says Fredrik Heintz.

Human centred

The project focuses on large, comprehensive research questions, and attempts to find standards that all who work with AI can adopt. But Fredrik Heintz is convinced that this can only be achieved if basic research into AI is given priority.

“People often regard AI as a technology issue, but what’s really important is whether we gain societal benefit from it. If we are to obtain AI that can be trusted and that functions well in society, we must make sure that it is centred on people,” says Fredrik Heintz.

Many of the legal proposals drawn up within the EU and its member states are written by legal specialists. But Fredrik Heintz believes that these specialists often lack expert knowledge of AI, which is a problem.

“Legislation and standards must be based on knowledge. This is where we researchers can contribute, providing information about the current forefront of research, and making well-grounded decisions possible. It’s important that experts have the opportunity to influence questions of this type,” says Fredrik Heintz.

The complete roadmap is available at: Strategic Research and Innovation Roadmap of trustworthy AI.




Linköping University







©2025.05 - Association for the Understanding of Artificial Intelligence