
Assured and Trustworthy Human-centered AI – an AAAI Fall symposium


08 December 2023




The Assured and Trustworthy Human-centered AI (ATHAI) symposium was held as part of the AAAI Fall Symposium Series in Arlington, VA from October 25-27, 2023. The symposium brought together three groups of stakeholders from industry, academia, and government to discuss issues related to AI assurance in different domains ranging from healthcare to defense. The symposium drew over 50 participants and consisted of a combination of invited keynote speakers, spotlight talks, and interactive panel discussions.

On Day 1, the symposium kicked off with a keynote by Professor Missy Cummings (George Mason University) titled “Developing Trustworthy AI: Lessons Learned from Self-driving Cars”. Missy shared lessons learned from her time at the National Highway Traffic Safety Administration (NHTSA) and from interacting with the autonomous vehicle industry: maintaining AI is just as important as creating it, and human errors in operation don’t simply disappear with automation but can instead be replaced by human errors in coding. The first panel covered definitions related to AI assurance, providing several grounding definitions while establishing the lack of consistency across the field. The second panel covered challenges and opportunities for AI test and evaluation, highlighting gaps in current evaluation strategies while offering optimism that existing strategies can be sufficient if followed. The final panel covered industry and academic perspectives on AI assurance, suggesting ideas that could be shared across industries and highlighting a potential need for regulation.

Day 2 began with a panel of experts from domains such as defense and healthcare discussing government and policy perspectives on AI assurance. The panel identified several barriers to achieving assured AI, including the lack of required standards and accepted benchmarks for assurance requirements. Fundamental questions like “what is safe and effective enough?” were raised, along with gaps in policy and regulation, and some possible solutions were presented. Professor Fuxin Li (Oregon State University) gave a keynote titled “From Heatmaps to Structural and Counterfactual Explanations”, highlighting his research group’s work to explain and debug deep image models with the goal of improving the explainability of AI systems. Matt Turek (DARPA) also gave a keynote, titled “Towards AI Measurement Science”, which took a historical view of how humans have measured things over time and argued for the need for, and possible avenues towards, an AI measurement science that would help the field move beyond standard benchmarks and advance the state of the art.

Other highlights of the symposium included a series of 15 two-minute lightning talks on accepted papers, followed by a poster session on these papers. The poster session enabled lively discussion among participants, with research covering a wide range of topics: tools for rapid image labeling, tools to improve AI test and evaluation, metrics and methods for evaluating AI, and an assured mobile manipulation robot that can perform clinical tasks like vital sign measurement. On the final half day, participants split into two breakout groups for more in-depth discussion and exchange of ideas. One group focused on practical next steps towards AI assurance in the medical domain, considering what can be done in the absence of regulatory change. The other group discussed assurance of foundation models and generative AI technologies such as large language models.

Overall, ATHAI brought together experts from diverse fields to begin building a shared understanding of the challenges and opportunities for AI assurance across different domains. The discussions were also extremely timely, given the President’s recent executive order on safe, secure, and trustworthy AI. Researchers from a range of backgrounds offered their own perspectives on the future the community is working towards: AI that is safe, secure, assured, and explainable.

Brian Hu, Heather Frase, Brian Jalaian, Ariel Kapusta, Patrick Minot, Farshid Alambeigi, S. Farokh Atashzar, and Jie Ying Wu served as co-organizers of this symposium. This report was written by Brian Hu and Ariel Kapusta, with helpful inputs from Tabitha Colter.





Brian Hu is a staff R&D engineer and computer vision researcher at Kitware, Inc.

Ariel Kapusta is an autonomous systems engineer at the MITRE Corporation.

Tabitha Colter works in AI Assurance & Operations at the MITRE Corporation.
