
Assured and Trustworthy Human-centered AI – an AAAI Fall symposium


by Brian Hu, Ariel Kapusta and Tabitha Colter
08 December 2023




The Assured and Trustworthy Human-centered AI (ATHAI) symposium was held as part of the AAAI Fall Symposium Series in Arlington, VA from October 25-27, 2023. The symposium brought together stakeholders from industry, academia, and government to discuss issues related to AI assurance in domains ranging from healthcare to defense. It drew over 50 participants and featured a combination of invited keynote speakers, spotlight talks, and interactive panel discussions.

On Day 1, the symposium kicked off with a keynote by Professor Missy Cummings (George Mason University) titled “Developing Trustworthy AI: Lessons Learned from Self-driving Cars”. Missy shared important lessons from her time at the National Highway Traffic Safety Administration (NHTSA) and from her interactions with the autonomous vehicle industry, including that maintaining AI is just as important as creating it, and that human errors in operation don’t simply disappear with automation but can instead be replaced by human errors in coding. The first panel addressed definitions of AI assurance, offering several grounding definitions while establishing the lack of consistency across the field. The second panel covered challenges and opportunities for AI test and evaluation, highlighting gaps in current evaluation strategies while offering optimism that existing strategies can be sufficient if followed. The final panel covered industry and academic perspectives on AI assurance, suggesting ideas that could be shared across industries and highlighting a potential need for regulation.

Day 2 began with a panel of experts from domains such as defense and healthcare discussing government and policy perspectives on AI assurance. The panel identified several barriers to achieving assured AI, including the lack of required standards and accepted benchmarks for assurance requirements. Fundamental questions such as “what is safe and effective enough?” were raised, along with issues relating to gaps in policy and regulation, and some possible solutions were presented. Professor Fuxin Li (Oregon State University) gave a keynote titled “From Heatmaps to Structural and Counterfactual Explanations”, highlighting his research group’s work on explaining and debugging deep image models, with the goal of improving the explainability of AI systems. Matt Turek (DARPA) also gave a keynote, “Towards AI Measurement Science”, which took a historical view of how humans have measured things over time and argued for the need for, and possible avenues toward, an AI measurement science that would help the field move beyond standard benchmarks and advance the state of the art.

Other highlights of the symposium included a series of 15 two-minute lightning talks on accepted papers, followed by a poster session on these papers. The poster session enabled lively discussion among participants, with research covering a wide range of topics, such as tools for rapid image labeling, tools to improve AI test and evaluation, metrics and methods for evaluating AI, and an assured mobile manipulation robot that can perform clinical tasks like vital sign measurement. On the final half day, participants split into two breakout groups for more in-depth discussion and exchange of ideas. One group focused on practical next steps towards AI assurance in the medical domain, particularly what can be done in the absence of regulatory change. The other group discussed assurance of foundation models and generative AI technologies such as large language models.

Overall, ATHAI brought together experts from diverse fields to begin building a shared understanding of the challenges and opportunities for AI assurance across different domains. These discussions were especially timely given the President’s recent executive order on safe, secure, and trustworthy AI. Researchers from different backgrounds also offered insights into the community’s hopes for a future of AI that is safe, secure, assured, and explainable.

Brian Hu, Heather Frase, Brian Jalaian, Ariel Kapusta, Patrick Minot, Farshid Alambeigi, S. Farokh Atashzar, and Jie Ying Wu served as co-organizers of this symposium. This report was written by Brian Hu and Ariel Kapusta, with helpful inputs from Tabitha Colter.





Brian Hu is a staff R&D engineer and computer vision researcher at Kitware, Inc.

Ariel Kapusta is an autonomous systems engineer at the MITRE Corporation.

Tabitha Colter works in AI Assurance & Operations at the MITRE Corporation.



