Assured and Trustworthy Human-centered AI – an AAAI Fall symposium


by Brian Hu, Ariel Kapusta and Tabitha Colter
08 December 2023




The Assured and Trustworthy Human-centered AI (ATHAI) symposium was held as part of the AAAI Fall Symposium Series in Arlington, VA from October 25-27, 2023. The symposium brought together three groups of stakeholders from industry, academia, and government to discuss issues related to AI assurance in different domains ranging from healthcare to defense. The symposium drew over 50 participants and consisted of a combination of invited keynote speakers, spotlight talks, and interactive panel discussions.

On Day 1, the symposium kicked off with a keynote by Professor Missy Cummings (George Mason University) titled “Developing Trustworthy AI: Lessons Learned from Self-driving Cars”. Missy shared important lessons from her time at the National Highway Traffic Safety Administration (NHTSA) and her interactions with the autonomous vehicle industry, including that maintaining AI is just as important as creating it, and that human errors in operation don’t simply disappear with automation; instead, they can be replaced by human errors in coding. The first panel covered definitions related to AI assurance, providing several grounding definitions while establishing the lack of consistency across the field. The second panel covered challenges and opportunities for AI test and evaluation, highlighting gaps in current evaluation strategies while offering optimism that existing strategies can be sufficient if followed. The final panel covered industry and academic perspectives on AI assurance, suggesting ideas that could be shared across industries and highlighting a potential need for regulation.

Day 2 began with a panel of experts from domains like defense and healthcare discussing government and policy perspectives on AI assurance. The panel identified several barriers to achieving assured AI, including the lack of required standards and accepted benchmarks for assurance requirements. Fundamental questions like “what is safe and effective enough?” were raised, along with issues relating to policy and regulation gaps, and some possible solutions were presented. Professor Fuxin Li (Oregon State University) gave a keynote titled “From Heatmaps to Structural and Counterfactual Explanations”, which highlighted his research group’s work to explain and debug deep image models, with the goal of improving the explainability of AI systems. Matt Turek (DARPA) also gave a keynote titled “Towards AI Measurement Science”, which took a historical look at how humans have measured things over time and argued for the creation of an AI measurement science to help the field move beyond standard benchmarks and advance the current state of the art.

Other highlights of the symposium included a series of 15 two-minute lightning talks on accepted papers, followed by a poster session. The poster session enabled lively discussion among participants, with research covering a wide range of topics such as tools for rapid image labeling, tools to improve AI test and evaluation, metrics and methods for evaluating AI, and an assured mobile manipulation robot that can perform clinical tasks like vital sign measurement. On the final half day, participants split into two breakout groups for more in-depth discussion and exchange of ideas. One group focused on practical next steps towards AI assurance in the medical domain, in particular what can be done in the absence of regulatory change. The other group discussed assurance of foundation models and generative AI technologies such as large language models.

Overall, ATHAI brought together experts from diverse fields to begin building a shared understanding of the challenges and opportunities for AI assurance across different domains. These discussions were also extremely timely, given the President’s recent executive order on safe, secure, and trustworthy AI. Researchers from different backgrounds also shared their insights into the community’s hopes for a future of AI that is safe, secure, assured, and explainable.

Brian Hu, Heather Frase, Brian Jalaian, Ariel Kapusta, Patrick Minot, Farshid Alambeigi, S. Farokh Atashzar, and Jie Ying Wu served as co-organizers of this symposium. This report was written by Brian Hu and Ariel Kapusta, with helpful inputs from Tabitha Colter.





Brian Hu is a staff R&D engineer and computer vision researcher at Kitware, Inc.

Ariel Kapusta is an autonomous systems engineer at the MITRE Corporation.

Tabitha Colter works in AI Assurance & Operations at the MITRE Corporation.



