 

Assured and Trustworthy Human-centered AI – an AAAI Fall symposium

by Brian Hu, Ariel Kapusta and Tabitha Colter
08 December 2023




The Assured and Trustworthy Human-centered AI (ATHAI) symposium was held as part of the AAAI Fall Symposium Series in Arlington, VA from October 25-27, 2023. The symposium brought together three groups of stakeholders from industry, academia, and government to discuss issues related to AI assurance in different domains ranging from healthcare to defense. The symposium drew over 50 participants and consisted of a combination of invited keynote speakers, spotlight talks, and interactive panel discussions.

On Day 1, the symposium kicked off with a keynote by Professor Missy Cummings (George Mason University) titled “Developing Trustworthy AI: Lessons Learned from Self-driving Cars”. Missy shared important lessons learned from her time at the National Highway Traffic Safety Administration (NHTSA) and from her interactions with the autonomous vehicle industry, including that maintaining AI is just as important as creating it, and that human errors in operation don’t simply disappear with automation but can instead be replaced by human errors in coding. The first panel covered definitions related to AI assurance, providing several grounding definitions while establishing the lack of consistency across the field. The second panel covered challenges and opportunities for AI test and evaluation, highlighting gaps in current practice but offering optimism that existing evaluation strategies can be sufficient if followed. The final panel covered industry and academic perspectives on AI assurance, suggesting ideas that could be shared across industries and highlighting a potential need for regulation.

Day 2 began with a panel of experts from domains such as defense and healthcare discussing government and policy perspectives on AI assurance. The panel identified several barriers to achieving assured AI, including the lack of required standards and accepted benchmarks for assurance requirements. Fundamental questions such as “what is safe and effective enough?” were raised, along with issues relating to gaps in policy and regulation, and some possible solutions were presented. Professor Fuxin Li (Oregon State University) gave a keynote titled “From Heatmaps to Structural and Counterfactual Explanations”, which highlighted his research group’s work on explaining and debugging deep image models, towards the goal of improving the explainability of AI systems. Matt Turek (DARPA) also gave a keynote titled “Towards AI Measurement Science”, taking a historical view of how humans have measured things over time, and arguing for the need for, and possible avenues towards, an AI measurement science that would help the field move beyond standard benchmarks and advance the state-of-the-art.

Other highlights of the symposium included a series of 15 two-minute lightning talks on accepted papers, followed by a poster session on these papers. The poster session enabled lively discussion among participants, with research covering a wide range of topics, such as tools for rapid image labeling, tools to improve AI test and evaluation, metrics and methods for evaluating AI, and an assured mobile manipulation robot that can perform clinical tasks like vital sign measurement. On the final half day, participants split into two breakout groups for more in-depth discussion and exchange of ideas. One group focused on practical next steps towards AI assurance in the medical domain, in particular what can be done in the absence of regulatory change. The other group discussed assurance of foundation models and generative AI technologies such as large language models.

Overall, ATHAI brought together experts from diverse fields to begin building a shared understanding of the challenges and opportunities for AI assurance across different domains. These discussions were also extremely timely, given the President’s recent executive order on safe, secure, and trustworthy AI. Researchers from different backgrounds also offered their own insights into the community’s hopes for a future of AI that is safe, secure, assured, and explainable.

Brian Hu, Heather Frase, Brian Jalaian, Ariel Kapusta, Patrick Minot, Farshid Alambeigi, S. Farokh Atashzar, and Jie Ying Wu served as co-organizers of this symposium. This report was written by Brian Hu and Ariel Kapusta, with helpful inputs from Tabitha Colter.





Brian Hu is a staff R&D engineer and computer vision researcher at Kitware, Inc.

Ariel Kapusta is an autonomous systems engineer at the MITRE Corporation.

Tabitha Colter works in AI Assurance & Operations at the MITRE Corporation.



