AIhub.org

Pulling back the curtain on neural networks

28 January 2022




Alan Fern and colleague. Photo credit: Johanna Carson.

By Steve Frandzel

When researchers at Oregon State University created new tools to evaluate the decision-making algorithms of an advanced artificial intelligence system, study participants assigned to use them did, indeed, find flaws in the AI’s reasoning. But once investigators instructed participants to use the tools in a more structured and rigorous way, the number of bugs they discovered increased markedly.

“That surprised us a bit, and it showed that having good tools for visualizing and interfacing with AI systems is important, but it’s only part of the story,” said Alan Fern, professor of computer science at Oregon State.

Since 2017, Fern has led a team of eight computer scientists funded by a four-year, $7.1 million grant from the Defense Advanced Research Projects Agency to develop explainable artificial intelligence, or XAI — algorithms through which humans can understand, build trust in, and manage the emerging generation of artificial intelligence systems.

Dramatic advancements in the artificial neural networks, or ANNs, at the heart of advanced AI have created a wave of powerful applications for transportation, defense, security, medicine, and other fields. ANNs comprise tens of thousands, even millions, of individual processing units. Despite their dazzling ability to analyze mountains of data en route to learning and solving problems, ANNs operate as “black boxes” whose outputs are unaccompanied by decipherable explanations or context. Their opacity baffles even those who design them, yet understanding an ANN’s “thought processes” is critical for recognizing and correcting defects.

For trivial tasks — choosing movies or online shopping — explanations don’t much matter. But when stakes are high, they’re vital. “When errors can have serious consequences, like for piloting aircraft or medical diagnoses, you don’t want to blindly trust an AI’s decisions,” Fern said. “You want an explanation; you want to know that the system is doing the right things for the right reasons.”

In one cautionary example, team member and Assistant Professor of Computer Science Fuxin Li developed an XAI algorithm that revealed serious shortcomings in a neural network trained to recognize COVID-19 from chest X-rays. It turned out that, among other features, the ANN was relying on a large letter “R” that simply marked the right side of the image when classifying the X-rays. “With a black-box network, you’d never know that was the case,” he said.

To pull back the curtain on neural networks, Fern and his colleagues created Tug of War, a simplified version of the popular real-time strategy game StarCraft II that involves multiple ANNs. In each game, two competing AI “agents” deploy their forces and attempt to destroy each other’s bases.

In one study, human observers trained to evaluate the game’s decision-making watched replays of multiple games. At any time, they could freeze the action and examine the AI’s choices using the explanation user interface tools. The interface displays information such as the actions an agent considered, the predicted outcome of each of those actions, the actions actually taken, and the actual outcomes. Large discrepancies between predicted and actual outcomes, or suspect strategic choices, point to errors in the AI’s reasoning.
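As a rough illustration of that discrepancy check, the sketch below shows how such flagging might work in code. It is a hypothetical Python example, not part of the Oregon State interface; the DecisionPoint structure, its field names, and the threshold are assumptions made purely for illustration.

# Hypothetical sketch: flag decision points whose predicted outcome
# diverges sharply from the outcome actually observed in the replay.
from dataclasses import dataclass

@dataclass
class DecisionPoint:
    game_time: int            # game step at which the agent acted
    action: str               # action the agent chose
    predicted_outcome: float  # agent's predicted score for that action
    actual_outcome: float     # score observed after the action played out

def flag_suspect_decisions(decisions, threshold=0.3):
    # Return decision points worth a closer look by a human reviewer:
    # those where prediction and reality differ by more than the threshold.
    return [d for d in decisions
            if abs(d.predicted_outcome - d.actual_outcome) > threshold]

# Example with made-up numbers: only the first decision would be flagged.
replay = [
    DecisionPoint(120, "attack_enemy_base", predicted_outcome=0.8, actual_outcome=0.2),
    DecisionPoint(240, "reinforce_own_base", predicted_outcome=0.5, actual_outcome=0.45),
]
for d in flag_suspect_decisions(replay):
    print(f"step {d.game_time}: '{d.action}' predicted {d.predicted_outcome}, got {d.actual_outcome}")

In the study itself, this kind of information was presented visually in the interface rather than as code, but the underlying idea is the same: compare what the agent expected with what actually happened.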

The aim is to find bugs in the AI, particularly any dubious decisions that lead to losing a game. If, for instance, the ANN believes that damaged bases can sometimes be repaired (they can’t), then decisions based on that belief may be flawed. “The interface allows humans who aren’t AI experts to spot such common-sense violations and other problems,” Fern said.

At first, the reviewers were free to explore the AI using whatever ad hoc approach they chose, which led to widely varying success rates across study participants. “That suggests that a structured process is an important component for being successful when using the tools,” Fern said.

So, the researchers added an after-action review (AAR) to the study. An AAR is a well-established military protocol for analyzing, after a mission, what happened and why. Using an AAR designed specifically to assess AI, study participants identified far more bugs with greater consistency. “The results impressed the people at DARPA, which extended our funding for an additional year,” Fern said.

Throughout the project, the researchers also emphasized the human factors of XAI — another reason for DARPA’s continued interest. “When you’re explaining AI, you’re explaining it to a person, and you have to be sure they’re getting the greatest benefit from those explanations,” said team member Margaret Burnett, Distinguished Professor of computer science, who noted that attention to humans guided the development of the interface tools. “Explainable AI is not something you produce or consume. It’s an educational experience, and the bottom line is that we need to focus on helping the humans to solve problems.”

As they complete their work during the DARPA contract extension, Fern and Burnett, two of the original grantees, are seeking partners with whom to further validate the strategy of applying after-action reviews to the explainable AI interface tools.

In addition to collaborations with government and the military, they’re interested in pursuing connections in other important AI application domains, including agriculture, energy systems, and robotics. Fern and Burnett, along with 11 colleagues at Oregon State, recently became involved with a federally funded, $20 million AI institute for agriculture that will tackle some of the industry’s greatest challenges. Explainable AI will be part of the institute’s work.







