AIhub.org
 

Pulling back the curtain on neural networks


28 January 2022




Alan Fern and colleague. Photo credit: Johanna Carson.

By Steve Frandzel

When researchers at Oregon State University created new tools to evaluate the decision-making algorithms of an advanced artificial intelligence system, study participants assigned to use them did, indeed, find flaws in the AI’s reasoning. But once investigators instructed participants to use the tools in a more structured and rigorous way, the number of bugs they discovered increased markedly.

“That surprised us a bit, and it showed that having good tools for visualizing and interfacing with AI systems is important, but it’s only part of the story,” said Alan Fern, professor of computer science at Oregon State.

Since 2017, Fern has led a team of eight computer scientists funded by a four-year, $7.1 million grant from the Defense Advanced Research Projects Agency to develop explainable artificial intelligence, or XAI — algorithms through which humans can understand, build trust in, and manage the emerging generation of artificial intelligence systems.

Dramatic advancements in the artificial neural networks, or ANNs, at the heart of advanced AI have created a wave of powerful applications for transportation, defense, security, medicine, and other fields. ANNs comprise tens of thousands, even millions, of individual processing units. Despite their dazzling ability to analyze mountains of data en route to learning and solving problems, ANNs operate as “black boxes” whose outputs are unaccompanied by decipherable explanations or context. Their opacity baffles even those who design them, yet understanding an ANN’s “thought processes” is critical for recognizing and correcting defects.

For trivial tasks — choosing movies or online shopping — explanations don’t much matter. But when stakes are high, they’re vital. “When errors can have serious consequences, like for piloting aircraft or medical diagnoses, you don’t want to blindly trust an AI’s decisions,” Fern said. “You want an explanation; you want to know that the system is doing the right things for the right reasons.”

In one cautionary example, team member and Assistant Professor of Computer Science Fuxin Li developed an XAI algorithm that revealed serious shortcomings in a neural network trained to recognize COVID-19 from chest X-rays. It turned out that the ANN was using, among other features, a large letter “R,” which simply identified the right side of the image, in its classification of the X-rays. “With a black-box network, you’d never know that was the case,” he said.
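The article doesn't specify which attribution method Li's algorithm used, but a common way to expose this kind of spurious cue is occlusion sensitivity: mask each region of the image in turn and measure how much the model's score drops. A minimal sketch (with a toy stand-in for the classifier, since the real network isn't available here):

```python
import numpy as np

def occlusion_map(image, predict, patch=8, stride=8):
    """Crude occlusion-sensitivity map: zero out each patch and record
    how much the model's class score drops. `predict` maps an HxW
    image to a scalar score; a big drop means the patch mattered."""
    h, w = image.shape
    base = predict(image)
    heat = np.zeros((h // stride, w // stride))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = 0.0  # occlude one patch
            heat[i, j] = base - predict(masked)     # score drop = importance
    return heat

# Toy "model" that (wrongly) keys on a bright marker in the corner --
# analogous to the letter "R" on the chest X-rays.
def toy_predict(img):
    return img[0:8, 56:64].mean()

xray = np.zeros((64, 64))
xray[0:8, 56:64] = 1.0  # the spurious "R" marker
heat = occlusion_map(xray, toy_predict)
hottest = np.unravel_index(heat.argmax(), heat.shape)  # points at the marker
```

On a real network, the heat map would light up over the "R" rather than over lung tissue, which is exactly the kind of red flag Li's tool surfaced.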

To pull back the curtain on neural networks, Fern and his colleagues created Tug of War, a simplified version of the popular real-time strategy game StarCraft II that involves multiple ANNs. In each game, two competing AI “agents” deploy their forces and attempt to destroy the opposing bases.

In one study, human observers trained to evaluate the game’s decision-making process watched replays of multiple games. At any time, they could freeze the action to examine the AI’s choices using the explanation user interface tools. The interface displays information such as the actions an agent considered; the predicted outcomes for each considered action; the actions taken; and the actual outcomes. For example, large discrepancies between a predicted and actual outcome — or suspect strategic choices — indicate errors in the AI’s reasoning.
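The core check the interface supports can be sketched in a few lines. The data layout and threshold below are illustrative assumptions, not the study's actual tooling; the idea is simply to flag decision points where an agent's predicted outcome diverges sharply from what actually happened:

```python
def flag_suspect_decisions(replay, threshold=0.3):
    """replay: list of decision points, each with the agent's chosen
    action, its predicted outcome (e.g. estimated win probability),
    and the actual outcome. Returns the points worth freezing and
    inspecting in the explanation interface."""
    return [
        step for step in replay
        if abs(step["predicted"] - step["actual"]) > threshold
    ]

replay = [
    {"action": "build base",  "predicted": 0.70, "actual": 0.65},
    {"action": "repair base", "predicted": 0.80, "actual": 0.20},
    {"action": "mass attack", "predicted": 0.55, "actual": 0.60},
]
suspects = flag_suspect_decisions(replay)  # only "repair base" stands out
```

A large gap like the one on the “repair base” step is the cue for a reviewer to pause the replay and dig into the agent's reasoning at that point.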

The aim is to find bugs in the AI, particularly any dubious decisions that lead to losing a game. If, for instance, the ANN believes that damaged bases can sometimes be repaired (they can’t), then decisions based on that belief may be flawed. “The interface allows humans who aren’t AI experts to spot such common-sense violations and other problems,” Fern said.

At first, the reviewers were free to explore the AI using whatever ad hoc approach they chose, which resulted in a high variance of success rates across study participants. “That suggests that a structured process is an important component for being successful when using the tools,” Fern said.

So, the researchers added an after-action review (AAR) to the study. An AAR is a well-established military protocol for analyzing, after a mission, what happened and why. Using an AAR designed specifically to assess AI, study participants identified far more bugs with greater consistency. “The results impressed the people at DARPA, which extended our funding for an additional year,” Fern said.

Throughout the project, the researchers also emphasized the human factors of XAI — another reason for DARPA’s continued interest. “When you’re explaining AI, you’re explaining it to a person, and you have to be sure they’re getting the greatest benefit from those explanations,” said team member Margaret Burnett, Distinguished Professor of computer science, who noted that attention to humans guided the development of the interface tools. “Explainable AI is not something you produce or consume. It’s an educational experience, and the bottom line is that we need to focus on helping the humans to solve problems.”

As they complete their work during the DARPA contract extension, Fern and Burnett, two of the original grantees, are seeking partners with whom to further validate the strategy of applying after-action reviews to the explainable AI interface tools.

In addition to collaborations with government and the military, they’re interested in pursuing connections in other important AI application domains, including agriculture, energy systems, and robotics. Fern and Burnett, along with 11 colleagues at Oregon State, recently became involved with a federally funded, $20 million AI institute for agriculture that will tackle some of the industry’s greatest challenges. Explainable AI will be part of the institute’s work.




Oregon State University

©2026 Association for the Understanding of Artificial Intelligence