Interview with Kayla Boggess: Explainable AI for more accessible and understandable technologies


by Lucy Smith
14 February 2025





In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In the second of our interviews with the 2025 cohort, we hear from Kayla Boggess, a PhD student at the University of Virginia, and find out more about her research on explainable AI.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I’m currently working on my PhD at the University of Virginia. I’m a member of the University of Virginia Link Lab, which is a multi-disciplinary lab focused on cyber-physical systems. There are individuals from departments across the University of Virginia who all work in the lab, so I’ve had the opportunity to work with other researchers in computer science, systems engineering, psychology, and even law during my time there. We work on real-world problems in robotics, autonomous vehicles, healthcare, the internet of things, and smart cities.

Specifically, I work in explainable AI. My goal is to make advanced technologies more accessible and understandable for users by increasing system transparency, building user trust, and enhancing collaboration with autonomous systems. I have worked to create concept-based natural language explanations for multi-agent reinforcement learning policies and applied these methods to domains like autonomous vehicles and robotic search and rescue.

Could you give us an overview of the research you’ve carried out so far during your PhD?

My research in explainable AI focuses on two key areas: explanations for multi-agent reinforcement learning (MARL) and human-centric explanations. First, I developed methods to generate comprehensive policy summaries that clarify agent behaviors within a MARL policy and provide concept-based natural language explanations to address user queries about agent decisions, such as “when”, “what”, and “why not”. Second, I leveraged user preferences, social concepts, subjective evaluations, and information presentation methods to create more effective explanations tailored to human needs.

Is there an aspect of your research that has been particularly interesting?

I particularly enjoy the evaluation aspect of research: the computational experiments and user studies that are run once the algorithms are developed. Since I work on the generation of explanations, there is a lot of evaluation with real-world users that needs to happen to show that the explanations we produce are actually usable and helpful. I always find it interesting how people react to the systems, what they wish the system did and didn’t do, how they would change the way the information is laid out, and what they take away from the explanation. People are much more difficult to deal with than algorithms, so I like piecing together the best way to help them and get them to collaborate with AI systems. Sometimes that’s significantly more difficult than building the system itself.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

In the future, I would like to improve explainability methods for complex systems by focusing further on the development of robust algorithms and the integration of human factors. I would like to apply my methods to more complex, real-world systems like autonomous vehicles and large language models. My goal is to help ensure that AI systems can make understandable and satisfactory decisions for a broad audience.

What made you want to study AI, and in particular the area of explainable AI?

I actually have two undergraduate degrees, one in computer science and the other in English Literature. I originally wanted to get my PhD in English, but after applying to both English and computer science programs, I found that I had better opportunities on the computer science side. I was offered a position in the first cohort of the University of Virginia Link Lab’s NRT program, and I took the offer because it would allow me to do interdisciplinary work. I wasn’t sure yet exactly what I was going to study, but I knew I wanted it to be somewhere between computer science and English. We were able to rotate through multiple advisors in my first year, so I didn’t have to commit to anything directly to begin with. My advisor approached me with an explainable AI project that she wanted to get off the ground and felt I was a good fit given my background. I enjoyed the project so much that I decided to continue working on it.

What advice would you give to someone thinking of doing a PhD in the field?

I would say that a new PhD student shouldn’t tie themselves down to one problem too quickly. Take time to explore different fields and find something you are interested in. Just because you come into your PhD thinking you are going to do one thing doesn’t mean you’ll end your PhD working on that same problem. Play to your strengths as a researcher and don’t just pick a field because you think it’s trendy. Be ready to walk away from a problem if you have to, but don’t be afraid to take on new projects and try things out. You never know who you will meet and how things will work out.

Could you tell us an interesting (non-AI related) fact about you?

I’m self-taught in sewing. In my downtime, I like to make all sorts of things like jackets, dresses, and pants. It’s something that keeps my mind engaged in problem-solving while letting me do something creative. I’ve actually won several awards for my pieces in the past.

About Kayla

Kayla Boggess is a PhD student in the Department of Computer Science at the University of Virginia, advised by Dr Lu Feng. Kayla’s research is at the intersection of explainable artificial intelligence, human-in-the-loop cyber-physical systems, and formal methods. She is interested in developing human-focused cyber-physical systems using natural language explanations for various single and multi-agent domains such as search and rescue and autonomous vehicles. Her research has led to multiple publications at top-tier computer science conferences such as IJCAI, ICCPS, and IROS. Kayla is a recipient of the prestigious NSF NRT Graduate Research Fellowship and a CPS Rising Star Award.



Lucy Smith is Senior Managing Editor for AIhub.



