In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In the second of our interviews with the 2025 cohort, we hear from Kayla Boggess, a PhD student at the University of Virginia, and find out more about her research on explainable AI.
I’m currently working on my PhD at the University of Virginia. I’m a member of the University of Virginia Link Lab, which is a multi-disciplinary lab focused on cyber-physical systems. There are individuals from departments across the University of Virginia who all work in the lab, so I’ve had the opportunity to work with other researchers in computer science, systems engineering, psychology, and even law during my time there. We work on real-world problems in robotics, autonomous vehicles, health care, the internet of things, and smart cities.
Specifically, I work in explainable AI. My goal is to make advanced technologies more accessible and understandable for users by increasing system transparency, building user trust, and enhancing collaboration with autonomous systems. I have worked to create concept-based natural language explanations for multi-agent reinforcement learning policies and applied these methods to domains like autonomous vehicles and robotic search and rescue.
My research in explainable AI focuses on two key areas: explanations for multi-agent reinforcement learning (MARL) and human-centric explanations. First, I developed methods to generate comprehensive policy summaries that clarify agent behaviors within a MARL policy and provide concept-based natural language explanations to address user queries about agent decisions such as when, what, and why not. Second, I leveraged user preferences, social concepts, subjective evaluations, and information presentation methods to create more effective explanations tailored to human needs.
I particularly enjoy the evaluation aspect of research: the computational experiments and user studies that are run once the algorithms are developed. Since I work on the generation of explanations, there is a lot of evaluation with real-world users that needs to happen to show that the explanations we produce are actually usable and helpful. I always find it interesting how people react to the systems, what they wish the system did and didn’t do, how they would change how the information is laid out, and what they take away from the explanation. People are much more difficult to deal with than algorithms, so I like to piece together the best way to help them and get them to collaborate with AI systems. Sometimes it’s significantly more difficult than building the system itself.
In the future, I would like to improve explainability methods for complex systems by focusing further on the development of robust algorithms and the integration of human factors. I would like to apply my methods to more complex, real-world systems like autonomous vehicles and large language models. My goal is to help ensure that understandable and satisfactory decisions can be made by AI systems for a broad audience.
I actually have two undergraduate degrees, one in computer science and the other in English literature. I originally wanted to get my PhD in English, but after applying to both English and computer science programs, I found that I had better opportunities on the computer science side. I was offered a position in the first cohort of the University of Virginia Link Lab’s NRT program, and I took the offer because it would allow me to do interdisciplinary work. I wasn’t particularly sure what I was going to study yet, but I knew I wanted it to be somewhere between computer science and English. We were able to rotate through multiple advisors in my first year, so I didn’t have to commit to anything right away. My advisor approached me with an explainable AI project that she wanted to get off the ground and felt that, given my background, I was a good fit. I enjoyed the project so much that I decided to continue working on it.
I would say that a new PhD student shouldn’t tie themselves down to one problem too quickly. Take time to explore different fields and find something you are interested in. Just because you come into your PhD thinking you are going to do one thing doesn’t mean you’ll end your PhD working on that same problem. Play to your strengths as a researcher and don’t just pick a field because you think it’s trendy. Be ready to walk away from a problem if you have to, but don’t be afraid to take on new projects and try things out. You never know who you will meet and how things will work out.
I’m self-taught in sewing. In my down time, I like to make all sorts of things like jackets, dresses, and pants. It’s something that keeps my mind engaged in problem-solving while letting me do something creative. I’ve actually won several awards for my pieces in the past.
Kayla Boggess is a PhD student in the Department of Computer Science at the University of Virginia, advised by Dr Lu Feng. Kayla’s research is at the intersection of explainable artificial intelligence, human-in-the-loop cyber-physical systems, and formal methods. She is interested in developing human-focused cyber-physical systems using natural language explanations for various single and multi-agent domains such as search and rescue and autonomous vehicles. Her research has led to multiple publications at top-tier computer science conferences such as IJCAI, ICCPS, and IROS. Kayla is a recipient of the prestigious NSF NRT Graduate Research Fellowship and a CPS Rising Star Award.