Interview with Amar Halilovic: Explainable AI for robotics


10 June 2025




In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In this latest interview, we hear from Amar Halilovic, a PhD student at Ulm University.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I’m currently a PhD student at Ulm University in Germany, where I focus on explainable AI for robotics. My research investigates how robots can generate explanations of their actions in a way that aligns with human preferences and expectations, particularly in navigation tasks.

Could you give us an overview of the research you’ve carried out so far during your PhD?

So far, I’ve developed a framework for environmental explanations of robot actions and decisions, especially when things go wrong. I have explored black-box and generative approaches to generating textual and visual explanations. I have also been working on planning different explanation attributes, such as timing, representation, and duration. Lately, I’ve been developing methods for dynamically selecting the best explanation strategy depending on the context and user preferences.
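To make the idea of planning explanation attributes a little more concrete, here is a minimal illustrative sketch in Python. This is not Amar's actual framework; the attribute names, context fields, and the simple rule-based selection logic are all assumptions made for illustration only.

```python
from dataclasses import dataclass

# Hypothetical attribute set; a real framework would use richer
# attributes and values than these illustrative ones.
@dataclass
class ExplanationStrategy:
    timing: str          # e.g. "before_action", "after_failure"
    representation: str  # e.g. "textual", "visual"
    duration: str        # e.g. "brief", "detailed"

def select_strategy(context: dict, preferences: dict) -> ExplanationStrategy:
    """Toy rule-based selector (an assumption, not the published method):
    urgent situations get brief explanations, failures are explained
    after the fact, and the user's preferred representation wins."""
    urgent = context.get("urgency", "low") == "high"
    failed = context.get("failure", False)
    return ExplanationStrategy(
        timing="after_failure" if failed else "before_action",
        representation=preferences.get("representation", "textual"),
        duration="brief" if urgent else "detailed",
    )

# Example: a failed navigation action in an urgent situation,
# for a user who prefers visual explanations.
print(select_strategy({"urgency": "high", "failure": True},
                      {"representation": "visual"}))
```

The point of the sketch is only to show the shape of the problem: the robot composes an explanation from several attributes, and context and user preferences jointly determine which combination it chooses.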

Is there an aspect of your research that has been particularly interesting?

Yes, I find it fascinating how people interpret robot behavior differently depending on the urgency or failure context. It’s been especially rewarding to study how explanation expectations shift in different situations and how we can tailor explanation timing and content accordingly.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

Next, I’ll be extending the framework to incorporate real-time adaptation, enabling robots to learn from user feedback and adjust their explanations on the fly. I’m also planning more user studies to validate the effectiveness of these explanations in real-world human-robot interaction settings.

Amar with his poster at the AAAI/SIGAI Doctoral Consortium at AAAI 2025.

What made you want to study AI, and, in particular, explainable robot navigation?

I’ve always been interested in the intersection of humans and machines. During my studies, I realized that making AI systems understandable isn’t just a technical challenge—it’s key to trust and usability. Robot navigation struck me as a particularly compelling area because decisions are spatial and visual, making explanations both challenging and impactful.

What advice would you give to someone thinking of doing a PhD in the field?

Pick a topic that genuinely excites you—you’ll be living with it for several years! Also, build a support network of mentors and peers. It’s easy to get lost in the technical work, but collaboration and feedback are vital.

Could you tell us an interesting (non-AI related) fact about you?

I have lived and studied in four different countries.

About Amar

Amar is a PhD student at the Institute of Artificial Intelligence of Ulm University in Germany. His research focuses on Explainable Artificial Intelligence (XAI) in Human-Robot Interaction (HRI), particularly how robots can generate context-sensitive explanations for navigation decisions. He combines symbolic planning and machine learning to build explainable robot systems that adapt to human preferences and different contexts. Before starting his PhD, he studied Electrical Engineering at the University of Sarajevo in Sarajevo, Bosnia and Herzegovina, and Computer Science at Mälardalen University in Västerås, Sweden. Outside academia, Amar enjoys travelling, photography, and exploring connections between technology and society.





Lucy Smith is Senior Managing Editor for AIhub.



