Interview with Amar Halilovic: Explainable AI for robotics


by Lucy Smith
10 June 2025




In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In this latest interview, we hear from Amar Halilovic, a PhD student at Ulm University.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I’m currently a PhD student at Ulm University in Germany, where I focus on explainable AI for robotics. My research investigates how robots can generate explanations of their actions in a way that aligns with human preferences and expectations, particularly in navigation tasks.

Could you give us an overview of the research you’ve carried out so far during your PhD?

So far, I’ve developed a framework for environmental explanations of robot actions and decisions, especially when things go wrong. I have explored black-box and generative approaches for generating textual and visual explanations. Furthermore, I have been working on planning different explanation attributes, such as timing, representation, and duration. Lately, I’ve been working on methods for dynamically selecting the best explanation strategy depending on the context and user preferences.
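To give a flavour of what "dynamically selecting an explanation strategy" could look like in practice, here is a minimal, hypothetical Python sketch. It is not Amar's actual framework: the class names, attributes, candidate strategies and weights are all illustrative assumptions. The sketch simply scores a handful of candidate strategies (modality, timing, verbosity) against a toy context and user-preference model and picks the highest-scoring one.

```python
from dataclasses import dataclass

# Hypothetical illustration only: names and weights are assumptions,
# not the framework described in the interview.

@dataclass
class Context:
    urgency: float          # 0 (calm) .. 1 (time-critical)
    failure_occurred: bool  # did the navigation action go wrong?

@dataclass
class UserPreferences:
    prefers_visual: float   # 0 .. 1, preference for visual over textual explanations
    patience: float         # 0 (wants it short) .. 1 (tolerates detail)

# Candidate explanation strategies: modality, timing and verbosity
STRATEGIES = [
    {"modality": "text",   "timing": "after_action", "verbosity": "detailed"},
    {"modality": "text",   "timing": "immediate",    "verbosity": "brief"},
    {"modality": "visual", "timing": "immediate",    "verbosity": "brief"},
    {"modality": "visual", "timing": "after_action", "verbosity": "detailed"},
]

def score(strategy: dict, ctx: Context, prefs: UserPreferences) -> float:
    """Toy scoring: urgent or failure situations favour immediate, brief
    explanations; otherwise follow the user's detail and modality preferences."""
    s = 0.0
    if ctx.urgency > 0.5 or ctx.failure_occurred:
        s += 1.0 if strategy["timing"] == "immediate" else 0.0
        s += 0.5 if strategy["verbosity"] == "brief" else 0.0
    else:
        s += prefs.patience if strategy["verbosity"] == "detailed" else (1 - prefs.patience)
    s += prefs.prefers_visual if strategy["modality"] == "visual" else (1 - prefs.prefers_visual)
    return s

def select_explanation(ctx: Context, prefs: UserPreferences) -> dict:
    """Return the candidate strategy with the highest score for this situation."""
    return max(STRATEGIES, key=lambda strat: score(strat, ctx, prefs))

if __name__ == "__main__":
    ctx = Context(urgency=0.8, failure_occurred=True)
    prefs = UserPreferences(prefers_visual=0.7, patience=0.3)
    print(select_explanation(ctx, prefs))
    # In this urgent failure case the sketch picks an immediate, brief, visual explanation.
```

In a real system this hand-written scoring function would be replaced by learned models of context and user preferences, but the basic idea of ranking candidate explanation strategies per situation is the same.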

Is there an aspect of your research that has been particularly interesting?

Yes, I find it fascinating how people interpret robot behavior differently depending on the urgency or failure context. It’s been especially rewarding to study how explanation expectations shift in different situations and how we can tailor explanation timing and content accordingly.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

Next, I’ll be extending the framework to incorporate real-time adaptation, enabling robots to learn from user feedback and adjust their explanations on the fly. I’m also planning more user studies to validate the effectiveness of these explanations in real-world human-robot interaction settings.

Amar with his poster at the AAAI/SIGAI Doctoral Consortium at AAAI 2025.

What made you want to study AI, and, in particular, explainable robot navigation?

I’ve always been interested in the intersection of humans and machines. During my studies, I realized that making AI systems understandable isn’t just a technical challenge—it’s key to trust and usability. Robot navigation struck me as a particularly compelling area because decisions are spatial and visual, making explanations both challenging and impactful.

What advice would you give to someone thinking of doing a PhD in the field?

Pick a topic that genuinely excites you—you’ll be living with it for several years! Also, build a support network of mentors and peers. It’s easy to get lost in the technical work, but collaboration and feedback are vital.

Could you tell us an interesting (non-AI related) fact about you?

I have lived and studied in four different countries.

About Amar

Amar is a PhD student at the Institute of Artificial Intelligence of Ulm University in Germany. His research focuses on Explainable Artificial Intelligence (XAI) in Human-Robot Interaction (HRI), particularly how robots can generate context-sensitive explanations for navigation decisions. He combines symbolic planning and machine learning to build explainable robot systems that adapt to human preferences and different contexts. Before starting his PhD, he studied Electrical Engineering at the University of Sarajevo in Bosnia and Herzegovina, and Computer Science at Mälardalen University in Västerås, Sweden. Outside academia, Amar enjoys travelling, photography, and exploring connections between technology and society.





Lucy Smith is Senior Managing Editor for AIhub.