AIhub.org
 

Interview with Aneesh Komanduri: Causality and generative modeling


by Lucy Smith
31 July 2025




In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. In this latest interview, we hear from Aneesh Komanduri about his research, some of the projects he’s been involved in, future plans, and his experience at the AAAI/SIGAI Doctoral Consortium.

Could you tell us a bit about your PhD – where are you studying and what is the topic of your research?

Hi! I’m Aneesh, a final-year PhD student at the University of Arkansas, where I’m advised by Dr Xintao Wu. My research lies at the intersection of causal inference, representation learning, and generative modeling, with a broader focus on trustworthiness and explainability in artificial intelligence. My dissertation specifically explores two core areas: causal representation learning and counterfactual generative modeling.

Causal Representation Learning (CRL) aims to discover high-level, interpretable, causally related factors from high-dimensional data. For instance, consider a robot arm interacting with objects. Instead of treating variables like the position or shape of objects as statistically independent, CRL seeks to capture the causal mechanisms relating them: the robot arm's position, for example, causally determines how objects move.
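To make that concrete, here is a minimal toy version of the robot-arm example (a purely illustrative Python sketch; the variable names and the 0.8 coefficient are my own inventions, not from Aneesh's work). The point is that intervening on the arm shifts the object's distribution, which a model of mere correlations has no way to express:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scene(n, intervene_arm=None):
    """Toy two-variable SCM: arm position causally determines object position."""
    u_arm = rng.normal(0.0, 1.0, n)   # exogenous noise
    u_obj = rng.normal(0.0, 0.1, n)
    # Structural assignments; do(arm = v) replaces the arm's mechanism.
    arm = u_arm if intervene_arm is None else np.full(n, float(intervene_arm))
    obj = 0.8 * arm + u_obj           # object moves with the arm (made-up coefficient)
    return arm, obj

arm_obs, obj_obs = sample_scene(1000)                    # observational data
arm_do, obj_do = sample_scene(1000, intervene_arm=2.0)   # interventional data
print(obj_obs.mean(), obj_do.mean())                     # ~0.0 vs. ~1.6
```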

Counterfactual generative modeling builds on this by enabling the generation of hypothetical scenarios through learned causal mechanisms. In a medical setting, for example, one might want to simulate how a brain MRI would look if we could change a patient’s age or brain volume independently. Such counterfactual analysis allows us to better understand and probe causality in complex systems like medical imaging.
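At its core, this kind of counterfactual query follows Pearl's abduction-action-prediction recipe. Here is a minimal sketch on a toy linear model (the chain age → brain volume → MRI intensity and all coefficients below are hypothetical stand-ins, not a real imaging model):

```python
# Toy linear SCM: age -> brain_volume -> mri_intensity (all scalars).
A_COEF, B_COEF = -0.5, 0.9   # made-up mechanism coefficients

def counterfactual_mri(age_obs, vol_obs, mri_obs, new_age):
    """Pearl's three steps on the toy model above."""
    # 1. Abduction: recover the exogenous noise consistent with the observation.
    u_vol = vol_obs - A_COEF * age_obs
    u_mri = mri_obs - B_COEF * vol_obs
    # 2. Action: intervene on age, holding the noise fixed.
    age_cf = new_age
    # 3. Prediction: propagate through the unchanged mechanisms.
    vol_cf = A_COEF * age_cf + u_vol
    return B_COEF * vol_cf + u_mri

# "What would this patient's scan look like at age 40 instead of 60?"
print(counterfactual_mri(age_obs=60.0, vol_obs=-29.0, mri_obs=-26.0, new_age=40.0))
```

In a real imaging pipeline the mechanisms are high-dimensional neural networks rather than two scalar equations, but the three steps are the same.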

Could you give us an overview of the research you’ve carried out so far during your PhD?

One of my projects focused on causal disentanglement, which involves recovering unique causal factors from high-dimensional data (e.g., images) with minimal supervision. I developed a unified framework, backed by theoretical guarantees, that leverages weak supervision and flexible priors to recover the causal generative mechanisms of data using variational inference. This work builds on the principle of independent causal mechanisms and contributes toward creating embodied AI systems capable of causal reasoning from visual input.
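In broad strokes, and as a schematic sketch under my own assumptions rather than the paper's actual architecture, the idea of a causal prior can be illustrated by replacing a VAE's independent Gaussian latents with latents related through a learned linear SCM:

```python
import torch
import torch.nn as nn

class CausalLatentPrior(nn.Module):
    """Schematic causal layer: latents obey a linear SCM, z = A^T z + eps."""
    def __init__(self, dim):
        super().__init__()
        self.A = nn.Parameter(torch.zeros(dim, dim))  # learned DAG weights

    def forward(self, eps):
        # Solve (I - A^T) z = eps, i.e. propagate the noise through the SCM.
        I = torch.eye(self.A.shape[0], device=eps.device)
        return torch.linalg.solve(I - self.A.T, eps.unsqueeze(-1)).squeeze(-1)

prior = CausalLatentPrior(dim=4)
z = prior(torch.randn(8, 4))   # (batch, dim) causally related latents
```

A full model would add an encoder, a decoder, the weak-supervision signal, and an acyclicity constraint on A; the point here is only that the prior itself carries causal structure.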

In another project, I tackled the limitations of diffusion models in representing causally meaningful structure. While diffusion models like DALL-E, Stable Diffusion, and Veo 3 have revolutionized image and video generation, they typically lack interpretability. My work proposes a method for conditioning diffusion models on causally interpretable latent representations. This allows for controllable counterfactual generation: producing high-quality images that reflect specific hypothetical changes in underlying causal variables, even when such scenarios were not observed in the training data.
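Conceptually, the pipeline looks something like the sketch below, where every name (encoder, scm.intervene, denoise_step) is a hypothetical placeholder rather than a real API: encode the image into causal factors, apply an intervention so downstream effects propagate, then run the reverse diffusion process conditioned on the modified latents.

```python
import torch

@torch.no_grad()
def counterfactual_sample(encoder, scm, diffusion, x, factor_idx, new_value, steps=50):
    """Hypothetical counterfactual-generation pipeline (all components are stand-ins)."""
    z = encoder(x)                                   # high-level causal factors
    z_cf = scm.intervene(z, factor_idx, new_value)   # do(z_i = v); effects propagate
    x_t = torch.randn_like(x)                        # start reverse diffusion from noise
    for t in reversed(range(steps)):
        x_t = diffusion.denoise_step(x_t, t, cond=z_cf)  # conditioned on causal latents
    return x_t
```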

Is there an aspect of your research that has been particularly interesting or a project that you’re particularly proud of?

Yes, I’m particularly proud of a technical survey I authored on causal representation learning and counterfactual generative modeling. As I delved deeper into these subfields, I noticed a lack of a cohesive resource that synthesized the theoretical foundations and methodological approaches across the literature. I developed a structured survey to serve as a starting point for new researchers in the field.

Writing this survey was an incredibly rewarding experience. It helped me deepen my understanding of the field, connect disparate ideas, and identify open research problems in the area. I hope it lowers the barrier to entry for researchers interested in causal AI and contributes to building trustworthy AI systems grounded in causal reasoning. For those interested, the survey is available here.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

Currently, I’m investigating the causal reasoning capabilities of large vision-language models (LVLMs). Rather than asking what these models can do, I’m particularly interested in what they cannot do: probing their limitations reveals where foundational advances are needed. My recent work benchmarks several open-source LVLMs on carefully designed visual causal reasoning tasks inspired by CRL. I plan to explore the causal reasoning capabilities of large-scale generative models further through a paradigm known as mechanistic interpretability.

Looking ahead, I’m also excited about applying ideas from my research to domains such as healthcare, especially in medical imaging and personalized diagnosis. I believe that causal priors are crucial for building interpretable and robust AI systems in high-stakes domains. More broadly, I’m passionate about using causality to accelerate scientific discovery through reliable, domain-aware AI systems.

Aneesh with his poster at AAAI 2025.

How was the AAAI/SIGAI Doctoral Consortium, and the AAAI conference experience in general?

Attending AAAI for the first time was a fantastic experience. The AAAI/SIGAI Doctoral Consortium was incredibly well-organized, with engaging talks, mentorship sessions, and networking events. As I near the end of my PhD, it was particularly valuable to hear insights from junior faculty and industry researchers who have recently made that transition.

I presented a poster on my research at the consortium and had many insightful conversations. It was also inspiring to connect with fellow PhD students from around the world and learn about the diverse areas they are working on. Walking through the poster sessions and technical talks at AAAI, I found several intersections with my research that I hadn’t previously considered.

What advice would you give to someone thinking of doing a PhD in the field?

Choose your research topic carefully; it matters more than you might think. Popular areas can quickly become saturated, while niche areas may lack a strong community. Finding a balance is key. Most importantly, pursue problems that genuinely interest you and try to approach them from a new angle.

Also, it’s easy to fall into the trap of constantly measuring yourself by output. But a PhD is a unique opportunity for deep learning (no pun intended), developing your research identity, and connecting with others in your field. Don’t forget to enjoy the journey.

Could you tell us an interesting (non-AI related) fact about you?

Although I’m a vegetarian, I’m a huge foodie and I love exploring different cuisines. When I visited Philadelphia for the conference, I managed to try nine different cuisines in just one week!

About Aneesh

Aneesh Komanduri is a final-year PhD student at the University of Arkansas. His research focuses on the intersection of causality and generative modeling, specifically causal representation learning and counterfactual generative modeling. He is also particularly interested in applying his research to accelerate scientific discovery and to critical domains such as healthcare. His research has led to publications at prestigious conferences and journals such as IJCAI, ECAI, AAAI, and TMLR. He has also served as a PC member for several top-tier conferences and journals, including IJCAI, NeurIPS, ICML, ICLR, and TMLR.





Lucy Smith is Senior Managing Editor for AIhub.



