AIhub.org
 

Causal models for decision systems: an interview with Matteo Ceriscioli


by
21 April 2026




How do you go about integrating causal knowledge into decision systems or agents? We sat down with Matteo Ceriscioli to find out about his research in this space. This interview is the latest in our series featuring the AAAI/SIGAI Doctoral Consortium participants.

Could you start by telling us a bit about your PhD – where are you studying, and what’s the broad topic of your research?

I’m a second-year PhD student at Oregon State University. I’m doing a PhD in AI, specializing in causality. Specifically, I’m working on causal discovery and causal models for decision systems. The idea is to integrate causal knowledge into agents or decision systems to make them more reliable.

What was the first project that you worked on during the PhD?

We started out by studying what conditions are necessary for a system to be reliable. Reliability can mean many different things, but we focused on robustness to distribution shifts, which is widely recognized as a major challenge in AI. In many cases, researchers and practitioners assume that the environment is static, but in practice it often changes over time. Or maybe the description of the environment you initially propose might turn out to be inappropriate after deployment. These distribution shifts can often be described as interventions in causal models. In causality, an intervention is an active modification of some components of a system, and it can affect the distribution or other properties of the system. It turns out that if you are able to describe distribution shifts this way, and in many cases you can, then the ability of an agent to adapt to these shifts is equivalent to possessing causal knowledge about the system. This also means that the two can be measured in terms of each other.
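To make the shift-as-intervention idea concrete, here is a toy sketch (my own illustration, not from the interview): in a two-variable model X → Y, a distribution shift is modeled as an intervention that replaces the mechanism generating X while leaving the causal mechanism for Y untouched. An agent that knows the mechanism for Y therefore stays accurate after the shift, even though the marginal distributions change.

```python
# Toy structural causal model: X -> Y with Y = 2*X + noise.
# A distribution shift in X is modeled as an intervention do(X := new mechanism);
# the causal mechanism for Y is unchanged, so knowledge of it transfers.
import random

def sample(x_mechanism, n=10_000, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = x_mechanism(rng)
        y = 2 * x + rng.gauss(0, 0.1)  # invariant causal mechanism
        data.append((x, y))
    return data

before = sample(lambda rng: rng.gauss(0, 1))  # training distribution
after = sample(lambda rng: rng.gauss(3, 1))   # shift = intervention on X

# The marginal of Y shifts, but the mechanism Y|X does not:
slope_before = sum(y * x for x, y in before) / sum(x * x for x, _ in before)
slope_after = sum(y * x for x, y in after) / sum(x * x for x, _ in after)
```

Both estimated slopes land near 2, while the mean of Y moves from roughly 0 to roughly 6, which is the sense in which the causal mechanism is the stable, transferable piece of knowledge.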

Another interesting consequence is that if you have an agent that can adapt its behavior very well, then by this equivalence it must possess some causal knowledge about its environment. This suggests that it might be possible to elicit that knowledge, and that was the main idea I presented at the doctoral consortium. A large part of the causality field focuses on learning causal models, and that can often be quite difficult. Traditionally, they are learned from observational or interventional data collected from the system being studied. What we would like to do is expand this picture and make it possible to use additional sources of information. Our result suggests that adaptable agents themselves could be one of those sources, essentially opening the door to doing causal discovery by eliciting the knowledge that these agents have about their environment.

What else have you been working on?

After the initial work that I described, we worked on planning under distribution shifts. When you want to represent and reason about these shifts, it is useful to think in terms of a planning model (like a partially observable Markov decision process, or POMDP). As in the previous work I mentioned, distribution shifts in the environment can be represented as interventions on the model's variables, and the agent maintains a belief not just about the state, but also about which interventions may have occurred. This allows it to update its understanding as it observes the environment and reasons about how things might have changed.

Thinking of a practical example, imagine a robot with a planning model that takes into account that the environment where it’s deployed may differ from the one used to build the model because of an unknown distribution shift. This introduces uncertainty in the state transitions and observations, which the robot needs to handle. In this causal planning framework, once deployed, the robot automatically starts figuring out what has changed compared to the base model.
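A minimal sketch of this belief-update idea, using hypothetical hypotheses and numbers of my own (not the authors' algorithm): the agent keeps a posterior over candidate interventions, each describing one way the dynamics might have changed, and applies Bayes' rule after every observation.

```python
# Each hypothesis is reduced to a single number: the probability
# that an action succeeds under that version of the environment.
hypotheses = {
    "no_shift":       0.9,  # base model: action succeeds with prob 0.9
    "motor_degraded": 0.5,  # intervention: success prob dropped to 0.5
    "sensor_flipped": 0.1,  # intervention: actions mostly fail
}
belief = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior

def update(belief, success: bool):
    """Bayes' rule: P(h | obs) is proportional to P(obs | h) * P(h)."""
    post = {h: (p if success else 1 - p) * belief[h]
            for h, p in hypotheses.items()}
    z = sum(post.values())
    return {h: v / z for h, v in post.items()}

# Observing repeated failures shifts mass away from the base model:
for obs in [False, False, True, False, False]:
    belief = update(belief, obs)
best = max(belief, key=belief.get)
```

After four failures and one success, almost all posterior mass has left the "no_shift" hypothesis, which is the sense in which the deployed robot "figures out what has changed" compared to the base model.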

There is a strong link between an agent’s ability to adapt its causal understanding of the environment and its ability to reason with those causal relationships. That’s essentially the main point of all of our work so far. And, of course, it’s a two-way process, right? You want to be able to set up a model that lets the agent reason about shifts, but you also want a capable agent that can share the knowledge it learns with you as a human user.

Have you been looking at multi-agent systems?

Yes, another interesting application we’re looking at is transfer learning between agents. Let’s say you spent a lot of time, energy, and computational resources training a specific agent. And you did a good job, the agent became very good at what it does, it’s able to adapt, and therefore learned causal knowledge about the environment. Then you have a new agent that shares, or partially shares, the same environment, but maybe has a different task. Or maybe it controls a different physical system, for example, one agent controls a rover and the other a drone. Since they share the same environment, some of the causal relationships are shared. You might want to leverage the adaptability of the first agent to simplify the training of the second. That is a form of transfer learning. You want to do this quickly and efficiently, without paying the full cost of training from scratch. Essentially, you want to extract a causal representation of the environment and pass it to the second agent as a prior or an inductive bias to guide its learning process.

What research project are you focused on at the moment and what’s going to be your goal for the rest of the year?

So, while we now know that it is theoretically possible to learn causal models from adaptable agents, we don’t yet have a scalable causal discovery algorithm that is actually usable in practice. Right now, the algorithm we have mainly serves a theoretical purpose, as it was essentially part of a constructive proof. But we believe it should be possible to design a more scalable version that could actually be applied in real settings. One challenge was that we lacked a formal description of the problem that would make this practical. Our current view is that the agent needs to maintain some kind of belief about what has changed in the environment. Our recent planning paper explores exactly this idea. With that formalization in place, we think we now have a path toward a scalable approach. Working that out will be one of our main focuses for the rest of the year.

We are also working on a slightly different topic, a more typical causal discovery setting using observational data rather than agents. In particular, we consider the case where the data contain missing values. Missingness is a very common problem: when people fill out forms, for example, they often omit certain answers. Sometimes the reason for leaving a field blank can be inferred from the available answers in the form itself. If you try to learn a causal model from such data, you often cannot ignore the missingness, because it can distort the results. This can lead to biased parameter estimates and to recovering causal relationships that do not actually exist. Often, researchers can’t avoid working with imperfect data. They still want to learn causal models, so they need methods that correct for it. We are working on a method to adapt standard causal discovery algorithms to this scenario.
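As a toy illustration of why the missingness can't be ignored (my own example, assuming a simple mechanism where larger values are more likely to be left blank): simply dropping incomplete rows biases even a plain mean, and the same effect distorts the dependence structure that causal discovery relies on.

```python
# Income grows with age, but higher incomes are more likely to be
# unreported (missing-not-at-random). Complete-case analysis then
# systematically underestimates income.
import random

rng = random.Random(42)
rows = []
for _ in range(50_000):
    age = rng.uniform(20, 60)
    income = 1000 * age + rng.gauss(0, 5000)
    # Probability of omitting the answer increases with the income itself:
    observed = income if rng.random() > min(1.0, income / 80_000) else None
    rows.append((age, observed))

true_mean = 1000 * 40  # E[income] = 40_000 by construction
complete = [inc for _, inc in rows if inc is not None]
naive_mean = sum(complete) / len(complete)  # biased downward
```

The complete-case estimate comes out several thousand below the true mean, because the rows that survive deletion are not a random sample: the missingness mechanism has to be modeled, not discarded.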

I’m interested in what inspired you to move into the field and study AI, and causality specifically.

I’ve always found AI really exciting. I love the idea of trying to understand intelligence and then creating a machine that can mimic it. But in terms of studying causality specifically, I was lucky, because it’s not a topic you can learn about at every university. There aren’t many courses on it. Sometimes you might find a professor interested in the subject who includes it in a machine learning course, and that’s what happened to me during my master’s in Germany. That’s where my interest in causality began. Later, I did an internship at Vodafone working on causality projects, which helped me learn even more. That’s when I realized pursuing a PhD on the topic would be a good idea for me. And that’s why I came to Oregon State, because we have a great causality professor here.

What project were you working on at Vodafone?

I was working on causal models for customer churn. The goal was to design a model that could help us understand why customers were leaving the company and switching to another operator. Predicting when churn might happen is relatively easy, but understanding the reason why is much harder. Also, as I mentioned before, causal models allow you to reason about interventions. In this case, the idea was to use the same model to prevent churn by taking the right actions, for example, offering special promotions or reaching out proactively to understand if customers were experiencing problems. A causal model helps you reason about which interventions might actually make a difference and reduce the likelihood of customers leaving.

It must have been interesting to do something in industry before starting a PhD on the topic. I imagine it gave you a good grounding in how the models might be used.

Yeah, it was very helpful. When doing research, I often find myself thinking about what the practitioner would want, what they are looking for, and what their problems are. Even though my stint at Vodafone was short (I stayed there nine months), it was a very helpful introduction, giving me an idea of what professionals in these companies care about and what they look for.

How did you find the doctoral consortium experience?

The doctoral consortium was great, the organizers did a wonderful job. They invited some excellent speakers, and all the talks and the panel were really targeted at us early-career researchers. I think it’s a good idea to organize this kind of event inside the conference. You can meet other PhD students like you and check out what they’re doing. Of course, there were all different kinds of topics, but we all shared some of the same problems in some way. We were all trying to figure out the best direction for our theses, and talking to others who were going through the same process was really helpful.

What do you like doing outside of the PhD?

I really like to play the piano. I never went to music school or anything similar; it has always been just for fun. It’s relaxing and I like playing music. I was in Japan for one year and joined the school orchestra, where I played the baritone saxophone.

About Matteo

Matteo Ceriscioli is a second-year PhD student in Artificial Intelligence at Oregon State University, advised by Prof. Karthika Mohan. His research focuses on causal reasoning for decision-making to improve the robustness, reliability, and safety of AI systems. Previously, he conducted causal inference and causal discovery research at the RIKEN Center for Advanced Intelligence Project in Japan and at Vodafone Germany. Matteo has served on the Program Committee of the Adaptive and Learning Agents Workshop at AAMAS-26 and the Causal Neurosymbolic AI Workshop at ESWC-26. He has also reviewed for Behaviormetrika, the journal of the Behaviormetric Society of Japan, and volunteered at NeurIPS and AAAI. He is an active member of AAAI and ACM SIGAI.





Lucy Smith is Senior Managing Editor for AIhub.

            AIhub is supported by:



Subscribe to AIhub newsletter on substack




 















©2026.02 - Association for the Understanding of Artificial Intelligence