AIhub.org
 

Interview with Luc De Raedt: talking probabilistic logic, neurosymbolic AI, and explainability


By Liliane-Caroline Demers
23 September 2025




Should AI continue to be driven by a single paradigm, or does real progress lie in combining the strengths of many? Professor Luc De Raedt of KU Leuven has spent much of his career addressing this question. Through pioneering work that bridges logic, probability, and machine learning, he has helped shape the field of neurosymbolic AI. In our conversation at IJCAI 2025 in Montreal, he spoke about what continues to fascinate him in this line of research, how he responds to criticisms of neurosymbolic AI, and why reconciling multiple paradigms is such an exciting challenge.

Liliane-Caroline Demers: Hello Professor De Raedt, thank you very much for joining me. Could you start by telling me when was your first IJCAI and what is a core memory related to it?

Luc De Raedt: My first IJCAI was in 1989 in Detroit. It was my first time in the US as well. That was quite an experience. It was also a big conference. The sizes have gone up and down, and this was one of the bigger ones; afterwards it went down a little, and now it’s up again. Reflecting the winters and summers of AI, I guess.

Liliane-Caroline: Since you were program chair for IJCAI 2022, do you have any advice you could give to the program chair of IJCAI 2026?

Luc: IJCAI 2022 was one of the first in-person conferences again. People really enjoyed it, and it was a great location. But, as here, not everybody could show up, because travel was still difficult for some.

One of the real challenges as program chair is keeping the acceptance rates balanced between the different areas. I also saw this when I was chairing ICML in 2005: the year before, the acceptance rate in some subfields went up to 65-70%, while in others it was much lower. That was because these were newer areas in machine learning, where reviewers accepted more, while in other, more established areas, the standards were stricter.

You still see this today. For some communities, IJCAI is the main conference, where they send their very best work. For others, larger specialized conferences dominate. So IJCAI gets a different mix of submissions, and striking the right balance is really challenging.

We did a lot of analysis. We looked at the average review scores, which differ between fields. Some communities score lower than others. In computer vision, for example, people seem more relaxed about publishing, while in machine learning, PR or CP, the reviews are stricter. You really have to monitor this carefully, and I tried to approach it scientifically.

Liliane-Caroline: What first drew you to the combination of logic and learning together, and why does it continue to fascinate you?

Luc: Okay, so what drew me to this? I was in a logic group, but I wanted to do machine learning. And so the combination was quite natural and very appealing because I had also done my Master’s on learning.

When I had just finished my PhD, the topic really started to take off, and this idea of combining learning and reasoning felt very natural. I think it’s still one of the main open questions today.

Later on, we added probability. At the time, a lot of people were trying to synthesize programs, and I became a bit pessimistic; I thought it would never work the way we were doing it. And now you see: LLMs can actually synthesize programs very well, though not perfectly yet. We could never have dreamed of that. I never believed it would happen in my lifetime, so it’s quite a surprise.

So, yeah, I got in touch with probability, and that was interesting. We developed probabilistic logics, including a probabilistic Prolog called ProbLog. Then we had a visitor working on neural networks and their combination with logic, and I thought, yeah, maybe we can also do something. We started discussing it, and that was really cool.
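For readers curious what ProbLog looks like, here is a minimal sketch (ours, not from the interview): Prolog clauses annotated with probabilities, evaluated through ProbLog's Python API. The alarm model and its probability values are purely illustrative.

# A minimal ProbLog sketch: logic clauses annotated with probabilities.
# Requires the problog package (pip install problog); the model and its
# probabilities are illustrative, not from the interview.
from problog.program import PrologString
from problog import get_evaluatable

model = PrologString("""
0.1::burglary.            % probabilistic fact: burglary with p=0.1
0.2::earthquake.          % probabilistic fact: earthquake with p=0.2
0.9::alarm :- burglary.   % if burglary, the alarm rings with p=0.9
0.8::alarm :- earthquake. % if earthquake, the alarm rings with p=0.8
query(alarm).             % ask for the probability of alarm
""")

# Compile the program and compute the query probability exactly.
result = get_evaluatable().create_from(model).evaluate()
print(result)  # maps the query term alarm to its probability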

It’s always about adding something new—new for me, in a sense. And working on new topics—that’s exciting, to broaden my perspective.

There are different schools in machine learning—Pedro Domingos describes five tribes, and this quest for the “master algorithm”. And I think ultimately, I’m also fascinated by the idea of combining the best of these worlds. Logic is too limited, probability too, neural networks as well. So, by combining them, hopefully we get their strengths. Of course, you also inherit their weaknesses, and that makes it challenging. But it’s exciting.

And what’s also extremely nice to see is that now the field is really taking off. You know, being on the Gartner hype cycle, with a lot of interest, and people everywhere seeing neurosymbolic AI as the next wave. That’s cool.

Luc De Raedt and Liliane-Caroline Demers at IJCAI 2025.

Liliane-Caroline: Yes, you did mention in your talk this morning that neurosymbolic AI is likely to reach the “top of its hype curve” in the next two to five years. I was wondering: what do you think will drive that hype, and what risks do you see if expectations rise too fast?

Luc: People use the term with many different meanings, and neurosymbolic AI is quite broad. You can interpret it broadly or more narrowly. I usually go for the narrow interpretation, which I find is still the most challenging.

And, yeah, the risk is, like with all hypes, that expectations become too high and that this is viewed as a universal solution. Then people will, of course, get disappointed, because the models have to be fit for the task they address.

Liliane-Caroline: Earlier, you mentioned that combining these paradigms means inheriting both their strengths and weaknesses. How do you respond to the criticism that neurosymbolic AI risks losing both the efficiency of neural networks and the rigor of symbolic reasoning?

Luc: I think that’s indeed a risk. I think the solution to that is to build specific solutions for specific cases. In general, if you look at the full combination of probability, logic and neural networks, neurosymbolic AI has high computational costs. Probabilistic inference itself is hard, so you cannot really avoid that. But if you build more specific systems—like the one that my student presented this morning about a neurosymbolic automaton—then you can build something that’s really efficient.

Another important factor is engineering. Neural networks only became popular after people devoted a lot of time and effort to getting them to run on GPUs, for instance, and to building tools like TensorFlow. That part is also important.

And for these complex combinations, it usually takes a while before they become feasible, and before people in academia or elsewhere have enough people and resources to really achieve it.

Liliane-Caroline: Another challenge that often arises is explainability. Could you expand on the stance that neurosymbolic AI might be uniquely positioned to deliver explainability in a way deep learning can’t?

Luc: So, I guess symbolic methods are by nature very explainable, because if you prove something, then the proof is, in a sense, the explanation. And if you combine these logics with neural networks, then, yeah, you can also use the proof, or the logical derivation, as part of the explanation.

Of course, that doesn’t allow me to look deep inside the neural network, but, still, you have some kind of higher-level concepts that you can use to find explanations. For example, graphical models offer certain types of explanations that can be exploited within neurosymbolic approaches—like finding the most likely parse tree, the most likely proof, MAP inference, all these kinds of things.
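As a concrete stand-in for this kind of explanation (again our illustration, not from the talk): extending the ProbLog sketch above, one can condition on observed evidence and compare the posterior probabilities of candidate causes, a simple relative of the most-likely-proof and MAP queries mentioned here.

# Extending the earlier ProbLog sketch: condition on evidence and compare
# posterior probabilities of candidate causes. Illustrative only.
from problog.program import PrologString
from problog import get_evaluatable

model = PrologString("""
0.1::burglary.
0.2::earthquake.
0.9::alarm :- burglary.
0.8::alarm :- earthquake.
evidence(alarm, true).   % we observed the alarm ringing
query(burglary).         % posterior probability of each candidate cause
query(earthquake).
""")

result = get_evaluatable().create_from(model).evaluate()
print(result)  # suggests which cause better explains the observed alarm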

One thing I like about neurosymbolic AI is that you can impose constraints on your learning process; then you know the constraints are going to be satisfied, and that already gives you some trust.

It’s not a full explanation, but at least you know that your model will obey certain regularities. Without this kind of constraint-based learning, you get very few such guarantees.
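To make the idea of constraint-based learning slightly more concrete, here is a rough sketch in the spirit of semantic-loss approaches (our illustration; the network, the constraint, and the weighting are all assumptions, not a system from the interview): a differentiable penalty is added to the usual training loss whenever the model’s predicted probabilities violate a logical constraint, here “exactly one label is true”.

import torch
import torch.nn.functional as F

def exactly_one_loss(probs: torch.Tensor) -> torch.Tensor:
    """Semantic-loss-style penalty for the constraint 'exactly one label is true'.

    probs: (batch, n) independent Bernoulli probabilities from a sigmoid head.
    P(exactly one) = sum_i p_i * prod_{j != i} (1 - p_j); we minimize -log of it.
    """
    eps = 1e-8
    log_not = torch.log(1.0 - probs + eps)            # log(1 - p_j) per label
    total_log_not = log_not.sum(dim=1, keepdim=True)  # sum_j log(1 - p_j)
    # log[p_i * prod_{j != i}(1 - p_j)] = log p_i + sum_j log(1-p_j) - log(1-p_i)
    per_i = torch.log(probs + eps) + total_log_not - log_not
    log_sat = torch.logsumexp(per_i, dim=1)           # log P(exactly one true)
    return -log_sat.mean()

# Usage sketch: combine the penalty with an ordinary supervised loss.
logits = torch.randn(4, 5, requires_grad=True)        # stand-in for a network output
targets = torch.randint(0, 2, (4, 5)).float()
probs = torch.sigmoid(logits)
loss = F.binary_cross_entropy(probs, targets) + 0.5 * exactly_one_loss(probs)
loss.backward()

The appeal, as Luc notes above, is that the constraint is pushed into training itself, so the learned model is biased toward predictions that satisfy it, rather than having violations patched up afterwards.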

Liliane-Caroline: And, to conclude, what would be one challenging question you would like to invite the AI community to reflect on?

Luc: For me, the most attractive question is still how to combine different learning paradigms. Instead of relying on a single one, we should really explore their combinations.

The way that it should work is that, if you combine different things, you should still be able to recover the originals as special cases. That’s always what I’ve been arguing for. So, if you combine neural networks and logic, then you should have neural networks as a special case, but also logic as a special case. What we often see though is that, in these combinations, typically one of the two paradigms gets lost—everything becomes neural, or everything becomes logical.

Building these deep interfaces between these different paradigms is what I aim for and what I think is interesting. Take LLMs. If you could use them to build reliable proof systems, to do real logical reasoning with guarantees, that would be cool, right? Then, you would have the power of the LLM together with the power of logic. But that’s not happening yet.
*
Professor De Raedt added that while AGI dominates headlines and is often hyped as imminent, he doesn’t believe it will be solved in the next few years. But not all his students agree, which sparks debates in the lab. “Oh yes,” he said, “it’s fun.”

About Luc De Raedt

Luc De Raedt is Director of Leuven.AI, the KU Leuven Institute for AI, full professor at KU Leuven, and guest professor at Örebro University (Sweden) in the Wallenberg AI, Autonomous Systems and Software Program. He works on the integration of machine learning and machine reasoning techniques, also known as neurosymbolic AI. He has chaired the main European and international machine learning and artificial intelligence conferences (IJCAI, ECAI, ICML and ECMLPKDD) and is a fellow of EurAI, AAAI and ELLIS. He received ERC Advanced Grants in 2015 and 2023.








