#AAAI2024 workshops round-up 2: AI for credible elections, and are large language models simply causal parrots?


12 March 2024




A packed room for the workshop “Are Large Language Models Simply Causal Parrots?” Photo credit: Emily McMilin.

In this second round-up of the workshops at AAAI 2024, we hear from the organisers of the workshops on:

  • Are Large Language Models Simply Causal Parrots?
  • AI for Credible Elections: A Call To Action with Trusted AI

Are Large Language Models Simply Causal Parrots?

Organisers: Matej Zečević, Amit Sharma, Lianhui Qin, Devendra Singh Dhami, Alex Molak, Kristian Kersting.

The aim of this workshop was to bring together researchers interested in identifying to what extent we could consider the output and internal workings of large language models (LLMs) to be causal.

Workshop organisers Matej Zečević, Alex Molak and Devendra Singh Dhami. Image credit: Alex Molak.

  • Speakers presented various perspectives on large language models (LLMs) in the context of causality and symbolic reasoning. Emre Kıcıman (Microsoft Research) emphasized that LLMs can be useful in the applied causal process, even if they lack fully generalizable causal capabilities.
  • Andrew Lampinen (Google DeepMind) shared insights from his work, suggesting that LLMs can learn generalizable causal strategies under certain circumstances, but that those circumstances are likely not met by existing models.
  • Guy Van den Broeck (UCLA) presented his work on constraining and conditioning LLM generation using hidden Markov models (HMMs).
  • Judea Pearl shared his thoughts on the possibility of LLMs learning a partial implicit world model. He concluded his inspiring talk with a call for a new “meta-science” based on lingual and/or statistical integration of the conventional sciences.
  • During the open-stage workshop summary, participants shared their thoughts and conclusions. The voices were diverse, ranging from a strong conviction that LLMs are indeed “causal parrots” regurgitating statistical associations, to more cautious views that it may be too early for us to answer this question.

Emre Kıcıman giving his invited talk “A New Frontier at the Intersection of Causality and LLMs”. Photo credit: Alex Molak.

By Alex Molak


AI for Credible Elections: A Call To Action with Trusted AI

Organisers: Biplav Srivastava, Anita Nikolich, Andrea Hickerson, Chris Dawes, Tarmo Koppel, Sachindra Joshi, Ponnurangam Kumaraguru.

A panel discussion in action. Photo credit: Stanley Simoes.

This workshop examined the challenges facing credible elections around the world, providing an academic setting for apolitical discussion of significant issues. The three main takeaways from the event were:

  • AI will impact elections in the coming year(s), but not all problems around elections and democracy are due to AI. A multi-pronged solution is needed, combining process, people, and technology.
  • Information disorders are a key concern around elections, but they need not be a deal-breaker. AI can specifically help elections by disseminating official information at scale, personalized to each voter’s cognitive needs, language, and format.
  • More focus is needed on developing data sources, the information system stack, testing, and funding for AI and elections. The discussion continues in the Google group “Credible Elections with AI Lead Technologies”. A longer blog post summarizing the workshop is here.

Photo credit: Biplav Srivastava.

By Biplav Srivastava



AIhub is dedicated to free high-quality information about AI.

AIhub is supported by:



Subscribe to AIhub newsletter on substack

©2026 Association for the Understanding of Artificial Intelligence