AIhub.org
 

All questions answered: ChatGPT and large language models


10 March 2023



[Image: a pile of question marks]

On 8 March, the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE) All Questions Answered (AQuA) series continued with a one-hour session focussing on ChatGPT and large language models. The session was held in conjunction with the ICT-48 Networks of Excellence TAILOR and AI4Media, and the ICT-48 Coordination and Support Action VISION.

The panel first gave an overview of large language models (LLMs) and how they work. They then went on to consider topics including: regulation, safeguards, whether European companies are investing enough in large language models, the limitations of LLMs, and how concerned we should be about the misinformation they generate.

The session was recorded and you can watch it in full below:

The panel for this event:

  • Morten Irgens (CLAIRE, ADRA, Kristiania)
  • Fredrik Heintz (TAILOR, Linköping University)
  • Tomas Mikolov (RICAIP, CIIRC CTU)
  • Ioannis Pitas (AI4Media, AIDA)
  • Ariel Ekgren (AI Sweden)

About the CLAIRE AQuA sessions:

CLAIRE All Questions Answered Events (AQuAs) are relaxed, one-hour online events that bring together a small group of panellists to discuss current hot topics in AI and beyond, and to answer questions from the community. These events are usually held via Zoom Webinar for CLAIRE members and members of co-hosting organisations, and live streamed to the CLAIRE YouTube channel, allowing the community at large to get involved and be a part of the discussion.





Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:






 







 












©2025 - Association for the Understanding of Artificial Intelligence