AIhub.org

All questions answered: ChatGPT and large language models


by Lucy Smith
10 March 2023




On 8 March, the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE) All Questions Answered (AQuA) series continued with a one-hour session focussing on ChatGPT and large language models. The session was conducted in conjunction with the ICT-48 Networks of Excellence TAILOR and AI4Media, and the ICT-48 Coordination and Support Action VISION.

The panel first gave an overview of large language models (LLMs) and how they work. They then went on to consider topics including: regulation, safeguards, whether European companies are investing enough in large language models, the limitations of LLMs, and how concerned we should be about the misinformation they generate.

The session was recorded and you can watch it in full below:

The panel for this event:

  • Morten Irgens (CLAIRE, ADRA, Kristiania)
  • Fredrik Heintz (TAILOR, Linköping University)
  • Tomas Mikolov (RICAIP, CIIRC CTU)
  • Ioannis Pitas (AI4Media, AIDA)
  • Ariel Ekgren (AI Sweden)

About the CLAIRE AQuA sessions:

CLAIRE All Questions Answered Events (AQuAs) are relaxed, one-hour online events that bring together a small group of panellists to discuss current hot topics in AI and beyond, and to answer questions from the community. These events are usually held via Zoom Webinar for CLAIRE members and members of co-hosting organisations, and live streamed to the CLAIRE YouTube channel, allowing the community at large to get involved and be a part of the discussion.



Lucy Smith is Senior Managing Editor for AIhub.
