AI Fringe 2024 – event recordings available


20 June 2024




The AI Fringe returned for a second year on 5 June. The event was designed to complement the AI Seoul Summit, which was co-hosted by the governments of the UK and South Korea.

The goals of the AI Fringe are: 1) to bring together the views of industry, civil society, and academia on safe and beneficial AI; 2) to provide a platform for all communities to engage in the discussion; and 3) to enhance understanding of AI and its impacts so that organisations can harness its benefits.

This year, the Fringe comprised a half-day event centred on two panel discussions. The first addressed AI safety, with the panellists reflecting on progress over the last 12 months. The second considered the challenges of building responsible and trustworthy AI. You can watch both panels below:

Also available to watch is the closing address, which looks forward to the next AI Safety Summit, to be held in France.

You can find out more about the AI Fringe here.

Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:



