AI Fringe – watch on demand


by Lucy Smith
06 November 2023



The AI Fringe comprised a series of events hosted across London and the UK to complement the UK Government’s AI Safety Summit. Run completely independently of the Safety Summit, the AI Fringe aimed to bring together representatives from industry, civil society and academia to discuss how to develop safe and beneficial AI. It set out to serve as a platform for all communities, including those historically underrepresented, to engage in the discussion.

The AI Fringe took place across five days, from 30 October to 3 November 2023, with the main focus being the talks and panels hosted at The British Library. Each day centred on a different theme:

  • Day 1 | Expanding the conversation: AI for everyone
  • Day 2 | Expanding the conversation: Defining AI safety in practice
  • Day 3 | Digging deeper: AI, biology, people, culture and climate
  • Day 4 | Digging deeper: Work, safety, law and democracy
  • Day 5 | Looking ahead: What next for AI?

You can watch the recorded keynotes, talks, fireside chats, and panel discussions from the five days here.

To give a flavour of the event, the panel discussion below considered “AI Safety”, exploring what the term means, what we can learn from existing safety-based governance, and how such systems secure trust through holistic management of a broad range of risks, not just the most extreme.

Find out more



Lucy Smith is Senior Managing Editor for AIhub.
            AIhub is supported by:



Subscribe to AIhub newsletter on substack




©2026 - Association for the Understanding of Artificial Intelligence