AIhub.org

AI Fringe – watch on demand

06 November 2023




The AI Fringe comprised a series of events hosted across London and the UK to complement the UK Government’s AI Safety Summit. Run completely independently of the Safety Summit, the AI Fringe aimed to bring together representatives from industry, civil society and academia to discuss how to develop safe and beneficial AI. It set out to serve as a platform for all communities, including those historically underrepresented, to engage in the discussion.

The AI Fringe took place across five days, from 30 October – 3 November 2023, with the main focus on the talks and panels hosted at The British Library. Each day focused on a different theme:

  • Day 1 | Expanding the conversation: AI for everyone
  • Day 2 | Expanding the conversation: Defining AI safety in practice
  • Day 3 | Digging deeper: AI, biology, people, culture and climate
  • Day 4 | Digging deeper: Work, safety, law and democracy
  • Day 5 | Looking ahead: What next for AI?

The recorded keynotes, talks, fireside chats, and panel discussions from all five days are available to watch on demand.

To give a flavour of the event, one panel discussion considered “AI safety”, exploring what the term means, what we can learn from existing safety-based governance regimes, and how such systems secure trust through holistic management of a broad range of risks, not just the most extreme.

Lucy Smith is Senior Managing Editor for AIhub.




AIhub is supported by:








©2024 - Association for the Understanding of Artificial Intelligence


 











