AIhub.org

AI UK – an event from The Turing Institute


24 March 2021




The Turing Institute is the UK’s national institute for data science and artificial intelligence. On 23-24 March they held an event to showcase the very best AI academic content in the UK. Here, we give a flavour of the proceedings and highlight some of the interesting sessions and panel debates that took place on the first day.

As the national institute for data science and artificial intelligence, The Turing Institute brings together people from a range of disciplines, and works with universities, centres of research excellence, industry and the public sector. The Institute has three main goals: 1) to advance world-class research and apply it to real-world problems, 2) to train the leaders of the future, 3) to lead the public conversation.

During the introduction to the event, and in the following sessions, there was a strong focus on the need to create ethical and trustworthy AI, and the need to address the skills gap as AI is increasingly deployed. Both of these must be realised to win the trust and engagement of the general public.

Navigating the AI Council’s AI roadmap – a panel debate

The UK AI Council is an independent expert committee that provides advice to the government on the AI ecosystem. Earlier this year, it published the UK AI roadmap, an independent report providing recommendations to guide the government’s strategic direction on AI. You can read the executive summary here, and the full roadmap here.

In this panel session, the participants discussed the roadmap and, in particular, the four pillars that underpin the document, namely:

  1. Research, development and innovation
  2. Skills and diversity
  3. Data, infrastructure and public trust
  4. National, cross-sector adoption

Although ethical AI is not mentioned explicitly in these pillars, the panellists were keen to point out that, in their view, no AI should be created unless it is ethical and safe. Therefore, that should be the first consideration for anyone designing, researching, building, or deploying an AI system.

There was a lot of discussion around the current skills gap in the UK. This exists across academia, industry and the general public. As well as skilling-up people who work with AI, there is also a pressing need to educate the general public who are affected by, and use, AI on a daily basis. One suggestion was to have conversations in individual workplaces and make that discussion specific to each particular environment. The panel highlighted the need to make this education active – rather than just telling people about AI, let them talk about it, try it out, and then listen to their suggestions.

Another common theme throughout the sessions was that we need to mix across disciplines. AI is not just about the computer scientists designing the latest algorithm; it involves everybody, and the whole of society should be represented.

Safe and ethical priorities for AI

The topic of the AI Council roadmap continued into the next session, beginning with a fireside chat between Tabitha Goldstaub and Adrian Weller. This quote from the roadmap was the foundation for some discussion points:

The UK will only feel the full benefits of AI if all parts of society have full confidence in the science and the technologies, and in the governance and regulation that enable them. That confidence will depend on the existence of systems that ensure full accountability, clear ethics and transparency. Developing the best science and the most robust applications requires commitment to an ambitious programme of investment in talent; one that promotes cutting edge skills and does so in ways that makes AI accessible in ways that are more diverse and inclusive.

Tabitha spoke about the need for all voices to be heard when it comes to AI deployment. Because the term “ethical AI” is so subjective, we need to consider diverse opinions. When writing the roadmap, the council considered four key areas in defining “safe and ethical AI”:

  1. Technically safe – AI systems should be designed to allow the user to make ethical decisions. They should be robust, private and transparent.
  2. Regulation – big corporations don’t focus on ethical AI unless they have to, so regulation is key.
  3. Communication – we need to explain what ethical AI is, and how to develop and deploy it.
  4. AI that is in the public good – should we only create AI that is in the public interest?

Regulation, skills, fairness, diversity, and stand-up

There were a number of interesting sessions throughout the day covering topics such as regulation and how to mitigate the skills gap. A panel discussed how to do better in data science, in terms of algorithmic fairness and diversity. They identified challenges and discussed potential actions.

Day one of the event closed with something a bit different; Matt Parker, a stand-up comedian and mathematician, delivered an entertaining set involving a demo of programmed Christmas lights, and talk of spreadsheet errors. There was even a little audience participation as attendees tried to debug some of his code.




Lucy Smith is Senior Managing Editor for AIhub.






