AIhub.org
 

AI UK – an event from The Turing Institute


by Lucy Smith
24 March 2021




The Turing Institute is the UK’s national institute for data science and artificial intelligence. On 23-24 March 2021, the Institute held an event to showcase the best of UK academic AI research. Here, we give a flavour of the proceedings and highlight some of the interesting sessions and panel debates that took place on the first day.

As the national institute for data science and artificial intelligence, The Turing Institute brings together people from a range of disciplines, and works with universities, centres of research excellence, industry and the public sector. The Institute has three main goals: 1) to advance world-class research and apply it to real-world problems, 2) to train the leaders of the future, 3) to lead the public conversation.

During the introduction to the event, and in the following sessions, there was a strong focus on the need to create ethical and trustworthy AI, and the need to address the skills gap as AI is increasingly deployed. Both of these must be realised to win the trust and engagement of the general public.

Navigating the AI Council’s AI roadmap – a panel debate

The UK AI Council is an independent expert committee that provides advice to the government on the AI ecosystem. Earlier this year, it published the UK AI Roadmap, an independent report providing recommendations to guide the government’s strategic direction on AI. Both the executive summary and the full roadmap are publicly available.

In this panel session, the participants discussed the roadmap and, in particular, the four pillars that underpin the document, namely:

  1. Research, development and innovation
  2. Skills and diversity
  3. Data, infrastructure and public trust
  4. National, cross-sector adoption

Although ethical AI is not mentioned explicitly in these pillars, the panellists were keen to point out that, in their view, no AI should be created unless it is ethical and safe. Therefore, that should be the first consideration for anyone designing, researching, building, or deploying an AI system.

There was a lot of discussion around the current skills gap in the UK. This exists across academia, industry and the general public. As well as skilling-up people who work with AI, there is also a pressing need to educate the general public who are affected by, and use, AI on a daily basis. One suggestion was to have conversations in individual workplaces and make that discussion specific to each particular environment. The panel highlighted the need to make this education active – rather than just telling people about AI, let them talk about it, try it out, and then listen to their suggestions.

Another common theme throughout the sessions was that we need to mix across disciplines. AI is not just about the computer scientists designing the latest algorithm; it involves everybody, and the whole of society should be represented.

Safe and ethical priorities for AI

The topic of the AI Council roadmap continued into the next session, beginning with a fireside chat between Tabitha Goldstaub and Adrian Weller. This quote from the roadmap was the foundation for some discussion points:

The UK will only feel the full benefits of AI if all parts of society have full confidence in the science and the technologies, and in the governance and regulation that enable them. That confidence will depend on the existence of systems that ensure full accountability, clear ethics and transparency. Developing the best science and the most robust applications requires commitment to an ambitious programme of investment in talent; one that promotes cutting edge skills and does so in ways that makes AI accessible in ways that are more diverse and inclusive.

Tabitha spoke about the need for all voices to be heard when it comes to AI deployment. Because the term “ethical AI” is so subjective, we need to consider diverse opinions. When writing the roadmap, the council identified four key areas that define “safe and ethical AI”:

  1. Technically safe – AI systems should be designed to allow the user to make ethical decisions. They should be robust, private and transparent.
  2. Regulation – big corporations don’t focus on ethical AI unless they have to, so regulation is key.
  3. Communication – we need to explain what ethical AI is, and how to develop and deploy it.
  4. AI that is in the public good – should we only create AI that is in the public interest?

Regulation, skills, fairness, diversity, and stand-up

There were a number of interesting sessions throughout the day covering topics such as regulation and how to mitigate the skills gap. A panel discussed how to do better in data science, in terms of algorithmic fairness and diversity. They identified challenges and discussed potential actions.

Day one of the event closed with something a bit different; Matt Parker, a stand-up comedian and mathematician, delivered an entertaining set involving a demo of programmed Christmas lights, and talk of spreadsheet errors. There was even a little audience participation as attendees tried to debug some of his code.




Lucy Smith is Senior Managing Editor for AIhub.
©2024 - Association for the Understanding of Artificial Intelligence