AI UK – an event from The Turing Institute


by Lucy Smith
24 March 2021



The Turing Institute is the UK’s national institute for data science and artificial intelligence. On 23-24 March they held an event to showcase the very best AI academic content in the UK. Here, we give a flavour of the proceedings and highlight some of the interesting sessions and panel debates that took place on the first day.

As the national institute for data science and artificial intelligence, The Turing Institute brings together people from a range of disciplines, and works with universities, centres of research excellence, industry and the public sector. The Institute has three main goals: 1) to advance world-class research and apply it to real-world problems, 2) to train the leaders of the future, 3) to lead the public conversation.

During the introduction to the event, and in the following sessions, there was a strong focus on the need to create ethical and trustworthy AI, and the need to address the skills gap as AI is increasingly deployed. Both of these must be realised to win the trust and engagement of the general public.

Navigating the AI Council’s AI roadmap – a panel debate

The UK AI Council is an independent expert committee that provides advice to the government on the AI ecosystem. Earlier this year, it published the UK AI roadmap, an independent report providing recommendations to guide the government’s strategic direction on AI. You can read the executive summary here, and the full roadmap here.

In this panel session, the participants discussed the roadmap and, in particular, the four pillars that underpin the document, namely:

  1. Research, development and innovation
  2. Skills and diversity
  3. Data, infrastructure and public trust
  4. National, cross-sector adoption

Although ethical AI is not mentioned explicitly in these pillars, the panellists were keen to point out that, in their view, no AI should be created unless it is ethical and safe. Therefore, that should be the first consideration for anyone designing, researching, building, or deploying an AI system.

There was a lot of discussion around the current skills gap in the UK. This exists across academia, industry and the general public. As well as skilling up people who work with AI, there is also a pressing need to educate the general public, who are affected by, and use, AI on a daily basis. One suggestion was to have conversations in individual workplaces and make that discussion specific to each particular environment. The panel highlighted the need to make this education active – rather than just telling people about AI, let them talk about it, try it out, and then listen to their suggestions.

Another common theme throughout the sessions was that we need to mix across disciplines. AI is not just about the computer scientists designing the latest algorithm; it involves everybody, and the whole of society should be represented.

Safe and ethical priorities for AI

The topic of the AI Council roadmap continued into the next session, beginning with a fireside chat between Tabitha Goldstaub and Adrian Weller. This quote from the roadmap was the foundation for some discussion points:

The UK will only feel the full benefits of AI if all parts of society have full confidence in the science and the technologies, and in the governance and regulation that enable them. That confidence will depend on the existence of systems that ensure full accountability, clear ethics and transparency. Developing the best science and the most robust applications requires commitment to an ambitious programme of investment in talent; one that promotes cutting edge skills and does so in ways that makes AI accessible in ways that are more diverse and inclusive.

Tabitha spoke about the need for all voices to be heard when it comes to AI deployment. Because the term “ethical AI” is so subjective, we need to consider diverse opinions. When writing the roadmap, the council considered four key areas in defining “safe and ethical AI”:

  1. Technically safe – AI systems should be designed to allow the user to make ethical decisions. They should be robust, private and transparent.
  2. Regulation – big corporations don’t focus on ethical AI unless they have to, so regulation is key.
  3. Communication – we need to explain what ethical AI is, and how to develop and deploy it.
  4. AI that is in the public good – should we only create AI that is in the public interest?

Regulation, skills, fairness, diversity, and stand-up

There were a number of interesting sessions throughout the day covering topics such as regulation and how to mitigate the skills gap. A panel discussed how to do better in data science, in terms of algorithmic fairness and diversity. They identified challenges and discussed potential actions.

Day one of the event closed with something a bit different; Matt Parker, a stand-up comedian and mathematician, delivered an entertaining set involving a demo of programmed Christmas lights, and talk of spreadsheet errors. There was even a little audience participation as attendees tried to debug some of his code.




Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:

