
AI UK – an event from The Turing Institute

by Lucy Smith
24 March 2021




The Turing Institute is the UK’s national institute for data science and artificial intelligence. On 23-24 March they held an event to showcase the very best AI academic content in the UK. Here, we give a flavour of the proceedings and highlight some of the interesting sessions and panel debates that took place on the first day.

As the national institute for data science and artificial intelligence, The Turing Institute brings together people from a range of disciplines, and works with universities, centres of research excellence, industry and the public sector. The Institute has three main goals: 1) to advance world-class research and apply it to real-world problems, 2) to train the leaders of the future, 3) to lead the public conversation.

During the introduction to the event, and in the following sessions, there was a strong focus on the need to create ethical and trustworthy AI, and the need to address the skills gap as AI is increasingly deployed. Both of these must be realised to win the trust and engagement of the general public.

Navigating the AI Council’s AI roadmap – a panel debate

The UK AI Council is an independent expert committee that provides advice to the government on the AI ecosystem. Earlier this year, the council published the UK AI roadmap, an independent report setting out recommendations to guide the government’s strategic direction on AI. Both the executive summary and the full roadmap are available online.

In this panel session, the participants discussed the roadmap and, in particular, the four pillars that underpin the document, namely:

  1. Research, development and innovation
  2. Skills and diversity
  3. Data, infrastructure and public trust
  4. National, cross-sector adoption

Although ethical AI is not mentioned explicitly in these pillars, the panellists were keen to point out that, in their view, no AI should be created unless it is ethical and safe. Therefore, that should be the first consideration for anyone designing, researching, building, or deploying an AI system.

There was a lot of discussion around the current skills gap in the UK. This exists across academia, industry and the general public. As well as skilling-up people who work with AI, there is also a pressing need to educate the general public who are affected by, and use, AI on a daily basis. One suggestion was to have conversations in individual workplaces and make that discussion specific to each particular environment. The panel highlighted the need to make this education active – rather than just telling people about AI, let them talk about it, try it out, and then listen to their suggestions.

Another common theme throughout the sessions was that we need to mix across disciplines. AI is not just about the computer scientists designing the latest algorithm; it involves everybody, and the whole of society should be represented.

Safe and ethical priorities for AI

The topic of the AI Council roadmap continued into the next session, beginning with a fireside chat between Tabitha Goldstaub and Adrian Weller. This quote from the roadmap was the foundation for some discussion points:

The UK will only feel the full benefits of AI if all parts of society have full confidence in the science and the technologies, and in the governance and regulation that enable them. That confidence will depend on the existence of systems that ensure full accountability, clear ethics and transparency. Developing the best science and the most robust applications requires commitment to an ambitious programme of investment in talent; one that promotes cutting edge skills and does so in ways that makes AI accessible in ways that are more diverse and inclusive.

Tabitha spoke about the need for all voices to be heard when it comes to AI deployment. Because the term “ethical AI” is so subjective, we need to consider diverse opinions. In writing the roadmap, the council considered four key areas in defining “safe and ethical AI”:

  1. Technically safe – AI systems should be designed to allow the user to make ethical decisions. They should be robust, private and transparent.
  2. Regulation – big corporations don’t focus on ethical AI unless they have to, so regulation is key.
  3. Communication – we need to explain what ethical AI is, and how to develop and deploy it.
  4. AI that is in the public good – should we only create AI that is in the public interest?

Regulation, skills, fairness, diversity, and stand-up

There were a number of interesting sessions throughout the day covering topics such as regulation and how to mitigate the skills gap. A panel discussed how to do better in data science, in terms of algorithmic fairness and diversity. They identified challenges and discussed potential actions.

Day one of the event closed with something a bit different: Matt Parker, a stand-up comedian and mathematician, delivered an entertaining set involving a demo of programmed Christmas lights and a discussion of spreadsheet errors. There was even a little audience participation as attendees tried to debug some of his code.




Lucy Smith, Managing Editor for AIhub.



