AIhub.org
AI UK – discussing the national AI strategy, AI Standards Hub, and data in the public eye

by Lucy Smith
24 March 2022



Hosted by the Alan Turing Institute, AI UK is a two-day conference that showcases artificial intelligence and data science research, development, and policy in the UK. This year, the event took place on 22 and 23 March, and participants were treated to talks, panel discussions, and conversations on a wide variety of topics.

The past year has seen much activity in the UK with regard to strategy, governance and policy. The policy-related sessions at AI UK provided the opportunity for participants to find out more about, amongst other things, the progress of AI-related legislation, regulation, the national AI strategy, the national AI Standards Hub, and how data is used at a governmental level. We take a look at three of the policy and strategy-related sessions that took place during the two days.

Breakfast with the Office for AI

This session saw Tabitha Goldstaub, Sana Khareghani and Sara El-Hanfy discuss the UK AI strategy, and the progress that has been made so far in carrying out the actions identified in the document.

The Office for AI was set up in 2018 with the goal of building a foundation to grow AI in the country. Since then, the Office has been responsible for initiating the AI Council Roadmap, and publishing a national AI strategy.

Sana said that we’ve moved to the delivery phase of the strategy, which will require collaboration between government, industry, and academia. She highlighted some of the key initiatives that are underway, and how these relate to the three parts of the strategy: laying long-term foundations, governance, and benefits for society.

In terms of laying the foundations, there has been investment in education, specifically post-graduate conversion courses, with scholarships available for people from under-represented groups. There is also an initiative underway to help further people’s understanding of intellectual property.

Governance of AI has been an area that has received a lot of focus. Sana noted that having the right governance in place allows AI to flourish; it’s not the case that regulation will stifle innovation. The key is to ensure that we have the right framework and guidance in place, and a white paper on AI governance will be released later this year. Governance is not just about regulation; it also covers tech standards, algorithmic transparency, audits and certification. The right governance provides users with the assurance that the tools they are using are effective, trustworthy and legal. These issues are considered in the roadmap to an effective AI assurance ecosystem, and by the AI Standards Hub.

In terms of the benefits to society, two aims are to push AI beyond London and the South East, and to extend the use of AI to sectors that haven’t been at the forefront but have potential – for example, farming and energy.

The session closed with the announcement of a new tool, which will launch next summer, to map the AI landscape. This tool will allow users to explore companies, funders, incubators and academic institutions working on AI. To keep up-to-date with progress, and hear when the tool launches, follow the hashtag #UKWAIfinder.

Building the AI Standards Hub

The AI Standards Hub is a new initiative dedicated to community building, knowledge sharing, and international engagement around AI standardisation.

Even though we may not give it too much thought, standards are something that permeate our lives. For example, paper sizes, digital file formats, wireless communication, and safety of electrical equipment are all subject to internationally recognised standards. Standards are generally voluntary, but can have important connections to regulation. They have a variety of purposes: they can aid interoperability and efficiency, they facilitate international trade, and they ensure quality, safety and trustworthiness. Recently, attention has turned to creating standards for AI. It is these standards that are the focus of the AI Standards Hub.

The AI Standards Hub is still at an early stage of development, and there hasn’t been any public-facing activity to date. However, the team have been working on laying the foundations, and in this session they gave us a look behind the scenes, explained a bit about the motivation for the initiative, and let the community know how they could get involved.

The initiative will comprise four activities, which will all be brought together through a website.

  1. AI standards observatory – an online database of relevant standards that users can browse.
  2. Connecting and community building – tools to bring stakeholders together.
  3. Education, training and professional development.
  4. Research and analysis – pursuing research on issues such as identifying gaps in the standards landscape.

The mission of the hub is to empower stakeholders and to take a multi-stakeholder approach. To that end, the team are keen to get the community involved. If you would like to receive updates and provide input, you can complete this form.

What can AI do for our public good? In conversation with Patrick Vallance

Sir Patrick Vallance is the UK’s Chief Scientific Adviser, and played a key role in providing information to the government during the pandemic. This interesting conversation focussed on the role of data and AI for the public good, and how critical data has been throughout the COVID-19 pandemic.

The discussion started with a look back to the start of the pandemic. Patrick said that, at that time, the UK lacked data. During the course of the pandemic, the whole data ecosystem has developed and improved markedly. In terms of data integration, we now have a range of different data sources linked to one another. Additionally, there are now far more data collection systems, and these have been set up to allow data to flow to the people who need it.

Data visualisation is something that has really come to the fore over the past two years. The way in which data is presented is critical to informing the public as to why certain decisions have been taken. Patrick also noted that ministers were calling for data on a regular basis, specifically in the form of understandable visualisations.

The lessons that have been learnt, in terms of data collection, integration, and visualisation, can also be applied to other future risks that we may face. When considering risk management and data, one needs to think about: who is collecting the data, whether there are flows to get that data to the right place, and which interoperabilities are critical.

Patrick commented that we’ve seen a real thirst for data across our society during the pandemic, with many people keen to play with the data and use it in different ways. He sees it as his job to make sure that desire for data continues. That can be done by embedding systems and processes across different organisations and companies, and by making sure users have the knowledge to use the data and to ask the right questions of it.




Lucy Smith, Managing Editor for AIhub.

©2021 - Association for the Understanding of Artificial Intelligence