AIhub.org
 

AI UK – discussing the national AI strategy, AI Standards Hub, and data in the public eye


by Lucy Smith
24 March 2022




AI UK 2022 logo
Hosted by the Alan Turing Institute, AI UK is a two-day conference that showcases artificial intelligence and data science research, development, and policy in the UK. This year, the event took place on 22 and 23 March, and participants were treated to talks, panel discussions, and conversations on a wide variety of topics.

The past year has seen much activity in the UK with regard to strategy, governance and policy. The policy-related sessions at AI UK provided the opportunity for participants to find out more about, amongst other things, the progress of AI-related legislation, regulation, the national AI strategy, the national AI Standards Hub, and how data is used at a governmental level. We take a look at three of the policy and strategy-related sessions that took place during the two days.

Breakfast with the Office for AI

This session saw Tabitha Goldstaub, Sana Khareghani and Sara El-Hanfy discuss the UK AI strategy, and the progress that has been made so far in carrying out the actions identified in the document.

The Office for AI was set up in 2018 with the goal of building a foundation to grow AI in the country. Since then, the Office has been responsible for initiating the AI Council Roadmap, and publishing a national AI strategy.

Sana said that we’ve moved to the delivery phase of the strategy, which will require collaboration between government, industry, and academia. She highlighted some of the key initiatives that are underway, and how these relate to the three parts of the strategy: laying long-term foundations, governance, and benefits for society.

In terms of laying the foundations, there has been investment in education, specifically post-graduate conversion courses, with scholarships available for people from under-represented groups. There is also an initiative underway to help further people’s understanding of intellectual property.

Governance of AI has been an area which has received a lot of focus. Sana noted that having the right governance in place allows AI to flourish. It’s not the case that regulation will stifle innovation. The key is to ensure that we have the right framework and guidance in place. In fact, there will be a white paper on AI governance released later this year. Governance is not just about regulation; it also covers tech standards, algorithmic transparency, audits and certification. The right governance provides users with the assurance that the tools they are using are effective, trustworthy and legal. These issues are considered in the roadmap to an effective AI assurance ecosystem, and by the AI Standards Hub.

In terms of the benefits to society, two aims are to push AI outside of London and the Southeast, and to extend the use of AI to sectors that haven’t been at the forefront, but have potential – for example farming and energy.

The session closed with the announcement of a new tool, which will launch next summer, to map the AI landscape. This tool will allow users to explore companies, funders, incubators and academic institutions working on AI. To keep up to date with progress, and hear when the tool launches, follow the hashtag #UKWAIfinder.

Building the AI Standards Hub

The AI Standards Hub is a new initiative dedicated to community building, knowledge sharing, and international engagement around AI standardisation.

Even though we may not give it too much thought, standards are something that permeate our lives. For example, paper sizes, digital file formats, wireless communication, and safety of electrical equipment are all subject to internationally recognised standards. Standards are generally voluntary, but can have important connections to regulation. They have a variety of purposes: they can aid interoperability and efficiency, they facilitate international trade, and they ensure quality, safety and trustworthiness. Recently, attention has turned to creating standards for AI. It is these standards that are the focus of the AI Standards Hub.

The AI Standards Hub is still at an early stage of development, and there hasn’t been any public-facing activity to date. However, the team have been working on laying the foundations, and in this session they gave us a look behind the scenes, explained a bit about the motivation for the initiative, and let the community know how they could get involved.

The initiative will comprise four activities, which will all be brought together through a website.

  1. AI standards observatory – an online database of relevant standards which users can browse.
  2. Connecting and community building – tools to bring stakeholders together.
  3. Education, training, and professional development.
  4. Research and analysis – pursuing research on issues such as identifying gaps in the standards landscape.

The mission of the hub is to empower stakeholders and to take a multi-stakeholder approach. To that end, the team are keen to get the community involved. If you would like to receive updates and provide input, you can complete this form.

What can AI do for our public good? In conversation with Patrick Vallance

Sir Patrick Vallance is the UK’s Chief Scientific Adviser, and played a key role in providing information to the government during the pandemic. This interesting conversation focussed on the role of data and AI for the public good, and how critical data has been during the COVID-19 pandemic.

The discussion started with a look back to the start of the pandemic. Patrick said that, at that time, the UK lacked data. During the course of the pandemic, the whole data ecosystem has developed and improved markedly. In terms of data integration, we now have a range of different data sources linked to one another. Additionally, there are now far more data collection systems, and these have been set up to allow data to flow to the people who need it.

Data visualisation is something that has really come to the fore over the past two years. The way in which data is presented is critical to informing the public as to why certain decisions have been taken. Patrick also noted that ministers were calling for data on a regular basis, specifically in the form of understandable visualisations.

The lessons that have been learnt, in terms of data collection, integration, and visualisation, can also be applied to other future risks that we may face. When considering risk management and data, one needs to think about: who is collecting the data, whether there are flows to get that data to the right place, and which interoperabilities are critical.

Patrick commented that we’ve seen a real thirst for data across our society during the pandemic, with many people keen to play with the data and use it in different ways. He sees it as his job to make sure that this desire for data continues. That can be done by embedding systems and processes across different organisations and companies, and by making sure users have the knowledge to use the data and to ask the right questions of it.




Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:


