AIhub.org
 

UK needs AI legislation to create trust so companies can ‘plug AI into British economy’ – report


23 October 2023




[Illustration: a worried person at a laptop watches an AI character, drawn as a graph of nodes and edges, approach icons representing the hospital, transport and emergency-services industries.] Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / Licensed by CC-BY 4.0

By Fred Lewsey

The British government should offer tax breaks for businesses developing AI-powered products and services, or applying AI to their existing operations, to “unlock the UK’s potential for augmented productivity”, according to a new University of Cambridge report.

Researchers argue that the UK currently lacks the computing capacity and capital required to build “generative” machine learning models fast enough to compete with US companies such as Google, Microsoft or OpenAI.

Instead, they call for a UK focus on leveraging these new AI systems for real-world applications – such as developing new diagnostic products and addressing the shortage of software engineers – which could provide a major boost to the British economy.

However, the researchers caution that without new legislation to ensure the UK has solid legal and ethical AI regulation, such plans could falter. British industries and the public may struggle to trust emerging AI platforms such as ChatGPT enough to invest time and money into skilling up.

The policy report is a collaboration between Cambridge’s Minderoo Centre for Technology and Democracy, Bennett Institute for Public Policy, and ai@cam: the University’s flagship initiative on artificial intelligence.

“Generative AI will change the nature of how things are produced, just as what occurred with factory assembly lines in the 1910s or globalised supply chains at the turn of the millennium,” said Dame Diane Coyle, Bennett Professor of Public Policy. “The UK can become a global leader in actually plugging these AI technologies into the economy.”

Prof Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy, said: “A new Bill that fosters confidence in AI by legislating for data protection, intellectual property and product safety is vital groundwork for using this technology to increase UK productivity.”

Generative AI uses algorithms trained on giant datasets to output original, high-quality text, images, audio or video at ferocious speed and scale. The text-based ChatGPT dominated headlines this year. Other examples include Midjourney, which can conjure imagery in virtually any style in seconds.

Networked grids – or clusters – of computing hardware called graphics processing units (GPUs) are required to handle the vast quantities of data that hone these machine-learning models. For example, ChatGPT is estimated to cost $40 million a month in computing alone. In the spring of this year, the UK chancellor announced £100 million for a “Frontier AI Taskforce” to scope out the creation of home-grown AI to rival the likes of Google Bard.

However, the report points out that the supercomputer announced by the UK chancellor is unlikely to be online until 2026, while none of the big three US tech companies – Amazon, Microsoft or Google – have GPU clusters in the UK.

“The UK has no companies big enough to invest meaningfully in foundation model development,” said report co-author Sam Gilbert. “State spending on technology is modest compared to China and the US, as we have seen in the UK chip industry.”

As such, the UK should use its strengths in fintech, cybersecurity and health tech to build software – the apps, tools and interfaces – that harnesses AI for everyday use, says the report.

“Generative AI has been shown to speed up coding by some 55%, which could help with the UK’s chronic developer shortage,” said Gilbert. “In fact, this type of AI can even help non-programmers to build sophisticated software.”

Moreover, the UK has world-class research universities that could drive progress in tackling AI stumbling blocks: from the cooling of data centres to the detection of AI-generated misinformation.

At the moment, however, UK organisations lack incentives to comply with responsible AI. “The UK’s current approach to regulating generative AI is based on a set of vague and voluntary principles that nod at security and transparency,” said report co-author Dr Ann Kristin Glenster.

“The UK will only be able to realise the economic benefits of AI if the technology can be trusted, and that can only be ensured through meaningful legislation and regulation.”

Along with new AI laws, the report suggests a series of tax incentives, such as an enhanced Seed Enterprise Investment Scheme, to increase the supply of capital to AI start-ups, as well as tax credits for all businesses including generative AI in their operations. Challenge prizes could be launched to identify bottom-up uses of generative AI from within organisations.

Read the report in full

Policy Brief: Generative AI, by Dr Ann Kristin Glenster & Sam Gilbert.




University of Cambridge




©2025.05 - Association for the Understanding of Artificial Intelligence


 











