AIhub.org
 

UK needs AI legislation to create trust so companies can ‘plug AI into British economy’ – report


23 October 2023




Image: Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / Licensed by CC-BY 4.0

By Fred Lewsey

The British government should offer tax breaks for businesses developing AI-powered products and services, or applying AI to their existing operations, to “unlock the UK’s potential for augmented productivity”, according to a new University of Cambridge report.

Researchers argue that the UK currently lacks the computing capacity and capital required to build “generative” machine learning models fast enough to compete with US companies such as Google, Microsoft or OpenAI.

Instead, they call for a UK focus on leveraging these new AI systems for real-world applications – such as developing new diagnostic products and addressing the shortage of software engineers – which could provide a major boost to the British economy.

However, the researchers caution that without new legislation to ensure the UK has solid legal and ethical AI regulation, such plans could falter. British industries and the public may struggle to trust emerging AI platforms such as ChatGPT enough to invest time and money into skilling up.

The policy report is a collaboration between Cambridge’s Minderoo Centre for Technology and Democracy, Bennett Institute for Public Policy, and ai@cam: the University’s flagship initiative on artificial intelligence.

“Generative AI will change the nature of how things are produced, just as what occurred with factory assembly lines in the 1910s or globalised supply chains at the turn of the millennium,” said Dame Diane Coyle, Bennett Professor of Public Policy. “The UK can become a global leader in actually plugging these AI technologies into the economy.”

Prof Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy, said: “A new Bill that fosters confidence in AI by legislating for data protection, intellectual property and product safety is vital groundwork for using this technology to increase UK productivity.”

Generative AI uses algorithms trained on giant datasets to output original high-quality text, images, audio, or video at ferocious speed and scale. The text-based ChatGPT dominated headlines this year. Other examples include Midjourney, which can conjure imagery in almost any style in seconds.

Networked grids – or clusters – of computing hardware called Graphics Processing Units (GPUs) are required to handle the vast quantities of data that hone these machine-learning models. For example, ChatGPT is estimated to cost $40 million a month in computing alone. In the spring of this year, the UK chancellor announced £100 million for a “Frontier AI Taskforce” to scope out the creation of home-grown AI to rival the likes of Google Bard.

However, the report points out that the supercomputer announced by the UK chancellor is unlikely to be online until 2026, while none of the big three US tech companies – Amazon, Microsoft or Google – have GPU clusters in the UK.

“The UK has no companies big enough to invest meaningfully in foundation model development,” said report co-author Sam Gilbert. “State spending on technology is modest compared to China and the US, as we have seen in the UK chip industry.”

As such, the UK should use its strengths in fin-tech, cybersecurity and health-tech to build software – the apps, tools and interfaces – that harnesses AI for everyday use, says the report.

“Generative AI has been shown to speed up coding by some 55%, which could help with the UK’s chronic developer shortage,” said Gilbert. “In fact, this type of AI can even help non-programmers to build sophisticated software.”

Moreover, the UK has world-class research universities that could drive progress in tackling AI stumbling blocks: from the cooling of data centres to the detection of AI-generated misinformation.

At the moment, however, UK organisations lack incentives to adopt responsible AI practices. “The UK’s current approach to regulating generative AI is based on a set of vague and voluntary principles that nod at security and transparency,” said report co-author Dr Ann Kristin Glenster.

“The UK will only be able to realise the economic benefits of AI if the technology can be trusted, and that can only be ensured through meaningful legislation and regulation.”

Along with new AI laws, the report suggests a series of tax incentives, such as an enhanced Seed Enterprise Investment Scheme, to increase the supply of capital to AI start-ups, as well as tax credits for all businesses including generative AI in their operations. Challenge prizes could be launched to identify bottom-up uses of generative AI from within organisations.

Read the report in full

Policy Brief: GENERATIVE AI, Dr Ann Kristin Glenster & Sam Gilbert.




University of Cambridge










©2024 - Association for the Understanding of Artificial Intelligence