AIhub.org
 

UK needs AI legislation to create trust so companies can ‘plug AI into British economy’ – report

23 October 2023




Image: Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / Licensed under CC-BY 4.0

By Fred Lewsey

The British government should offer tax breaks for businesses developing AI-powered products and services, or applying AI to their existing operations, to “unlock the UK’s potential for augmented productivity”, according to a new University of Cambridge report.

Researchers argue that the UK currently lacks the computing capacity and capital required to build “generative” machine learning models fast enough to compete with US companies such as Google, Microsoft or OpenAI.

Instead, they call for a UK focus on leveraging these new AI systems for real-world applications – such as developing new diagnostic products and addressing the shortage of software engineers – which could provide a major boost to the British economy.

However, the researchers caution that without new legislation to ensure the UK has solid legal and ethical AI regulation, such plans could falter. British industries and the public may struggle to trust emerging AI platforms such as ChatGPT enough to invest time and money into skilling up.

The policy report is a collaboration between Cambridge’s Minderoo Centre for Technology and Democracy, Bennett Institute for Public Policy, and ai@cam: the University’s flagship initiative on artificial intelligence.

“Generative AI will change the nature of how things are produced, just as what occurred with factory assembly lines in the 1910s or globalised supply chains at the turn of the millennium,” said Dame Diane Coyle, Bennett Professor of Public Policy. “The UK can become a global leader in actually plugging these AI technologies into the economy.”

Prof Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy, said: “A new Bill that fosters confidence in AI by legislating for data protection, intellectual property and product safety is vital groundwork for using this technology to increase UK productivity.”

Generative AI uses algorithms trained on giant datasets to output original high-quality text, images, audio, or video at ferocious speed and scale. The text-based ChatGPT dominated headlines this year. Other examples include Midjourney, which can conjure imagery in almost any style in seconds.
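
For readers curious what “prompt in, text out” looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library; the library and the model name are illustrative assumptions, not systems named in the report.

```python
# A minimal sketch of prompt-in, text-out generation with an open-source model.
# Assumes the Hugging Face `transformers` library (with PyTorch) is installed;
# the model "gpt2" is an illustrative stand-in, not a system from the report.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI could boost UK productivity by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

Larger commercial models work on the same principle, but are trained and served on the GPU clusters discussed below.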

Networked grids – or clusters – of computing hardware called Graphics Processing Units (GPUs) are required to handle the vast quantities of data that hone these machine-learning models. For example, ChatGPT is estimated to cost $40 million a month in computing alone. In the spring of this year, the UK chancellor announced £100 million for a “Frontier AI Taskforce” to scope out the creation of home-grown AI to rival the likes of Google Bard.
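
Figures of that scale can be sanity-checked with simple arithmetic. The sketch below uses entirely hypothetical cluster sizes and hourly rates – none taken from the report – to show how such monthly compute bills are typically estimated:

```python
# Back-of-envelope estimate of a monthly GPU-cluster bill.
# Every figure here is a hypothetical placeholder, not a number from the report.
num_gpus = 25_000            # GPUs running the service (assumed)
usd_per_gpu_hour = 2.20      # cloud list price per GPU-hour (assumed)
hours_per_month = 24 * 30    # round-the-clock operation

monthly_cost = num_gpus * usd_per_gpu_hour * hours_per_month
print(f"Estimated monthly compute cost: ${monthly_cost:,.0f}")
# Prints: Estimated monthly compute cost: $39,600,000
```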

However, the report points out that the supercomputer announced by the UK chancellor is unlikely to be online until 2026, while none of the big three US tech companies – Amazon, Microsoft or Google – have GPU clusters in the UK.

“The UK has no companies big enough to invest meaningfully in foundation model development,” said report co-author Sam Gilbert. “State spending on technology is modest compared to China and the US, as we have seen in the UK chip industry.”

As such, the UK should use its strengths in fintech, cybersecurity and health tech to build software – the apps, tools and interfaces – that harnesses AI for everyday use, says the report.

“Generative AI has been shown to speed up coding by some 55%, which could help with the UK’s chronic developer shortage,” said Gilbert. “In fact, this type of AI can even help non-programmers to build sophisticated software.”

Moreover, the UK has world-class research universities that could drive progress in tackling AI stumbling blocks: from the cooling of data centres to the detection of AI-generated misinformation.

At the moment, however, UK organisations lack incentives to adopt responsible AI practices. “The UK’s current approach to regulating generative AI is based on a set of vague and voluntary principles that nod at security and transparency,” said report co-author Dr Ann Kristin Glenster.

“The UK will only be able to realise the economic benefits of AI if the technology can be trusted, and that can only be ensured through meaningful legislation and regulation.”

Along with new AI laws, the report suggests a series of tax incentives, such as an enhanced Seed Enterprise Investment Scheme, to increase the supply of capital to AI start-ups, as well as tax credits for all businesses incorporating generative AI into their operations. Challenge prizes could be launched to identify bottom-up uses of generative AI from within organisations.

Read the report in full

Policy Brief: Generative AI, by Dr Ann Kristin Glenster & Sam Gilbert.



