ChatGPT: what the law says about who owns the copyright of AI-generated content


20 April 2023




By Sercan Ozcan, University of Portsmouth; Joe Sekhon, University of Portsmouth, and Oleksandra Ozcan, University of Portsmouth

The AI chatbot ChatGPT produces content that can appear to have been created by a human. There are many proposed uses for the technology, but its impressive capabilities raise important questions about ownership of the content.

UK legislation has a definition for computer-generated works. Under the Copyright, Designs and Patents Act 1988 they are “generated by computer in circumstances such that there is no human author of the work”. The law suggests content generated by an artificial intelligence (AI) can be protected by copyright. However, the original sources of answers generated by AI chatbots can be difficult to trace – and they might include copyrighted works.

The first question is whether ChatGPT should be allowed to use original content generated by third parties to generate its responses. The second is whether only humans can be credited as the authors of AI-generated content, or whether the AI itself can be regarded as an author – particularly when that output is creative.

Let’s deal with question one. The technology underpinning ChatGPT is known as a large language model (LLM). To improve at what it does, it is trained on large datasets, including vast numbers of websites and books.

At the moment, the UK allows AI developers to pursue text and data mining (TDM), but only for non-commercial purposes. OpenAI’s terms of use assign to the users “all its right, title and interest in the output”.

But the company says it’s up to users to ensure the way they use that content does not violate any laws. The terms and conditions are also subject to change, so do not carry the stability and force of a legal right such as copyright.

The only solution will be to clarify laws and policies. Otherwise, every organisation will have to take legal action individually, aiming to show that they own the works used by an AI. Furthermore, if governments do not act, we are approaching a situation where all copyrighted materials will be used by others without their original authors’ consent.

Question of ownership

Now to question two: who can claim copyright to AI-generated content? In the absence of a claim by the owner of the original content used to generate an answer, it is possible that copyright to the output from a chatbot could lie with individual users or with the companies that developed the AI.

Copyright law is based around a general principle that only content created by human beings can be protected. The algorithms underpinning ChatGPT were developed at OpenAI, so the company would appear to hold copyright protection over those. But this might not extend to chatbot responses.

There is another option regarding the ownership of AI-generated content: the AI itself. UK law currently prevents an AI from owning copyright (or even being recognised as the creator of a work), because it is not a human and therefore cannot be treated as an author or owner under the Copyright, Designs and Patents Act. This position is also unlikely to change anytime soon, given the UK government’s response to its consultation on AI.

Where a literary, dramatic, musical or artistic work is made by an employee in the course of their employment, their employer is the first owner of any copyright in the work – subject to any agreement to the contrary.

For now, policymakers are sticking to human creativity as the prism through which copyright is granted. However, as AI develops and is able to do more, policymakers might consider granting legal capacity to AIs themselves. This would represent a fundamental shift in how copyright law operates and a reimagining of who (or what) can be classed as an author and owner of copyright.

Such a change would have implications for business as firms integrate AI into their products and services. Microsoft recently announced that it will be embedding its product Copilot – based on ChatGPT – into the company’s software, such as Word, PowerPoint and Excel. Copilot can help users with written communication and summarise large volumes of data.

More developments like this are sure to follow, and early adopter firms have a chance to capitalise on the current situation, by using AI to increase the efficiency of their operations. Firms can often gain an advantage when they are first to introduce a product or service to a market – a situation called the “first-mover advantage”.

Future shifts

The UK government recently carried out a consultation on AI and copyright. Two conflicting views emerged. The tech sector believes the copyright to AI-generated content should belong to users, whereas the creative sector wants this content to be excluded from ownership completely. The UK government has not acted on the findings and instead recommended further consultation between the interested parties.

If copyright law shifts away from its focus on human agency in future, one could imagine a scenario where an AI is classed as the author and the developers of that AI as the owners of the output. This could create a situation where a handful of powerful AI companies wield colossal influence.

They could end up owning hundreds of thousands of copyrighted materials – songs, published works, visuals and other digital assets. This could arguably lead to a dystopian situation where the majority of newly created works are generated by AI and owned by businesses.

It seems logical that such knowledge should remain in the public domain. Perhaps the solution is for each person or company to declare their contribution when they use AI – or for that contribution to be calculated automatically by software – so that credit or financial benefit is apportioned according to the work each party contributed.
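As a rough illustration of how such proportional attribution might work, the sketch below splits a royalty pool across contributors in proportion to their declared share of a work. This is a hypothetical example, not an existing scheme: the function name, the contributor labels and the contribution figures are all invented for illustration.

```python
# Hypothetical sketch: split a royalty pool in proportion to each
# contributor's declared share of an AI-assisted work.
# All names and figures below are illustrative assumptions, not a real scheme.

def apportion_royalties(contributions: dict[str, float], pool: float) -> dict[str, float]:
    """Return each contributor's payout, proportional to their declared contribution."""
    total = sum(contributions.values())
    if total <= 0:
        raise ValueError("Declared contributions must sum to a positive amount")
    return {name: pool * share / total for name, share in contributions.items()}

# Example: a human author, an AI developer, and the original sources the model drew on
declared = {"human_author": 0.5, "ai_developer": 0.3, "original_sources": 0.2}
print(apportion_royalties(declared, pool=1000.0))
# {'human_author': 500.0, 'ai_developer': 300.0, 'original_sources': 200.0}
```

In practice, of course, the hard part is not the arithmetic but agreeing on how each party’s contribution would be measured or verified in the first place.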

AI content that is itself based on copyrighted materials remains problematic. An inability to rely on copyrighted materials could undermine the AI system’s ability to answer prompts from end users. But if the content is to be based on protected works, we would need to accept a new era of open innovation in which intellectual property rights no longer matter.

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.

