AIhub.org

US office the latest to deny patents where AI system listed as inventor


by Lucy Smith
04 May 2020



Last summer it was reported that patents had been filed in the USA and Europe listing an artificial intelligence system as the inventor. The patents in question were for a food container and a warning light, and were filed by Stephen Thaler on behalf of DABUS (an AI system). Those applications have now been considered, and on 22 April the US Patent and Trademark Office (USPTO) reached the same verdict as the UK and European offices, denying the patents.

In his application Thaler asserted that the inventions were generated by DABUS (which he dubs a “creativity machine”), and that the system was not created to solve any particular problem. He therefore claimed that it was the machine, not a person, that recognised the novelty of the invention.

The USPTO ruled that applications require the inventor to be a “natural person”, and denied the patents on that basis.

In the UK case, in which the verdict was reached on 4 December 2019, the UK Intellectual Property Office (UKIPO) also found that “DABUS is not a person […] so cannot be considered an inventor”. The office noted that “the fundamental function of the patent system is to encourage innovation by granting time-limited monopolies in exchange for public disclosure.” The DABUS application was therefore not compatible with this statement, “as an AI machine is unlikely to be motivated to innovate by the prospect of obtaining patent protection”.

There were some interesting comments in the conclusion of the report concerning future debate that is needed on this topic:

[…] inventions created by AI machines are likely to become more prevalent in future and there is a legitimate question as to how or whether the patent system should handle such inventions. I have found that the present system does not cater for such inventions and it was never anticipated that it would, but times have changed and technology has moved on. It is right that this is debated more widely and that any changes to the law be considered in the context of such a debate, and not shoehorned arbitrarily into existing legislation.

You can read the full UKIPO report here.

In its ruling of 27 January 2020, the European Patent Office (EPO) said:

[The patents] were refused by the EPO following oral proceedings with the applicant in November 2019, on the grounds that they do not meet the legal requirement of the European Patent Convention (EPC) that an inventor designated in the application has to be a human being, and not a machine.

In its decisions, the EPO considered that the interpretation of the legal framework of the European patent system leads to the conclusion that the inventor designated in a European patent must be a natural person. The Office further noted that the understanding of the term inventor as referring to a natural person appears to be an internationally applicable standard, and that various national courts have issued decisions to this effect.

Moreover, the designation of an inventor is mandatory as it bears a series of legal consequences, notably to ensure that the designated inventor is the legitimate one and that he or she can benefit from rights linked to this status. To exercise these rights, the inventor must have a legal personality that AI systems or machines do not enjoy.

Finally, giving a name to a machine is not sufficient to satisfy the requirements of the EPC mentioned above.

You can read the full article by the EPO detailing grounds for its decision to refuse the applications. There are also more details on the two specific patents:
Grounds for the EPO decision of 27 January 2020 on EP 18 275 163
Grounds for the EPO decision of 27 January 2020 on EP 18 275 174

The issue of AI as an inventor was discussed by our AIhub trustees earlier this year, and they also concluded that we shouldn’t (yet) be assigning patents to AI systems.




Lucy Smith is Senior Managing Editor for AIhub.



