AIhub.org
 

Why AI can’t take over creative writing


22 April 2025

By David Poole, University of British Columbia

In 1948, the founder of information theory, Claude Shannon, proposed modelling language in terms of the probability of the next word in a sentence given the previous words. These types of probabilistic language models were largely derided, most famously by linguist Noam Chomsky: “The notion of ‘probability of a sentence’ is an entirely useless one.”

In 2022, 74 years after Shannon’s proposal, ChatGPT appeared, which caught the attention of the public, with some even suggesting it was a gateway to super-human intelligence. Going from Shannon’s proposal to ChatGPT took so long because the amount of data and computing time used was unimaginable even a few years before.

ChatGPT is a large language model (LLM) learned from a huge corpus of text from the internet. It predicts the probability of the next word given the context: a prompt and the previously generated words.

ChatGPT uses this model to generate language by choosing the next word according to the probabilistic prediction. Think about drawing words from a hat, where the words predicted to have a higher probability have more copies in the hat. ChatGPT produces text that seems intelligent.
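The "drawing words from a hat" idea can be sketched in a few lines of Python. This is a toy illustration, not how ChatGPT is implemented: the word list and probabilities below are made up, whereas a real LLM computes a distribution over its whole vocabulary from the context.

```python
import random

# Hypothetical next-word distribution, as if a language model had
# predicted it for the context "The cat sat on the". The numbers
# are invented for illustration.
next_word_probs = {
    "mat": 0.5,
    "floor": 0.3,
    "keyboard": 0.15,
    "moon": 0.05,
}

def sample_next_word(probs, rng=random):
    """Draw one word: like pulling from a hat in which
    higher-probability words have more copies."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Generating text is just repeating this draw, appending each chosen
# word to the context so the model can predict the next one.
print(sample_next_word(next_word_probs))
```

Run repeatedly, "mat" comes out about half the time and "moon" rarely, which is why the output usually looks plausible but is never guaranteed to be the same twice.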

There is a lot of controversy about how these tools can help or hinder learning and practising creative writing. As a professor of computer science who has authored hundreds of works on artificial intelligence (AI), including AI textbooks that cover the social impact of large language models, I think understanding how the models work can help writers and educators consider the limitations and potential uses of AI for what might be called “creative” writing.

LLMs as parrots or plagiarists

It’s important to distinguish between “creativity” by the LLM and creativity by a human. For people who had low expectations of what a computer could generate, it’s been easy to assign creativity to the computer. Others were more skeptical. Cognitive scientist Douglas Hofstadter saw “a mind-boggling hollowness hidden just beneath its flashy surface.”

Linguist Emily Bender and colleagues described the language models as stochastic parrots, meaning they repeat, with added randomness, what is in the data they were trained on. To understand this, consider why a particular word was generated. It’s because it has a relatively high probability, and it has a high probability because a lot of text in the training corpus used that word in similar contexts.

Selecting a word according to the probability distribution is like selecting text with a similar context and using its next word. Generating text from LLMs can be seen as plagiarism, one word at a time.

The creativity of a human

Consider the creativity of a human who has ideas they want to convey. With generative AI, they put their ideas into a prompt and the AI will produce text (or images or sounds). If someone doesn’t care what is generated, it doesn’t really matter what they use as a prompt. But what if they do care about what is generated?

An LLM tries to generate what a random person who had written the previous text would produce. Most creative writers do not want what a random person would write. They want to use their own creativity, and may want a tool to produce what they would write if they had the time to write it.

LLMs don’t typically have a large corpus of what a particular author has written to learn from. The author will undoubtedly want to produce something different. If the output is expected to be more detailed than the input, the LLM has to make up details. These may or may not be what the writer intended.

Some positive uses of LLMs for creative writing

Writing is like software development: Given an idea of what is wanted, software developers produce code (text in a computer language) analogously to how writers produce text in a natural language. LLMs treat writing code and writing natural language text the same way; the corpus each LLM is trained on contains both natural language and code. What’s produced depends on the context.

Writers can learn from the experience of software developers. LLMs are good for small projects that have been done previously by many other people, such as database queries or writing standard letters. They are also useful for parts of larger projects, such as a pop-up box in a graphical user interface.

If programmers want to use them for bigger projects, they need to be prepared to generate multiple outputs and edit the one that is closest to what is intended. The problem in software development has always been specifying exactly what is wanted; coding is the easy part.

Generating good prompts

Generating good prompts has been promoted as an art form called “prompt engineering.” Proponents of prompt engineering have suggested multiple techniques that improve the output of current LLMs, such as asking for an outline first and then asking for the text based on the original prompt augmented with that outline.

Another is to ask the LLM to show its reasoning steps, in what is called chain of thought. The LLM doesn’t just output the answer to a question; it also spells out the steps that could be taken to answer it, and then uses those steps as part of its prompt to arrive at a final answer.

Such advice is bound to be ephemeral. If some prompt-engineering technique works, it will be incorporated into a future release of the LLM, so that the effect happens without the need for the explicit use of the technique. Recent models that claim to reason have incorporated such step-by-step prompts.

People want to believe

Computer scientist Joseph Weizenbaum, describing his ELIZA program written in 1964–66, said: “I was startled to see how quickly and how very deeply people conversing with (the program) became emotionally involved with the computer and how unequivocally they anthropomorphized it.” The tools have changed, but people still want to believe.

In this age of misinformation, it is important for everyone to have a way to judge the often self-serving hype.

There is no magic in generative AI, but there is lots of data from which to predict what someone could write. I hope that creativity is more than regurgitating what others have written.

David Poole, Professor Emeritus of Computer Science, University of British Columbia

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.




            AIhub is supported by:



©2025.05 - Association for the Understanding of Artificial Intelligence