AIhub.org
 

More than half of new articles on the internet are being written by AI


31 December 2025





By Francesco Agnellini, Binghamton University, State University of New York

The line between human and machine authorship is blurring: it has become increasingly difficult to tell whether something was written by a person or by AI.

Now, in what may seem like a tipping point, the digital marketing firm Graphite recently published a study showing that more than 50% of articles on the web are being generated by artificial intelligence.

As a scholar who explores how AI is built, how people are using it in their everyday lives, and how it’s affecting culture, I’ve thought a lot about what this technology can do and where it falls short.

If you’re more likely to read something written by AI than by a human on the internet, is it only a matter of time before human writing becomes obsolete? Or is this simply another technological development that humans will adapt to?

It isn’t all or nothing

Thinking about these questions reminded me of Umberto Eco’s essay “Apocalyptic and Integrated,” which was originally written in the early 1960s. Parts of it were later included in an anthology titled “Apocalypse Postponed,” which I first read as a college student in Italy.

In it, Eco draws a contrast between two attitudes toward mass media. There are the “apocalyptics” who fear cultural degradation and moral collapse. Then there are the “integrated” who champion new media technologies as a democratizing force for culture.

Back then, Eco was writing about the proliferation of TV and radio. Today, you’ll often see similar reactions to AI.

Yet Eco argued that both positions were too extreme. It isn’t helpful, he wrote, to see new media as either a dire threat or a miracle. Instead, he urged readers to look at how people and communities use these new tools, what risks and opportunities they create, and how they shape – and sometimes reinforce – power structures.

While I was teaching a course on deepfakes during the 2024 election, Eco’s lesson also came back to me. Those were days when some scholars and media outlets were regularly warning of an imminent “deepfake apocalypse.”

Would deepfakes be used to mimic major political figures and push targeted disinformation? What if, on the eve of an election, generative AI was used to mimic the voice of a candidate on a robocall telling voters to stay home?

Those fears weren’t groundless: Research shows that people aren’t especially good at identifying deepfakes. At the same time, they consistently overestimate their ability to do so.

In the end, though, the apocalypse was postponed. Post-election analyses found that deepfakes did seem to intensify some ongoing political trends, such as the erosion of trust and polarization, but there’s no evidence that they affected the final outcome of the election.

Listicles, news updates and how-to guides

Of course, the fears that AI raises for supporters of democracy are not the same as those it creates for writers and artists.

For them, the core concerns are about authorship: How can one person compete with a system trained on millions of voices that can produce text at hyper-speed? And if this becomes the norm, what will it do to creative work, both as an occupation and as a source of meaning?

It’s important to clarify what’s meant by “online content,” the phrase used in the Graphite study, which analyzed over 65,000 randomly selected articles of at least 100 words on the web. These can include anything from peer-reviewed research to promotional copy for miracle supplements.

A closer reading of the Graphite study shows that the AI-generated articles consist largely of general-interest writing: news updates, how-to guides, lifestyle posts, reviews and product explainers.

The primary economic purpose of this content is to persuade or inform, not to express originality or creativity. Put differently, AI appears to be most useful when the writing in question is low-stakes and formulaic: the weekend-in-Rome listicle, the standard cover letter, the text produced to market a business.

A whole industry of writers – mostly freelance, including many translators – has relied on precisely this kind of work, producing blog posts, how-to material, search engine optimization text and social media copy. The rapid adoption of large language models has already displaced many of the gigs that once sustained them.

Collaborating with AI

The dramatic loss of this work points toward another issue raised by the Graphite study: the question of authenticity, not only in identifying who or what produced a text, but also in understanding the value that humans attach to creative activity.

How can you distinguish a human-written article from a machine-generated one? And does that ability even matter?

Over time, that distinction is likely to grow less significant, particularly as more writing emerges from interactions between humans and AI. A writer might draft a few lines, let an AI expand them and then reshape that output into the final text.

This article is no exception. As a non-native English speaker, I often rely on AI to refine my language before sending drafts to an editor. At times the system attempts to reshape what I mean. But once its stylistic tendencies become familiar, it becomes possible to avoid them and maintain a personal tone.

Also, artificial intelligence is not entirely artificial, since it is trained on human-made material. It’s worth noting that even before AI, human writing was never entirely human, either. Every technology, from parchment and stylus to the typewriter and now AI, has shaped how people write and how readers make sense of it.

Another important point: AI models are increasingly trained on datasets that include not only human writing but also AI-generated and human–AI co-produced text.

This has raised concerns about their ability to continue improving over time. Some commentators have already described a sense of disillusionment following the release of newer large models, with companies struggling to deliver on their promises.

Human voices may matter even more

But what happens when people become overly reliant on AI in their writing?

Some studies show that writers may feel more creative when they use artificial intelligence for brainstorming, yet the range of ideas often becomes narrower. This uniformity affects style as well: These systems tend to pull users toward similar patterns of wording, which reduces the differences that usually mark an individual voice. Researchers also note a shift toward Western – and especially English-speaking – norms in the writing of people from other cultures, raising concerns about a new form of AI colonialism.

In this context, texts that display originality, voice and stylistic intention are likely to become even more meaningful within the media landscape, and they may play a crucial role in training the next generations of models.

If you set aside the more apocalyptic scenarios and assume that AI will continue to advance – perhaps at a slower pace than in the recent past – it’s quite possible that thoughtful, original, human-generated writing will become even more valuable.

Put another way: The work of writers, journalists and intellectuals will not become superfluous simply because much of the web is no longer written by humans.

Francesco Agnellini, Lecturer in Digital and Data Studies, Binghamton University, State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.




            AIhub is supported by:





 
