
We risk a deluge of AI-written ‘science’ pushing corporate interests – here’s what to do about it


16 September 2025




Illustration: Nadia Piet & Archival Images of AI + AIxDESIGN / AI Am Over It / Licensed under CC-BY 4.0

By David Comerford, University of Stirling

Back in the 2000s, the American pharmaceutical firm Wyeth was sued by thousands of women who had developed breast cancer after taking its hormone replacement drugs. Court filings revealed the role of “dozens of ghostwritten reviews and commentaries published in medical journals and supplements being used to promote unproven benefits and downplay harms” related to the drugs.

Wyeth, which was taken over by Pfizer in 2009, had paid a medical communications firm to produce these articles, which were published under the bylines of leading doctors in the field (with their consent). Any medical professionals reading these articles and relying on them for prescription advice would have had no idea that Wyeth was behind them.

The pharmaceutical company insisted that everything written was scientifically accurate and – shockingly – that paying ghostwriters for such services was common in the industry. Pfizer ended up paying out more than US$1 billion (£744 million) in damages over the harms from the drugs.

The articles in question are an excellent example of “resmearch” – bullshit science in the service of corporate interests. While the overwhelming majority of researchers are motivated to uncover the truth and check their findings robustly, resmearch is unconcerned with truth – it seeks only to persuade.

We’ve seen numerous other examples in recent years, such as soft drinks companies and meat producers funding studies that are less likely than independent research to show links between their products and health risks.

A major current worry is that AI tools reduce the costs of producing such evidence to virtually zero. Just a few years ago it took months to produce a single paper. Now a single individual using AI can produce multiple papers that appear valid in a matter of hours.

The public health literature is already seeing a slew of papers that draw on datasets optimised for use with AI to report single-factor results. Single-factor results link one factor to a health outcome, such as a link between eating eggs and developing dementia.

These studies lend themselves to specious results. When datasets span thousands of people and hundreds of pieces of information about them, researchers will inevitably find misleading correlations that occur by chance.
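To see why, consider a minimal simulation (a hypothetical sketch in Python, using purely random data rather than any real health dataset): test enough unrelated factors against an outcome and some will look "significant" purely by chance.

```python
import numpy as np
from scipy import stats

# Hypothetical illustration: 5,000 "people", 300 unrelated "factors",
# and an outcome that is pure noise -- nothing is truly associated.
rng = np.random.default_rng(0)
n_people, n_factors = 5_000, 300
factors = rng.normal(size=(n_people, n_factors))
outcome = rng.normal(size=n_people)

# Correlate every factor with the outcome and count "significant" results.
false_positives = 0
for j in range(n_factors):
    r, p = stats.pearsonr(factors[:, j], outcome)
    if p < 0.05:
        false_positives += 1

# At the conventional 5% threshold, roughly 15 of the 300 tests
# will clear the bar even though every association here is noise.
print(f"{false_positives} of {n_factors} factors look significant at p < 0.05")
```

Run against real data with hundreds of recorded variables, the same arithmetic guarantees a steady supply of publishable-looking "links".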

A search of the leading academic databases Scopus and PubMed showed that an average of four single-factor studies were published per year between 2014 and 2021. In the first ten months of 2024 alone, a whopping 190 were published.

These weren’t necessarily motivated by corporate interests – some could, for example, be the result of academics looking to publish more material to boost their career prospects. The point is more that with AI facilitating these kinds of studies, they become an added temptation for businesses looking to promote products.

Incidentally, the UK has just given some businesses an additional motivation for producing this material. New government guidance says baby-food producers should only make marketing claims suggesting health benefits if those claims are supported by scientific evidence.

While well-intentioned, it will incentivise firms to find results showing that their products are healthy. This could increase their demand for the sort of AI-assisted “scientific evidence” that is ever more available.

Fixing the problem

One issue is that research does not always go through peer review prior to informing policy. In 2021, for example, US Supreme Court justice Samuel Alito, in an opinion on the right to carry a gun, cited a briefing paper by a Georgetown academic that presented survey data on gun use.

The academic and gun survey were funded by the Constitutional Defence Fund, which the New York Times describes as a “pro-gun nonprofit”.

Since the survey data are not publicly available and the academic has refused to answer questions about this, it is impossible to know whether his results are resmearch. Still, lawyers have referenced his paper in cases across the US to defend gun interests.

One obvious lesson is that anyone relying on research should be wary of any that has not passed peer review. A less obvious lesson is that we will need to reform peer review as well. There has been much discussion in recent years about the explosion in published research and the extent to which reviewers do their jobs properly.

Over the past decade or so, several groups of researchers have made meaningful progress in identifying procedures that reduce the risk of specious findings in published papers. Advances include getting authors to publish a research plan before doing any work (known as preregistration), then transparently reporting all the research steps taken in a study, and making sure reviewers check this is in order.

Also, for single-factor papers, a recent method called specification curve analysis comprehensively tests the robustness of the claimed relationship against alternative ways of slicing the data.
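As a rough sketch of the idea (a hypothetical Python example, not any particular journal's or author's implementation), a specification curve re-runs the same basic analysis under every defensible combination of choices – which control variables to include, say – and inspects the whole distribution of estimates rather than a single favourable one:

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset and column names, invented for illustration only.
rng = np.random.default_rng(1)
n = 2_000
df = pd.DataFrame({
    "factor": rng.normal(size=n),          # the single factor of interest
    "age": rng.integers(20, 80, size=n),   # candidate control variables
    "income": rng.normal(size=n),
    "smoker": rng.integers(0, 2, size=n),
})
df["outcome"] = 0.3 * (df["age"] / 80) + rng.normal(size=n)

controls = ["age", "income", "smoker"]
estimates = []

# One regression per subset of controls: 2**3 = 8 specifications in total.
for k in range(len(controls) + 1):
    for subset in itertools.combinations(controls, k):
        X = sm.add_constant(df[["factor", *subset]])
        fit = sm.OLS(df["outcome"], X).fit()
        estimates.append(fit.params["factor"])

# A genuinely robust effect keeps a similar sign and size across all
# specifications; an estimate that swings around zero is a warning sign.
print(pd.Series(estimates).describe())
```

Journals adopting this kind of check are asking, in effect, whether the headline estimate survives once the arbitrary analytical choices behind it are varied.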

Journal editors in many fields have adopted these proposals, and updated their rules in other ways too. They often now require authors to publish their data, their code and the survey or materials used in experiments (such as questionnaires, stimuli and so on). Authors also have to disclose conflicts of interest and funding sources.

Some journals have gone further. In response to the findings about AI-optimised datasets, for instance, some now require authors to cite all similar published secondary analyses and to disclose how AI was used in their work.

Some fields have definitely been more reformist than others. Psychology journals have, in my experience, gone further to adopt these processes than have economics journals.

For instance, a recent study applied additional robustness checks to analyses published in the top-tier American Economic Review. This suggested that studies published in the journal systematically overstated the strength of evidence contained within the data.

In general, the current system seems ill-equipped to cope with the deluge of papers that AI will precipitate. Reviewers need to invest time, effort and scrupulous attention checking preregistrations, specification curve analyses, data, code and so on.

This requires a peer-review mechanism that rewards reviewers for the quality of their reviews.

Public trust in science remains high worldwide. That is good for society because the scientific method is an impartial judge that promotes what is true and meaningful over what is popular or profitable.

Yet AI threatens to take us further from that ideal than ever. If science is to maintain its credibility, we urgently need to incentivise meaningful peer review.

David Comerford, Professor of Economics and Behavioural Science, University of Stirling

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.






