
A backlash against AI imagery in ads may have begun as brands promote ‘human-made’


17 December 2025




Image: Yutong Liu & Kingston School of Art / Talking to AI 2.0 / Licensed by CC-BY 4.0

By Paul Harrison, Deakin University

In a wave of new ads, brands like Heineken, Polaroid and Cadbury have started hating on artificial intelligence (AI), celebrating their work as “human-made”.

But in these advertising campaigns, running on TV, on New York street billboards and on social media, the companies are signalling something larger.

Even Apple’s new series release, Pluribus, includes the phrase “Made by Humans” in the closing credits.

Other brands including H&M and Guess have faced a backlash for using AI brand ambassadors instead of humans.

These gestures suggest we have reached a cultural moment in the evolution of this technology, where people are unsure what creativity means when machines can now produce much of what we see, hear and are perhaps even moved by.

This feels like efficiency – for executives

At a surface level, AI offers efficiencies such as faster production, cheaper visuals, instant personalisation, and automated decisions. Government and business have rushed toward it, drawn by promises of productivity and innovation. And there is no doubt that this promise is deeply seductive. Indeed, efficiency is what AI excels at.

In the context of marketing and advertising, this “promise”, at least at face value, seems to translate to smaller marketing budgets, better targeting, automated decisions (including by chatbots) and rapid deployment of ad campaigns.

For executives, this is exciting and feels like real progress, with cheaper, faster and more measurable brand campaigns.

But advertising has never really just been about efficiency. It has always relied on a degree of emotional truth and creative mystery. That psychological anchor – a belief that human intention sits behind what we are looking at – turns out to matter more than we like to admit.

Turns out, people care about authenticity

Indeed, people often value objects more when they believe those objects carry traces of a person’s intention or history. This is the case even when those objects don’t differ in any material way from a computer-generated equivalent.

To some degree, this signals consumers are sensitive to the presence of a human creator, because when visually compelling computer-generated images are labelled as machine-made, people tend to rate them less favourably.

Indeed, when the same paintings are randomly labelled as either “human created” or “AI created”, people consistently judge the works they believe to be “human created” as more beautiful, meaningful and profound.

It seems the simple presence of an AI label reduces the perceived creativity and value.

A betrayal of creativity

However, there is an important caveat here. These studies rely on people being told who made the work. The effect is a result of attribution, not perception. And so this limitation points towards a deeper problem.

If evaluations change purely because people believe a work was machine made, the response is not about quality, it is about meaning. It reflects a belief that creativity is tied to intention, effort and expression. These are qualities an algorithm doesn’t possess, even when it creates something visually persuasive. In other words, the label carries emotional weight.

The unnatural-looking social media post from the Queensland Symphony Orchestra upset fans. Image: Queensland Symphony Orchestra on Facebook

There are, of course, obvious examples of when AI goes comedically wrong. In early 2024, the Queensland Symphony Orchestra promoted its brand using a very strange AI-generated image most people instantly recognised as unnatural. Part of the backlash, along with the unsettling weirdness of the image, was the perception that an arts organisation was betraying human creativity.

But as AI systems improve, people often struggle to distinguish synthetic from real. Indeed, AI-generated faces are judged by many to be just as real as, and sometimes more trustworthy than, actual photographs.

Research shows people overestimate their ability to detect deepfakes, and often mistake deepfake videos as authentic.

Although we can see emerging patterns here, the empirical research in this area is being outpaced by AI’s evolving capabilities. So we are often trying to understand psychological responses to a technology that has already evolved since the research took place.

As AI becomes more sophisticated, the boundary between human and machine-made creativity will become harder to perceive. Commerce may not be particularly troubled by this. If the output performs well, the question of origin becomes secondary.

Why we value creativity

But creative work has never been only about generating content. It is a way for people to express emotion, experience, memory, dissent and interpretation.

And perhaps this is why the rise of “Made by Humans” actually matters. Marketers are not simply selling provenance, they are responding to a deeper cultural anxiety about authorship in a moment when the boundaries of creativity are becoming harder to perceive.

Indeed, one could argue there is an ironic tension here. Marketing is one of the professions most exposed to being superseded by the same technology marketers are now trying to differentiate themselves from.

So whether these human-made claims are a commercial tactic or a sincere defence of creative intention, there is significantly more at stake than just another way to drive sales.

Paul Harrison, Director, Master of Business Administration Program (MBA); Co-Director, Better Consumption Lab, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.



