AIhub.org
 

How will generative artificial intelligence affect political advertising in 2024?


22 March 2024




Illinois advertising professor Michelle Nelson says voters should expect to see a lot more generative AI in political ads during the 2024 election cycle, warning that it may be difficult, if not impossible, to tell what’s real and what’s fake. Photo credit: L. Brian Stauffer

By Lois Yoksoulian

It’s estimated that $12 billion will be spent on political ads this U.S. election cycle – 30% more than in 2020. The sheer volume of ads is remarkable, and there is vast potential to use this political information to contribute to democracy: to reach more potential voters and provide accurate information. But there’s also more potential than ever for generative artificial intelligence to misrepresent candidates and policies, leading to confusion in the voting booth. News Bureau editor Lois Yoksoulian spoke with advertising professor and department head Michelle Nelson about the topic.

How is AI used in political ads, and what are some recent examples of AI-generated political ads?

After President Joe Biden announced his re-election bid, the Republican National Committee released an AI-generated political advertisement that depicted a dystopian world should Biden be re-elected. The video ad features a number of “what if” questions (e.g., what if crime worsens?) paired with AI-generated images. This is a classic fear-appeal negative attack ad. Emotional ads reach us, and we remember them – like the famous Daisy ad from the 1964 election cycle or the Willie Horton ad from 1988. But they may also draw backlash for being too negative or too manipulative, especially because they use AI.

Some say that RNC ad is the first entirely AI-generated political ad, though that’s hard to prove. In the top left corner of the ad, you can see a disclosure in small font: “Built entirely with AI imagery.” The video description on the GOP’s YouTube page says, “An AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024.” So they are not hiding the use of AI. There’s also a spoof ad that ran on The Daily Show, which uses an AI-generated voiceover of Biden. It’s easy these days to use AI tools to create content and make it look quite real.

Are the entities behind these types of ads required to disclose that AI has been used?

The Federal Election Commission, the governmental body that regulates political communication, including advertising, in the U.S., sought public comments in 2023 on amending a regulation related to the use of deliberately deceptive AI in campaign ads (REG 2023-02, Artificial Intelligence in Campaign Ads). It has not yet made a ruling. However, states such as California, Texas, Michigan, Washington and Minnesota have enacted laws requiring that deepfake AI images, video or audio be labeled. A similar law is being discussed here in Illinois. It’s not yet clear how these state laws square with the Constitution.

Some tech companies have started requiring disclosures. For example, Google indicated that all AI-generated political ads shown on YouTube or other Google platforms “that alter people or events” must have a prominent disclosure. There is also a verification process for political ads in some regions.

Microsoft has started its “Content Credentials as a Service,” which works like a watermarking tool: it lets users digitally sign and authenticate media, whether an image or a video, using the standard from the Coalition for Content Provenance and Authenticity (C2PA). The idea is that the watermark serves as metadata, so users can see the history of the content – for example, “how, when, and by whom the content was created or edited,” whether by a person or by AI. This information stays with the content, so anyone can see its origin.
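To make that concrete, here is a minimal Python sketch of the general idea: a provenance record bound to a file’s cryptographic hash, so that any later alteration becomes detectable. The field names and the plain, unsigned JSON are illustrative simplifications – this is not the actual C2PA manifest schema or Microsoft’s API.

    # Illustrative provenance record in the spirit of Content Credentials / C2PA.
    # Field names and the lack of a real digital signature are simplifications.
    import hashlib
    import json
    from datetime import datetime, timezone

    def make_manifest(media_bytes: bytes, creator: str, tool: str) -> dict:
        """Build a simplified provenance record bound to the media's hash."""
        return {
            "content_hash": hashlib.sha256(media_bytes).hexdigest(),
            "created_by": creator,            # person or organization
            "generator": tool,                # e.g., a generative AI model
            "created_at": datetime.now(timezone.utc).isoformat(),
            "edit_history": [],               # appended to on each edit
        }

    def verify(media_bytes: bytes, manifest: dict) -> bool:
        """Check that the media still matches the hash recorded at creation."""
        return hashlib.sha256(media_bytes).hexdigest() == manifest["content_hash"]

    image = b"...raw image bytes..."
    manifest = make_manifest(image, creator="Example Campaign", tool="text-to-image model")
    print(json.dumps(manifest, indent=2))
    print("unaltered:", verify(image, manifest))        # True
    print("tampered:", verify(image + b"x", manifest))  # False

In the real system, the manifest is cryptographically signed and embedded in the media file itself, so the history travels with the content rather than sitting alongside it.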

To what extent AI-generated political content is actually being watermarked or disclosed, I have no idea. Policing all of these images seems difficult at the moment.

What is the potential harm of using AI in political ads?

The concerns about harm relate to the possibilities for deepfakes and disinformation. Others can use AI to re-create someone’s voice or image without authorization, creating stories and content that are absolutely untrue.

We’ve had tools and technology in the past, like Photoshop, to manipulate existing images. Do you remember the 2004 ads featuring altered photos depicting John Kerry, the Democratic nominee for president, and Jane Fonda at an anti-Vietnam War rally? Ads are regularly embellished or altered: the colors or lighting are changed to make people look ominous or more youthful. But this is different. This is a whole new level. Anyone can make AI messages using free tools such as OpenAI’s ChatGPT or Google’s Bard. That may be good when used responsibly, or it may be harmful.

Our research shows that many people don’t know that the First Amendment protects political advertising in the U.S. Political advertising is not held to the same standards as commercial advertising – say, for a cheeseburger. The Federal Trade Commission’s truth-in-advertising laws do not apply to political ads. That means political ads can legally be deceptive, or even lie outright. Of course, there are libel laws, but they are rarely used in this arena.

Beyond this regulatory difference between commercial and political advertising, the idea is that we want free-flowing political information to contribute to a dialogue that can enhance our democracy. Too much regulation can have a chilling effect on political speech. Yet AI’s potential to lie and deceive – and to deploy those messages in a personalized manner and at grand scale – perhaps challenges our thinking about the regulation of political advertising.

Do you have any advice on how consumers can spot the use of generative AI in political ads?

They can’t. You can look for irregularities – like the wrong number of legs on a cat – but it is getting harder and harder to tell whether an image or video was made by AI. Whether it’s Taylor Swift, an image of Trump in jail or a funny meme, we can’t always tell, especially as we scroll through our social media feeds. The best we can do is hope that the tech industry develops systems able to trace the origin of content, so we can see who made an image, and when and how. If this tech is built in, it’s harder to fool people. We could also encourage disclosure mandates among tech companies.

But in the age of this kind of media and in an election year, we need to focus on the basics of media literacy – looking for reliable sources of information, watching the political debates and trying to form our own informed opinions.




University of Illinois



