Dataset reveals how Reddit communities are adapting to AI


25 April 2025




Image: Jamillah Knowles / Data People / Licensed under CC-BY 4.0

By Grace Stanley

Researchers at Cornell Tech have released a dataset extracted from more than 300,000 public Reddit communities, and a report detailing how Reddit communities are changing their policies to address a surge in AI-generated content.

The team collected metadata and community rules from the online communities, known as subreddits, during two periods: July 2023 and November 2024. The researchers will present a paper with their findings at the Association for Computing Machinery's CHI Conference on Human Factors in Computing Systems, being held April 26 to May 1 in Yokohama, Japan.
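
The article does not say which tooling the team used to build the dataset, but a minimal sketch of how subreddit metadata and rules can be snapshotted from Reddit's public API, here using the third-party PRAW library purely as an illustrative assumption, might look like this:

```python
# Hypothetical sketch: snapshotting a subreddit's metadata and rules.
# The study's actual collection pipeline is not described in this article;
# the PRAW library, field names and credentials below are illustrative assumptions.
import datetime
import json

import praw  # third-party Reddit API wrapper

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="subreddit-rules-snapshot/0.1",
)

def snapshot(subreddit_name: str) -> dict:
    """Return one dated snapshot of a subreddit's metadata and rule texts."""
    sub = reddit.subreddit(subreddit_name)
    return {
        "subreddit": sub.display_name,
        "subscribers": sub.subscribers,
        "collected_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "rules": [
            {"short_name": rule.short_name, "description": rule.description}
            for rule in sub.rules
        ],
    }

if __name__ == "__main__":
    print(json.dumps(snapshot("AskReddit"), indent=2))
```

Repeating such a crawl at two points in time yields the kind of paired snapshots the researchers compared.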

One of the researchers’ most striking discoveries is the rapid increase in subreddits with rules governing AI use. According to the research, the number of subreddits with AI rules more than doubled in 16 months, from July 2023 to November 2024.
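
How a rule gets counted as an "AI rule" is not detailed in the article; the sketch below uses a simple keyword heuristic over two rule snapshots as a stand-in for whatever classification the authors actually performed, purely to illustrate the before-and-after comparison.

```python
# Hypothetical sketch: flagging AI-related rules with a keyword heuristic and
# comparing two snapshots. This is not the authors' classification method.
import re

AI_PATTERN = re.compile(
    r"\b(ai|artificial intelligence|chatgpt|gpt|midjourney|stable diffusion)\b",
    re.IGNORECASE,
)

def has_ai_rule(subreddit_snapshot: dict) -> bool:
    """True if any rule in the snapshot mentions an AI-related term."""
    return any(
        AI_PATTERN.search(f"{rule['short_name']} {rule['description']}")
        for rule in subreddit_snapshot["rules"]
    )

def count_subreddits_with_ai_rules(snapshots: list) -> int:
    return sum(has_ai_rule(s) for s in snapshots)

# Toy data standing in for the July 2023 and November 2024 crawls.
july_2023 = [{"rules": [{"short_name": "Be civil", "description": ""}]}]
nov_2024 = [{"rules": [{"short_name": "No AI art",
                        "description": "AI-generated images are not allowed."}]}]
print(count_subreddits_with_ai_rules(july_2023),  # -> 0
      count_subreddits_with_ai_rules(nov_2024))   # -> 1
```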

“This is important because it demonstrates that AI concern is spreading in these communities. It raises the question of whether or not the communities have the tools they need to effectively and equitably enforce these policies,” said Travis Lloyd, a doctoral student at Cornell Tech and one of the researchers who initiated the project in 2023.

The study found that AI rules are most common in subreddits focused on art and celebrity topics. These communities often share visual content, and their rules frequently address concerns about the quality and authenticity of AI-generated images, audio and video. Larger subreddits were also significantly more likely to have these rules, reflecting growing concerns about AI among communities with larger user bases.

“This paper uses community rules to provide a first view of how our online communities are contending with the potential widespread disruption that is brought by generative AI,” said co-author Mor Naaman, professor at the Jacobs Technion-Cornell Institute at Cornell Tech, and of information science in the Cornell Ann S. Bowers College of Computing and Information Science. “Looking at actions of moderators and rule changes gave us a unique way to reflect on how different subreddits are impacted and are resisting, or not, the use of AI in their communities.”

As generative AI evolves, the researchers urge platform designers to prioritize the community concerns about quality and authenticity exposed in the data. The study also highlights the importance of “context-sensitive” platform design choices, which consider how different types of communities take varied approaches to regulating AI use.

For example, the research suggests that larger communities may be more inclined to use formal, explicit rules to maintain content quality and govern AI use. In contrast, closer-knit, more personal communities may rely on informal methods, such as social norms and expectations.

“The most successful platforms will be those that empower communities to develop and enforce their own context-sensitive norms about AI use. The most important thing is that platforms do not take a top-down approach that forces a single AI policy on all communities,” Lloyd said. “Communities need to be able to choose for themselves whether they want to allow the new technology, and platform designers should explore new moderation tools that can help communities detect the use of AI.”

By making their dataset public, the researchers aim to enable future studies that can further explore online community self-governance and the impact of AI on online interactions.




Cornell University
