AIhub.org
 

Dataset reveals how Reddit communities are adapting to AI


25 April 2025




A watercolour illustration in two strong colours showing the silhouettes of four people, two of whom have dogs on leads. They all cast shadows, and vary between realistic representations and those formed by representations of algorithms, data points or networks. The people and their data become indistinguishable from each other. Image credit: Jamillah Knowles / Data People / Licensed by CC-BY 4.0

By Grace Stanley

Researchers at Cornell Tech have released a dataset extracted from more than 300,000 public Reddit communities, and a report detailing how Reddit communities are changing their policies to address a surge in AI-generated content.

The team collected metadata and community rules from the online communities, known as subreddits, during two periods: July 2023 and November 2024. The researchers will present a paper with their findings at the Association for Computing Machinery's CHI Conference on Human Factors in Computing Systems, held April 26 to May 1 in Yokohama, Japan.

One of the researchers’ most striking discoveries is the rapid increase in subreddits with rules governing AI use. According to the research, the number of subreddits with AI rules more than doubled in 16 months, from July 2023 to November 2024.
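The headline measurement, the growth in subreddits with AI rules between the two snapshots, can be sketched with a short script. The article does not specify the dataset's schema or how the researchers detected AI rules, so the record format, field names, and keyword heuristic below are illustrative assumptions, not the paper's actual method.

```python
import re

# Hypothetical snapshot records: each entry is a subreddit with the list
# of community rules captured at collection time (schema assumed here).
snapshot_2023 = [
    {"subreddit": "r/art",   "rules": ["No AI-generated images", "Be civil"]},
    {"subreddit": "r/cats",  "rules": ["Cats only"]},
    {"subreddit": "r/music", "rules": ["No spam"]},
]
snapshot_2024 = [
    {"subreddit": "r/art",   "rules": ["No AI-generated images", "Be civil"]},
    {"subreddit": "r/cats",  "rules": ["No AI art", "Cats only"]},
    {"subreddit": "r/music", "rules": ["No AI-generated covers", "No spam"]},
]

# Simple keyword heuristic for flagging AI-related rules (an assumption,
# not the classifier used in the study).
AI_PATTERN = re.compile(r"\b(AI|artificial intelligence)\b", re.IGNORECASE)

def count_ai_rule_subreddits(snapshot):
    """Count subreddits with at least one rule mentioning AI."""
    return sum(
        any(AI_PATTERN.search(rule) for rule in sub["rules"])
        for sub in snapshot
    )

before = count_ai_rule_subreddits(snapshot_2023)
after = count_ai_rule_subreddits(snapshot_2024)
print(f"Subreddits with AI rules: {before} -> {after}")
```

On the toy data above, the count goes from one subreddit to three between snapshots; on the real dataset, the equivalent comparison is what shows the number of subreddits with AI rules more than doubling over 16 months.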

“This is important because it demonstrates that AI concern is spreading in these communities. It raises the question of whether or not the communities have the tools they need to effectively and equitably enforce these policies,” said Travis Lloyd, a doctoral student at Cornell Tech and one of the researchers who initiated the project in 2023.

The study found that AI rules are most common in subreddits focused on art and celebrity topics. These communities often share visual content, and their rules frequently address concerns about the quality and authenticity of AI-generated images, audio and video. Larger subreddits were also significantly more likely to have these rules, reflecting growing concerns about AI among communities with larger user bases.

“This paper uses community rules to provide a first view of how our online communities are contending with the potential widespread disruption that is brought by generative AI,” said co-author Mor Naaman, professor at the Jacobs Technion-Cornell Institute at Cornell Tech, and of information science in the Cornell Ann S. Bowers College of Computing and Information Science. “Looking at actions of moderators and rule changes gave us a unique way to reflect on how different subreddits are impacted and are resisting, or not, the use of AI in their communities.”

As generative AI evolves, the researchers urge platform designers to prioritize the community concerns about quality and authenticity exposed in the data. The study also highlights the importance of “context-sensitive” platform design choices, which consider how different types of communities take varied approaches to regulating AI use.

For example, the research suggests that larger communities may be more inclined to use formal, explicit rules to maintain content quality and govern AI use. In contrast, closer-knit, more personal communities may rely on informal methods, such as social norms and expectations.

“The most successful platforms will be those that empower communities to develop and enforce their own context-sensitive norms about AI use. The most important thing is that platforms do not take a top-down approach that forces a single AI policy on all communities,” Lloyd said. “Communities need to be able to choose for themselves whether they want to allow the new technology, and platform designers should explore new moderation tools that can help communities detect the use of AI.”

By making their dataset public, the researchers aim to enable future studies that can further explore online community self-governance and the impact of AI on online interactions.




Cornell University

AIhub is supported by:



Subscribe to AIhub newsletter on substack









 














