Dataset reveals how Reddit communities are adapting to AI


25 April 2025




Image: Jamillah Knowles / Data People / Licensed by CC-BY 4.0

By Grace Stanley

Researchers at Cornell Tech have released a dataset extracted from more than 300,000 public Reddit communities, and a report detailing how Reddit communities are changing their policies to address a surge in AI-generated content.

The team collected metadata and community rules from the online communities, known as subreddits, during two periods: July 2023 and November 2024. The researchers will present a paper with their findings at the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems, being held April 26 to May 1 in Yokohama, Japan.

One of the researchers’ most striking discoveries is the rapid increase in subreddits with rules governing AI use. According to the research, the number of subreddits with AI rules more than doubled in 16 months, from July 2023 to November 2024.
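For readers who download the public dataset, the sketch below illustrates one rough way a count like this could be reproduced: scan each snapshot's rule text for AI-related keywords and tally the distinct subreddits that match. The file names, column names ("subreddit", "rule_text"), and keyword heuristic are illustrative assumptions, not the released dataset's actual schema or the authors' classification method.

import re
import pandas as pd

# Crude proxy for "has an AI rule": word-boundary keyword match on rule text.
AI_PATTERN = r"\b(ai|artificial intelligence|chatgpt|generative)\b"

def subreddits_with_ai_rules(path: str) -> int:
    # Assumed schema: one row per community rule,
    # with "subreddit" and "rule_text" columns.
    rules = pd.read_csv(path)
    has_ai_term = rules["rule_text"].fillna("").str.contains(
        AI_PATTERN, case=False, regex=True
    )
    return rules.loc[has_ai_term, "subreddit"].nunique()

# Hypothetical file names for the two snapshots.
july_2023 = subreddits_with_ai_rules("rules_2023-07.csv")
nov_2024 = subreddits_with_ai_rules("rules_2024-11.csv")
print(f"July 2023: {july_2023}  November 2024: {nov_2024}  "
      f"({nov_2024 / july_2023:.1f}x growth)")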

“This is important because it demonstrates that AI concern is spreading in these communities. It raises the question of whether or not the communities have the tools they need to effectively and equitably enforce these policies,” said Travis Lloyd, a doctoral student at Cornell Tech and one of the researchers who initiated the project in 2023.

The study found that AI rules are most common in subreddits focused on art and celebrity topics. These communities often share visual content, and their rules frequently address concerns about the quality and authenticity of AI-generated images, audio and video. Larger subreddits were also significantly more likely to have these rules, reflecting growing concerns about AI among communities with larger user bases.

“This paper uses community rules to provide a first view of how our online communities are contending with the potential widespread disruption that is brought by generative AI,” said co-author Mor Naaman, professor at the Jacobs Technion-Cornell Institute at Cornell Tech, and of information science in the Cornell Ann S. Bowers College of Computing and Information Science. “Looking at actions of moderators and rule changes gave us a unique way to reflect on how different subreddits are impacted and are resisting, or not, the use of AI in their communities.”

As generative AI evolves, the researchers urge platform designers to prioritize the community concerns about quality and authenticity exposed in the data. The study also highlights the importance of “context-sensitive” platform design choices, which consider how different types of communities take varied approaches to regulating AI use.

For example, the research suggests that larger communities may be more inclined to use formal, explicit rules to maintain content quality and govern AI use. In contrast, closer-knit, more personal communities may rely on informal methods, such as social norms and expectations.

“The most successful platforms will be those that empower communities to develop and enforce their own context-sensitive norms about AI use. The most important thing is that platforms do not take a top-down approach that forces a single AI policy on all communities,” Lloyd said. “Communities need to be able to choose for themselves whether they want to allow the new technology, and platform designers should explore new moderation tools that can help communities detect the use of AI.”

By making their dataset public, the researchers aim to enable future studies that can further explore online community self-governance and the impact of AI on online interactions.




Cornell University





 
