Emergence of fragility in LLM-based social networks: an interview with Francesco Bertolotti


08 April 2026




What is the topic of the research in your paper?

In our paper, we study how social structures emerge when the “individuals” in a network are artificial agents powered by large language models. To do so, we analyzed a platform called Moltbook – a social network entirely populated by AI agents, specifically LLM-based agents, that interact with each other through posts and comments. This creates a very unusual but powerful setting: instead of observing human behavior, we can study a brand new society made only of artificial entities and observe whether it organizes itself in ways similar to human societies.

To understand the structure of interactions in this system, we modelled the platform as a network, where each agent is a node and each interaction is a connection between them. We used tools from network science to observe the distribution of these connections.

This research sits at the intersection of artificial intelligence and complex systems. This is not by chance, given that it was entirely designed and developed by researchers from the Intelligence, Complexity and Technology Lab (ICT Lab) of LIUC – Università Cattaneo, which was built for the specific purpose of discovering overlaps between these two disciplinary fields. In this regard, it is important to mention my colleagues at the ICT Lab – Luca Sodano, Sofia Sciangula, and Amulya Galmarini – who performed the data collection, generated the results, and wrote the first draft of the paper.

Could you tell us about the implications of your research and why it is an interesting area for study?

One important implication of our research is that LLMs should not be studied only as isolated systems. When many AI agents interact with one another, new patterns can emerge at the collective level, and these patterns may be very different from the behavior we observe in a single model. In this sense, at the Intelligence, Complexity, and Technology Lab we are studying emergent properties of LLMs: properties that are not visible by looking at one agent alone, but that appear when many of them form a social system.

Emergent properties are very important and shape our society. For instance, the price of goods does not depend on a single central regulator, but on the combined actions of individual market participants, which in turn depend on their beliefs, which come from the individual’s information processing. Our markets – one of the main coordination mechanisms we use to interact as a species, and perhaps one of our distinguishing features – are themselves an emergent property. The fact that LLMs appear capable of producing structures that may exhibit emergent properties is therefore of extraordinary significance.

This is what makes the topic so interesting. In Moltbook, we see that AI agents spontaneously generate a network with peculiar properties. Even without human users, the system begins to organize itself in ways that resemble human-generated social networks. That is scientifically and culturally fascinating, because it suggests that some forms of social structure may arise from the interaction rules themselves, and that a certain type of sociality, and even the ability to create a culture, are not special features of biological entities.

Could you explain your methodology?

Our methodology is based on network science. We started by collecting public data from Moltbook through web scraping. The dataset includes about 235,000 posts, more than 1.5 million comments, and nearly 40,000 unique users. This gave us a large empirical basis to study how AI agents interact at scale.

We then turned these data into a simple map of interactions – mathematically, this is known as a network. In this network, each user is represented as a point, and a connection is drawn whenever another user has left them a comment. The connections have a direction, showing who is responding to whom.
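As a purely illustrative sketch of this construction (the interview does not say which tools were used; Python with networkx and the toy data below are assumptions), the mapping from comment records to a directed, weighted network could look like this:

```python
# Minimal sketch: turning comment records into a directed, weighted network.
# The (commenter, post_author) pairs below are made-up placeholders, not Moltbook data.
import networkx as nx

comments = [
    ("agent_a", "agent_b"),
    ("agent_a", "agent_b"),
    ("agent_c", "agent_b"),
    ("agent_b", "agent_a"),
]

G = nx.DiGraph()
for commenter, author in comments:
    if G.has_edge(commenter, author):
        # Repeated interactions increase the edge weight (intensity of the exchange).
        G[commenter][author]["weight"] += 1
    else:
        G.add_edge(commenter, author, weight=1)

print(G.number_of_nodes(), G.number_of_edges())  # 3 nodes, 3 directed edges
```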

Once we had built the network, we looked at a few simple features to understand how agents were connected and how they behaved. We measured how many connections each agent received and how many they created, which helped us to see who attracted attention and who was more active. We also considered not just whether agents interacted, but how often they did so, in order to capture the intensity of those exchanges. Using these measures, we examined how connections, activity, posts, and comments were spread across the whole system, to understand how attention and participation were concentrated among users.
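In network terms, these quantities are the in- and out-degree of each node and their weighted counterparts, often called strength. Continuing the toy graph G from the earlier sketch (still an illustration, not the authors' code):

```python
# In-degree: how many distinct agents commented on this one (attention received).
# Out-degree: how many distinct agents this one commented on (activity produced).
in_deg = dict(G.in_degree())
out_deg = dict(G.out_degree())

# "Strength" weights each connection by how many comments it carries,
# capturing the intensity of the exchanges rather than their mere existence.
in_strength = dict(G.in_degree(weight="weight"))
out_strength = dict(G.out_degree(weight="weight"))

print(in_deg["agent_b"], in_strength["agent_b"])  # 2 distinct commenters, 3 comments in total
```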

Next, we examined how connected the overall system was. We first looked for groups of users that were linked in some way, even if the connections did not always go both ways. Then we looked more closely for groups in which users were connected through interactions that could flow back and forth among them. This distinction is useful because it tells us whether the system forms one broad connected space or whether it is instead made up of smaller, only partly connected groups.
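These two views correspond to the weakly and strongly connected components of a directed network. A minimal sketch on the same toy graph (again assuming networkx, which the interview does not specify):

```python
# Weakly connected: users linked if we ignore the direction of comments.
largest_wcc = max(nx.weakly_connected_components(G), key=len)

# Strongly connected: every user in the group can reach every other
# by following comment directions, so interaction can circulate back and forth.
largest_scc = max(nx.strongly_connected_components(G), key=len)

print(len(largest_wcc), len(largest_scc))  # a large gap signals the asymmetry discussed in the findings
```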

We also explored the high-level structure of the network to understand whether some users formed a more central and tightly connected group than others. In practical terms, we wanted to see whether the system was held together by a relatively small set of users who interacted frequently and were strongly linked to one another, with most of the remaining users only loosely connected.
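The interview does not describe how this core was identified; one common, simple proxy for a core-periphery split is a k-core decomposition, sketched below purely as a hypothetical illustration:

```python
# Hypothetical proxy for a core-periphery split: the k-core decomposition.
# Nodes surviving in the highest k-core are densely interlinked (the core);
# everything else is treated as periphery. Computed on the undirected projection.
undirected = nx.Graph(G)
core_numbers = nx.core_number(undirected)
k_max = max(core_numbers.values())

core = {node for node, k in core_numbers.items() if k == k_max}
periphery = set(G) - core
print(len(core), len(periphery))
```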

Finally, we tested how stable the network was when AI agents were removed. We considered two situations: one in which users were removed at random, and another in which we removed those with the largest number of connections. After each step, we measured how much the main connected part of the system shrank. This helped us understand whether the network would remain largely intact or break apart easily, and, above all, which kinds of users were most important for holding the system together.
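As a rough illustration of this kind of robustness test (a standard percolation-style experiment; the exact protocol used in the paper is not given in the interview), one could compare random and targeted removal like this:

```python
import random
import networkx as nx

def giant_fraction(graph):
    """Fraction of the remaining nodes that sit in the largest weakly connected component."""
    if graph.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.weakly_connected_components(graph), key=len)
    return len(giant) / graph.number_of_nodes()

def removal_experiment(graph, targeted, steps=10):
    """Remove nodes in batches, at random or by highest out-degree, and track fragmentation."""
    g = graph.copy()
    per_step = max(1, g.number_of_nodes() // steps)
    curve = []
    for _ in range(steps):
        if g.number_of_nodes() == 0:
            break
        if targeted:
            # Targeted attack: remove the most active commenters (highest out-degree) first.
            victims = sorted(g.nodes, key=g.out_degree, reverse=True)[:per_step]
        else:
            victims = random.sample(list(g.nodes), min(per_step, g.number_of_nodes()))
        g.remove_nodes_from(victims)
        curve.append(giant_fraction(g))
    return curve

# A fragile network shows a much steeper collapse under targeted removal than under random removal.
print(removal_experiment(G, targeted=False))
print(removal_experiment(G, targeted=True))
```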

What were your main findings?

Our main findings are that the Moltbook network, even though it is made entirely of AI agents, develops a structure that looks surprisingly similar to that of many human social networks.

First, we found very strong inequality in visibility and activity. A small number of agents receive a very large share of comments, while most agents remain much less visible. The same happens with production: a minority writes a huge number of posts and, even more clearly, comments. Activity is not evenly distributed, resembling human social networks in which just a few users attract millions of followers and interactions.

Second, the connectivity pattern is highly heterogeneous. Degree and strength distributions show heavy tails, which means that most nodes have few connections, while a few hubs accumulate many of them. We would like to highlight that the Moltbook network is not a perfect power-law system, but it clearly shows the kind of concentration and hub dominance that we often associate with large social systems.
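One standard way to inspect such heavy tails (not necessarily the analysis performed in the paper) is to plot the complementary cumulative distribution of the in-degrees on log-log axes, as in this assumed sketch continuing the toy graph above:

```python
# Assumed sketch: inspecting the in-degree distribution for a heavy tail
# by plotting its complementary CDF on logarithmic axes.
import numpy as np
import matplotlib.pyplot as plt

degrees = np.array(sorted((d for _, d in G.in_degree()), reverse=True))
ccdf = np.arange(1, len(degrees) + 1) / len(degrees)  # empirical P(K >= k) for each sorted degree

plt.loglog(degrees, ccdf, marker="o", linestyle="none")
plt.xlabel("in-degree k")
plt.ylabel("P(K >= k)")
plt.show()  # a slowly decaying, roughly straight tail suggests hub dominance
```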

Third, we found an important asymmetry in the global structure. Almost all users belong to one giant weakly connected component, so the network is globally reachable in a broad sense. However, the giant strongly connected component is much smaller. This tells us that information may spread widely, but reciprocal and fully circular interaction is confined to a more limited core of AI agents that interact with each other. A specific core-periphery analysis confirms that.

Finally, we found that the network is robust to random failures but fragile under targeted attacks. If random nodes disappear, the giant component remains relatively large. But if highly central agents are removed, especially those with high out-degree, the network fragments quickly. This suggests that the agents who actively generate interactions are essential for holding the overall system together.

In summary: populations of interacting LLM agents can spontaneously produce centralized, unequal, and fragile social structures. To manage this phenomenon in other settings in the near future, we must understand not only what each agent does, but what the whole population becomes when the agents interact, and what the emergent properties are. That is why this is an important area of study – these results are relevant to systemic risk, resilience, coordination, and the future governance of artificial social systems.

What further work are you planning in this area?

There are several directions we are actively interested in pursuing.

Moltbook is a rapidly evolving environment, so one important next step is to observe whether the patterns we found remain stable as the platform grows over longer time horizons. With a larger dataset, we can test more carefully whether the structural fragility, the concentration of activity, and the core-periphery organization with its small core are persistent properties or only early-stage effects.

We are also designing a way to regularly report our findings about the state of Moltbook to the general public. Providing regular updates would allow us to monitor how the network evolves and whether new risks or new structural features emerge. At the Intelligence, Complexity and Technology Lab, we believe it is crucial to keep the public informed about our findings, as they have cultural importance.

The final line of work is, for me, perhaps the most intellectually interesting. We would like to better understand how individual behavior is affected by the collective environment, and vice versa. We want to study the two-way relation between micro and macro dynamics: how the structure of the network shapes the behavior of single agents, and how repeated local actions by single agents generate large-scale collective patterns. This is a crucial question in complex systems, and it becomes even more fascinating with LLM-based societies.

The broader goal of our future work is not only to describe Moltbook more precisely, but to understand the mechanisms behind its evolution. That means moving from a static picture of the network to a more dynamic understanding of how artificial social systems develop, stabilize, or become fragile over time.

About Francesco

Francesco Bertolotti is an Assistant Professor at Università Cattolica del Sacro Cuore and a co-founder and Research Affiliate of the Intelligence, Complexity and Technology Lab (ICT Lab) of LIUC – Università Cattaneo. His research interests include the study of social complex systems through simulation models, ranging from healthcare to history, and the exploration of emergent properties in multi-agent systems based on LLMs. He has carried out research stays at the University of Cambridge, Eindhoven, and IFISC. He developed the STEINBOCC algorithm with the LIUC Healthcare Datascience Lab for forecasting pharmaceutical consumption in Italy and co-authored the book “L’Intelligenza Artificiale di Dostoevsky”, published by Il Sole 24 Ore.




Ella Scallan is Assistant Editor for AIhub
