What is it about?

Conspiracy theories about politics, health or technology now appear everywhere on social media. They can undermine trust in democratic institutions and in experts, but we know surprisingly little about how they actually spread from person to person. In this study we followed 98 politically active accounts on X (formerly Twitter) in Poland over 15 months. We collected almost 15 million tweets, retweets, replies and quote tweets from their conversations. For each account, we mapped who they interacted with and whether any of these contacts shared conspiracy-theory content. We then compared four types of users: people who never shared conspiracy theories, people who shared them throughout the whole period, people who started sharing them, and people who stopped.

The key finding is that what matters most is not how many conspiracy-minded users you see overall, but what share of your close contacts share such content. Even if only 1–5 out of 100 accounts in someone’s immediate network spread conspiracy theories, the chances that this person will also share them rise sharply. We also find that one-off interactions can be surprisingly powerful: even a single contact with a conspiracy-sharing account can matter more than repeated exchanges. Overall, the study shows that conspiracy theories can spread through ordinary political conversations among a small minority of users, rather than only within large fringe communities.
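The exposure measure described above can be illustrated with a minimal sketch. This is not the authors' actual pipeline (which is not reproduced on this page); the function name, data layout and threshold are hypothetical, chosen only to show the idea of measuring the *proportion* of conspiracy-sharing contacts in each account's immediate network rather than the raw count.

```python
# Hypothetical sketch, not the study's code: given interaction edges
# (ego, contact) and a set of accounts known to share conspiracy-theory
# content, compute for each ego the proportion of immediate contacts
# that share such content.
from collections import defaultdict

def contact_shares(edges, conspiracy_accounts):
    # Build each account's set of distinct immediate contacts.
    contacts = defaultdict(set)
    for ego, contact in edges:
        contacts[ego].add(contact)
    # Share = flagged contacts / all contacts, per ego.
    shares = {}
    for ego, neigh in contacts.items():
        flagged = sum(1 for c in neigh if c in conspiracy_accounts)
        shares[ego] = flagged / len(neigh)
    return shares

# Toy example: "a" has 4 contacts, one of whom ("x") shares
# conspiracy content, so a's exposure share is 1/4 = 0.25.
edges = [("a", "x"), ("a", "y"), ("a", "z"), ("a", "w"),
         ("b", "y"), ("b", "z")]
print(contact_shares(edges, {"x"}))  # {'a': 0.25, 'b': 0.0}
```

Under the study's headline finding, even a small value of this share (1–5%) is associated with a sharply higher chance of sharing conspiracy content, which is why a proportion rather than an absolute count is the natural quantity to track.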


Why is it important?

This article shows that a very small minority of conspiracy-minded contacts can strongly influence how political content spreads on X (formerly Twitter). Using detailed network data from nearly 15 million posts, it demonstrates that the proportion of conspiracy-sharing accounts in a person’s immediate network is more important than the total number of such accounts. Even when only 1–5% of close contacts share conspiracy theories, the likelihood that someone will start or continue spreading them increases sharply. The study also challenges a common assumption in diffusion research: it finds that weak, one-off ties are often more influential than repeated interactions. This suggests that conspiracy theories do not always need strong social reinforcement to spread online. These insights are timely for researchers, journalists and platform designers who seek to limit the harms of online misinformation. They suggest that early interventions targeting small pockets of exposure and everyday conversational networks may be more effective than focusing only on highly visible viral posts or large extremist communities.

Read the Original

This page is a summary of: Social influence and network structure: How conspiracy theories spread on social media, Social Science Research, January 2026, Elsevier, DOI: 10.1016/j.ssresearch.2025.103282.
