What is it about?

The article studies why most hate speech posts connected to leading Spanish news media on X (formerly Twitter) remain online. It examines 2.1M messages to see how hate type, news outlet, and engagement affect whether posts are deleted or persist over time. The authors propose an "inverted pyramid" model that challenges traditional "pyramid of hate" frameworks: rather than severity determining deletion, algorithmic virality and engagement drive moderation outcomes. This reflects how social platforms operate as self-reproducing systems that reward emotional engagement over ethical considerations.

Why is it important?

This research is important for several reasons:

1. It challenges the effectiveness of current moderation. The finding that 88% of hate messages persist after three years reveals a fundamental failure in content moderation systems. Most platforms claim to remove hate speech, but this study demonstrates that the vast majority stays online indefinitely, contradicting public narratives about platform safety.

2. It reveals flawed moderation logic. The study shows that platforms do not moderate based on severity or harm intensity; instead, algorithmic virality and engagement determine what gets deleted. The most damaging messages, those that spark high engagement, are precisely the ones most likely to survive, creating a perverse incentive structure that amplifies harm.

3. It explains real-world harms to vulnerable groups. Persistent hate speech functions as a latent resource for harm against women, immigrants, LGBTQ+ communities, and other marginalized groups. Undeleted messages serve as cyclically reactivated nodes during social crises, repeatedly resurfacing to normalize dehumanizing narratives and fuel offline violence.

4. It identifies systemic structural problems. The research demonstrates that this is not a detection or capacity problem; it is structural. Platforms operate as autopoietic systems that reproduce themselves through emotional engagement, regardless of ethical considerations. This is by design, not accident.

5. It provides actionable insights for policy and regulation. By identifying that categorical factors (hate type, media source) and engagement patterns drive persistence, the study suggests that effective moderation requires human-AI collaborative approaches rather than purely automated systems, informing regulatory and platform policy discussions.

This is particularly urgent in the Spanish context, where far-right political movements and polarization have intensified hate speech targeting specific groups.

Perspectives

The following key perspectives emerge from the study's analysis:

Platform & Systemic Perspective
The research fundamentally reframes how we understand content moderation. Rather than a technical or capacity problem, hate speech persistence reveals an autopoietic system: a self-reproducing ecosystem in which algorithms reward engagement regardless of ethical harm. Social platforms operate as closed systems that amplify emotionally charged content through dynamic homophily, creating self-reinforcing cycles in which virality supersedes severity.

Policy & Regulatory Perspective
The findings expose critical gaps in current regulation and platform accountability. Traditional moderation frameworks assume that intensity drives deletion decisions, but the data show that 88% of hate messages survive because moderation operates through categorical pattern recognition, not severity assessment. This creates a compliance illusion: platforms claim to moderate while algorithmically protecting high-engagement content, including the most damaging messages.

Academic Contribution
The research challenges established theoretical models. The "pyramid of hate" assumes escalation from incivility to threats, with increasingly severe consequences. The study proposes an "inverted pyramid" instead: algorithmic virality determines survival, not harm intensity. This shifts understanding from individual message-level decisions to structural platform dynamics rooted in engagement-driven business models.

News Media Accountability
Digital news outlets amplify hate through editorial choices and audience dynamics. The correlation between media coverage and hate message persistence (r = 0.67) indicates that news outlets function as distribution nodes rather than moderators. Political hate showed the lowest deletion rate (10.92%), suggesting that editorial alignment helps such messages survive within news cycles.

Vulnerable Groups Perspective
The research identifies a particularly troubling asymmetry: messages targeting women, LGBTQ+ communities, and immigrants persist, aided by sophisticated evasion strategies (coded language, deliberate misspelling of slurs, doxxing threats). The deletion rate for sexual hate (11.14%) barely exceeds that for xenophobic hate (11.10%), despite qualitatively distinct harms involving privacy violations and offline violence, revealing algorithmic indifference to contextual harm.

Operational Latency Framework
The study introduces the concept of "operational latency": undeleted messages function as dormant nodes that are cyclically reactivated during social crises. This perpetuates dehumanizing narratives not through constant visibility but through episodic reanimation, normalizing extreme discourse while eroding institutional media's capacity to shape public agendas.

Intervention & Solutions Perspective
Rather than purely automated systems, the research advocates human-AI collaborative moderation with semantic analysis capable of detecting subtle hate expressions in context. Critical interventions include independent monitoring systems separate from the platforms, pre-bunking techniques, and editorial protocols tailored to Spain's political and cultural context, an especially urgent need given the dismantling of verification programs after 2022.

Prof. Elias Said-Hung
Universidad Internacional de La Rioja

Read the Original

This page is a summary of: Why does hatred persist in X? Insights from the Spanish media, Media International Australia, January 2026, SAGE Publications.
DOI: 10.1177/1329878x251414982.
You can read the full text via the DOI above.

