Online hate communities are not confined to isolated corners of the internet. A new study published in npj Complexity shows how these groups are increasingly intersecting with mainstream online spaces. Using advanced tools to map the online “hate universe,” researchers found that around 50 million accounts in hate communities grew closer not only to each other but also to a broad mainstream audience of billions in the weeks surrounding the 2020 United States presidential election.
The rapid evolution of online hate speech has created a pressing need for a deeper understanding of how it spreads, adapts, and impacts broader online communities. While previous research has examined the relationship between real-world events and the proliferation of hate speech, much of it treated hate groups as isolated entities or focused on single platforms.
This fragmented approach left a gap in understanding how hate networks function across multiple platforms and how they interact with mainstream communities. The new study addressed this gap by mapping the “hate universe” at an unprecedented scale and resolution.
“We hear a lot about online hate, particularly around the U.S. political system,” said study author Neil Johnson, a professor of Physics at the George Washington University and head of the Dynamic Online Networks Lab. “There is a common perception that such hate is caused by some extreme people scattered across the Internet. We set out to test if this is true. We found the opposite.”
The research team began with a list of hate communities identified through public resources, such as the Anti-Defamation League’s Hate Symbols Database and reports from the Southern Poverty Law Center. They used cross-posting links—instances where content from one community appeared in another—to expand their dataset. By tracing these connections, the researchers uncovered a web of interconnected hate communities. Each community was reviewed by subject matter experts to confirm its status as part of the hate network.
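The expansion step described above is essentially a breadth-first traversal: start from seed communities and follow cross-posting links outward until no new communities appear. A minimal sketch of that idea, using hypothetical community names and a toy link map (the study's actual data spanned multiple platforms and required expert review of each discovered community):

```python
from collections import deque

def expand_network(seeds, crosspost_links):
    """Breadth-first expansion from seed communities along cross-posting
    links. `crosspost_links` maps a community to the communities its
    content was shared into (hypothetical structure for illustration)."""
    discovered = set(seeds)
    queue = deque(seeds)
    while queue:
        community = queue.popleft()
        for target in crosspost_links.get(community, ()):
            if target not in discovered:
                discovered.add(target)  # in the study, each new find was expert-reviewed
                queue.append(target)
    return discovered

# Toy data: seeds from public resources, links from observed cross-posts.
links = {"groupA": ["groupB"], "groupB": ["groupC", "mainstream1"]}
print(expand_network({"groupA"}, links))
```

The traversal terminates because each community is enqueued at most once, so the result is the full set of communities reachable from the seeds.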
The result was a comprehensive map of the hate universe that accounted for billions of individual user accounts. These were not limited to members of hate communities but also included users in mainstream online spaces connected to the hate universe through cross-posting links.
The study focused on changes in the hate universe during the 2020 U.S. presidential election and its aftermath, particularly around key events like election day (November 3) and the Capitol attack (January 6, 2021). The researchers analyzed shifts in the network’s structure, including its interconnectedness and clustering, and examined the types of hate speech that surged during this period.
“Before our study, nobody had ever looked at this at scale — which is like going to a doctor and the doctor only diagnosing you based on what they literally see with their eyes,” Johnson told PsyPost.
“The good thing is that understanding it — and specifically having a map of it, like a ‘Google map’ of where hate lives and how it evolves — is the first step to managing it since it tells you where to intervene, like an X-ray tells a doctor where you need treatment.”
The researchers found that the 2020 election served as a catalyst for the growth and consolidation of the hate universe. The number of links between hate communities surged on election day, with a 41.6% increase compared to November 1. This spike intensified on November 7, the day Joe Biden was declared president-elect, with a 68% increase in new connections. The largest spike occurred around January 6, during the Capitol attack.
These new connections led to a “hardening” of the hate universe’s structure, as measured by standard network metrics. The clustering coefficient, which measures the tendency of nodes to form tightly connected groups, rose by 164.8% after January 6. This indicated that previously isolated hate communities were forming dense clusters, making the network more cohesive and resilient.
Similarly, the assortativity of the network, which reflects the tendency of similar nodes to connect, increased by 27%, suggesting that communities with similar characteristics were bonding more closely.
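The clustering coefficient the study reports can be made concrete with a small sketch. For each node, it is the fraction of that node's neighbor pairs that are themselves linked; the network-level value is the average over all nodes. Below is a minimal pure-Python implementation on a toy graph (two triangles joined by a single bridge edge, a stand-in for loosely connected communities; the real network spanned millions of accounts):

```python
from itertools import combinations

def average_clustering(adj):
    """Mean local clustering coefficient of an undirected graph,
    given as {node: set_of_neighbors}."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with fewer than 2 neighbors contribute 0
        # count linked pairs among this node's neighbors
        links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

# Toy graph: two triangles (0-1-2 and 3-4-5) joined by the edge 2-3.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

print(round(average_clustering(adj), 3))  # → 0.778
```

A rising value of this metric, as the study observed after January 6, means neighbors of the same community increasingly link to each other, producing the dense, cohesive clusters described above.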
At the same time, the number of hate communities decreased by nearly 20%, while the size of the largest community grew by over 16%. This consolidation suggested that smaller groups were merging into larger, more dominant ones, creating a more unified and robust hate network.
“Hate increases significantly after the election, but not just in its volume—also in its organization,” Johnson said. “The hate networks link more to each other and become stronger. Elections help hate organize and strengthen.”
The researchers employed natural language processing models to analyze the content shared within these communities. These models were trained to classify hate speech into seven categories: race, gender, religion, antisemitism, gender identity or sexual orientation, immigration, and ethnic nationalism. The models demonstrated high accuracy, which was further validated by human reviewers.
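The classification interface can be sketched as tagging each piece of content with zero or more of the seven categories. The toy version below uses simple keyword matching purely to illustrate the input/output shape; the study's actual classifiers were trained language models validated by human reviewers, and the keyword sets here are placeholders:

```python
# Placeholder keyword sets, one per category from the study.
CATEGORIES = {
    "race": {"race"},
    "gender": {"gender"},
    "religion": {"religion"},
    "antisemitism": {"antisemitic"},
    "gender identity or sexual orientation": {"orientation"},
    "immigration": {"border", "immigration"},
    "ethnic nationalism": {"nationalism"},
}

def classify(text, keywords):
    """Return the set of categories whose keywords appear in the text.
    A toy stand-in for the study's trained NLP models."""
    tokens = set(text.lower().split())
    return {cat for cat, kws in keywords.items() if tokens & kws}

print(classify("rhetoric about the border surged", CATEGORIES))
```

Running the classifier over timestamped posts and counting category hits per day is what makes surges like those reported below measurable.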
The content shared within the hate universe changed significantly during the election period. For instance, anti-immigration rhetoric saw a dramatic surge of 269.5% in the days following November 7. Similarly, ethnically based hate and antisemitic content rose by 98.7% and 117.57%, respectively. Another wave of anti-immigration sentiment occurred around January 6, with a 108.69% increase compared to the days leading up to the Capitol attack.
One of the study’s most concerning findings was the increased interaction between hate communities and mainstream social media groups. During the election period, the hate universe expanded its reach into mainstream communities, creating new pathways for hate speech to flow into broader online spaces.
The researchers identified a significant uptick in cross-posting behaviors, where content from hate groups was shared in mainstream groups or communities. These connections blurred the boundaries between extremist networks and mainstream online spaces, effectively creating a bridge that allowed hate speech to penetrate wider audiences.
“Hate is ‘cooking’ in communities on the many smaller platforms, in communities that are strongly linked to each other (i.e., sharing content and opinions all the time), and they are linked directly into about 1 billion of the rest of us,” Johnson told PsyPost.
“So you may be part of a community of parents or pizza lovers on Facebook, but the comments and replies you see are coming straight from the large number of people across smaller platforms who have linked into your community without anyone knowing.”
This phenomenon highlights the challenge of addressing online hate. Efforts to moderate or disrupt hate networks may inadvertently push their rhetoric into mainstream communities, where it can evade detection or appear less overtly extremist.
So-called “fringe” content often “does not live at the fringe of the Internet,” Johnson said. “It is right next door to all of us — and we are probably all seeing it in the replies and comments to things in our everyday communities online.”
The findings have practical implications. The researchers argue that policies targeting only widely used platforms or particular communities are unlikely to effectively address hate and other online harms. “Policymakers have it wrong when they only pull the CEOs of the major companies in front of them to deal with online hate,” Johnson said.
The study, “How U.S. Presidential elections strengthen global hate networks,” was authored by Akshay Verma, Richard Sear, and Neil Johnson.