Even though a precise definition of hate speech is hard to pin down, it is a well-known phenomenon, pervasive online and, studies show, increasingly so. It is particularly insidious in that, as demonstrated by a review of the literature, exposure to hate speech primes individuals to experience feelings of contempt and behave in antisocial ways towards victims, both individually and collectively. This, of course, reinforces the use of hate speech.
A desire to understand the epidemic-like spread of hate speech prompted two Polish researchers to devise an agent-based mathematical model, in which a simple set of rules, played out iteratively across hundreds of independent, interacting digital agents, can reveal important patterns and provide a baseline against which researchers can measure real-world observations. Their study, "Hate Speech Epidemic. The Dynamic Effects of Derogatory Language on Intergroup Relations and Political Radicalization," appeared in Advances in Political Psychology.
“The problem of hate speech is increasingly visible both in people’s online environments, as well as in political discourses,” explained study author Michał Bilewicz, an associate professor of psychology at the University of Warsaw.
“Twenty years ago you would not find a single prominent politician using such language in his or her political speeches or statements. Many survey studies show that day by day our exposure to hate speech increases, both online and offline. This motivated our desire to explain the dynamics of hate speech proliferation in social media and the psychological consequences of being exposed to hate speech.”
“We used the term ‘hate speech epidemic’ because the pattern of hate speech proliferation resembles the spread of epidemic diseases,” Bilewicz said.
Following an extensive and highly insightful literature review, the authors set about defining their model. The literature reveals three important properties, which the authors use as agent parameters.
First, each agent is given a level of contemptuous prejudice (CP), randomly assigned between 0 and 1, which changes over time as a result of exposure to hate speech. Second, individuals respond to the level of hate speech among their peers, adjusting their own up or down to match it; an agent's neighbors thus partly determine its trajectory in the use of hate speech. Third, individuals vary in their emotional reaction to hate speech, which becomes blunted with exposure.
According to the model, agents do not engage in hate speech if they already have a low level of CP, or if they are constrained by anti-discriminatory norms (neighbors who don't use hate speech). Agents use hate speech if they have a high level of CP and their neighbors use hate speech (radicalization), or if they have a high level of CP and fail to recognize that their neighbors do not (a "faux pas").
In each iteration, an agent who is exposed to hate speech will increase in their level of contemptuous prejudice and decrease in their sensitivity to hate speech (and thus their ability to identify it or respond to social pressure).
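The rules above can be sketched as a minimal toy simulation. The specific thresholds, update increments, and ring-shaped neighborhood used here are illustrative assumptions for the sake of a runnable example, not the authors' published parameters:

```python
import random

def simulate(n=100, steps=50, seed=0):
    """Toy sketch of the described agent-based dynamics.
    All numeric values (thresholds, increments) are illustrative
    assumptions, not the study's actual parameters."""
    rng = random.Random(seed)
    # Each agent has a contemptuous prejudice (CP) level in [0, 1]
    # and a sensitivity to hate speech that blunts with exposure.
    cp = [rng.random() for _ in range(n)]
    sensitivity = [1.0] * n
    speaking = [False] * n  # did the agent use hate speech last step?

    for _ in range(steps):
        new_speaking = []
        for i in range(n):
            # Neighbors on a ring: the two adjacent agents.
            peer_hate = speaking[(i - 1) % n] or speaking[(i + 1) % n]
            if cp[i] < 0.5:
                speaks = False          # low CP: no hate speech
            elif peer_hate:
                speaks = True           # high CP + hateful peers: radicalization
            else:
                # High CP but pro-social neighbors: a "faux pas" occurs
                # only if sensitivity is too blunted to read the norm.
                speaks = sensitivity[i] < 0.3
            new_speaking.append(speaks)
            if peer_hate:
                # Exposure raises CP and lowers sensitivity.
                cp[i] = min(1.0, cp[i] + 0.05)
                sensitivity[i] = max(0.0, sensitivity[i] - 0.1)
        speaking = new_speaking
    return sum(speaking) / n  # fraction of agents using hate speech
```

Even in this stripped-down form, the feedback loop is visible: exposure raises prejudice and dulls sensitivity, which makes future hate speech more likely, which increases exposure for the neighbors in turn.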
The results of the model demonstrate just how easily a population can slide into widespread use of hate speech, and help examine why and how this might occur.
It’s important to remember that this is only a model. Nonetheless, it does provide some important insights. For example, in almost all cases where agents did not use hate speech despite high levels of contempt, it was because their neighbors were “low-prejudiced nonhaters.” Likewise, individuals with relatively low levels of CP may still engage in hate speech, depending on their neighbors’ propensity to do so. Additionally, agents tended to form many small clusters of hate, rather than large, centralized clumps of hate speech.
“The main message from this study is quite fatalistic,” Bilewicz told PsyPost. “People immersed in social media become less sensitive to hate speech, they start treating offensive language as something normal. This, in turn, alters their perception of minorities, immigrants, or gay people — as well as emotions felt toward these groups. Finally, they start using hate speech themselves. We observed that this process is a consequence of moving from traditional media as key sources of information to social media and online journalism.”
The authors also provide some real-world evidence to corroborate their model. The unregulated social network Gab, which has no moderators and has deliberately forgone hate-speech policies, saw steady increases in hate speech from 2016 to 2018. Furthermore, new Gab members became hateful more rapidly as hate speech increased, and the language used in the Gab community became more homogeneously hateful. Finally, hateful users were more likely to become popular in their network and to become influencers.
The model, as the authors note, is limited by its simplicity and by the fact that it moves stepwise toward ever more hate speech, curbed only by social norms. In the real world, these are clearly not the only influences: individuals can become less prone to use hate speech, and even less contemptuously prejudiced, with education, time, and experience. Nevertheless, there is undeniable growth in hate speech across many parts of society.
“The emotional nature of how hate speech affects its recipients is still unclear. The term ‘hate speech’ suggests that hate is the key emotion elicited by offensive language. Our studies suggest that it is contempt, and sometimes even enjoyment, rather than hate. The fact that offensive language can generate positive emotions might be responsible for its very rapid spread in the society,” Bilewicz told PsyPost.
These simple but powerful observations about hate speech require experimental and observational evidence to better understand why and how hate speech spreads, and what we can do to combat it. If mere exposure to hate speech dulls individuals to its harmful nature, as has been shown time and again, what avenues are there for reversing these effects?
The present research suggests that, on a personal level, speaking up against hate speech in one’s own social circles can greatly limit its spread. Doing so at the level of social media platforms could powerfully curb hate speech as well. The model is based on agent interaction, rather than spontaneous self-education, so its implications are necessarily social in nature, but no less powerful for that.
Scientific literature shows that hate speech paves the way for more widespread, violent, and aggressive antisocial behavior, up to and including genocide. The Nazis made studied and cruelly calculated use of such dynamics to convince entire populations of the necessity of exterminating the Jews, and Hutu propaganda before the Rwandan genocide used similar tactics.
Most frighteningly, even social media platforms with strict anti-hate speech policies seem to have great difficulty curbing the growth of hate speech.
Globally, hate speech is on the rise. Simple countermeasures, like speaking out against hate speech when one encounters it, can be highly effective, but understanding the sociopsychological mediators of its spread is paramount, whether they are modelled mathematically or observed empirically.
“The key question to be addressed is how to translate our knowledge about the hate speech epidemic into possible effective strategies counteracting the proliferation of hate speech,” Bilewicz said. “Currently, working with companies applying artificial intelligence in social media environments, we are experimenting with real life interventions against hate speech that are based on our theoretical models that were developed in laboratory experiments and correlational studies.”