Researchers at Justus Liebig University Giessen recently investigated the relationship between the moralized language used in a tweet and the hate speech found in its replies. Their findings indicate that the more moralized words a tweet contains, the more likely its replies are to contain hate speech. This research may provide clues about what triggers the expression of hate speech on social media.
Before social media, exposure to hate speech was usually limited to people one knew or to discriminatory acts and words in movies or television shows. Today, disparaging fellow humans through hate speech can be a part of daily life on the internet. Anyone with a social media account is a potential target of hate speech and will almost certainly be exposed to it.
Hate speech weaving its way into our online social encounters may disrupt the belief in American social unity so badly that the capacity for democracy to function could be permanently impaired. Kirill Solovev and Nicolas Pröllochs recognized that a better understanding of what prompts individuals to respond with hate speech might eventually lead to innovations in combating it.
In this study, Solovev and Pröllochs considered language to be moralized if “it references ideas, objects, or events constructed in terms of the good of a unit larger than the individual.” To determine whether moralized language on social media leads to hate speech in replies, the research team gathered posts and replies from Twitter, examining 691,234 original tweets and 35.5 million replies. The tweets came from three categories of Twitter users: politicians, news people, and activists.
The researchers first identified whether each original tweet contained moral or moral-emotional language, and then examined the replies for hate speech. Hate speech was defined for this study as “abusive or threatening speech (or writing) that attacks a person or group, typically on the basis of attributes such as ethnicity, religion, sex, or sexual orientation.”
The language in the tweets was assessed by trained research assistants. One assistant read each tweet and reply and identified any moral language or hate speech; a second assistant then did the same to corroborate the work.
These efforts revealed that “each additional moral word was associated with between 10.66% and 16.48% higher odds of receiving hate speech. Likewise, each additional moral-emotional word increased the odds of receiving hate speech by between 9.35% and 20.63%.” The research team concludes that these data demonstrate that moralized language predicts hate speech in social media contexts.
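To make the reported effect sizes concrete: a per-word increase in odds can be read as an odds ratio, and odds ratios compound multiplicatively with each additional word. The sketch below uses only the endpoint percentages quoted above; the function name and the three-word example are illustrative assumptions, not details from the study itself.

```python
def combined_odds_increase(per_word_odds_ratio: float, n_words: int) -> float:
    """Percent increase in the odds of receiving hate speech after
    n additional moral words, assuming the per-word odds ratio
    compounds multiplicatively (the usual odds-ratio reading)."""
    return (per_word_odds_ratio ** n_words - 1) * 100

# Reported range for moral words: 10.66%-16.48% higher odds per word,
# i.e. per-word odds ratios of roughly 1.1066 and 1.1648.
low = combined_odds_increase(1.1066, 3)   # three moral words, lower bound
high = combined_odds_increase(1.1648, 3)  # three moral words, upper bound
print(f"Three moral words: {low:.1f}%-{high:.1f}% higher odds")
```

Under this reading, even a handful of moral words in a tweet corresponds to substantially elevated odds of hateful replies, since the per-word effects multiply rather than add.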
Solovev and Pröllochs recognize that social media posts will never be free of moral language; despite this, the research team felt these results “help to foster social media literacy but may also inform educational applications, counterspeech strategies, and automated methods for hate speech detection.”
The study, “Moralized language predicts hate speech on social media,” was authored by Kirill Solovev and Nicolas Pröllochs.