A recent study has revealed that the speech patterns found in hate speech communities on Reddit exhibit notable similarities to those in forums dedicated to certain psychiatric disorders. Conducted by researchers Dr. Andrew William Alexander and Dr. Hongbin Wang from Texas A&M University, this analysis appears in the open-access journal PLOS Digital Health and raises significant questions about the intersection of mental health and online behavior.
The rise of social media platforms has intensified concerns about their role in spreading hate speech and misinformation, both of which can fuel prejudice and discrimination. Previous research has linked specific personality traits to the propensity to share hate speech or misinformation, but the connection between psychological health and such online activity has remained less clear.
To investigate these issues, Alexander and Wang employed artificial intelligence tools to analyze posts from 54 Reddit communities, focusing on those associated with hate speech, misinformation, and psychiatric disorders. The selected groups included r/ADHD, which discusses attention-deficit/hyperactivity disorder, r/NoNewNormal, known for disseminating COVID-19 misinformation, and r/Incels, a community banned for promoting hate speech.
Using the large language model GPT-3, the researchers converted thousands of posts from these communities into numerical representations called “embeddings,” which capture underlying speech patterns. They then analyzed these embeddings with machine-learning techniques and topological data analysis, a mathematical method for mapping the shape of high-dimensional data.
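To make the general approach concrete, here is a minimal sketch in Python, assuming the OpenAI Python SDK and scikit-learn are available. The community samples, the embedding model name, and the centroid-similarity comparison are illustrative stand-ins, not the authors' actual GPT-3 and topological data analysis pipeline.

```python
# Illustrative sketch: embed posts from different communities and compare
# communities by the cosine similarity of their mean embeddings. This is a
# simplification; the study itself combined GPT-3 embeddings with machine
# learning and topological data analysis.
import numpy as np
from openai import OpenAI  # assumes the OpenAI Python SDK is installed
from sklearn.metrics.pairwise import cosine_similarity

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def embed_posts(posts: list[str]) -> np.ndarray:
    """Convert a list of posts into one embedding vector per post."""
    resp = client.embeddings.create(
        model="text-embedding-3-small",  # stand-in; the study used GPT-3 embeddings
        input=posts,
    )
    return np.array([item.embedding for item in resp.data])


# Hypothetical samples standing in for scraped subreddit posts.
communities = {
    "r/Incels": ["example post 1", "example post 2"],
    "r/BPD": ["example post 3", "example post 4"],
}

# One centroid vector per community, then pairwise similarity between them.
centroids = np.vstack(
    [embed_posts(posts).mean(axis=0) for posts in communities.values()]
)
sim = cosine_similarity(centroids)
print(dict(zip(communities, sim[0])))  # similarity of the first community to each
```

Comparing centroids is a deliberate simplification for illustration; the published analysis instead used topological data mapping to relate the structure of the communities' embeddings.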
The findings indicated a striking resemblance between the speech patterns of hate speech communities and those of communities discussing complex post-traumatic stress disorder and certain personality disorders, specifically borderline, narcissistic, and antisocial personality disorder. Misinformation communities, by contrast, showed weaker links to psychiatric conditions, though some connections to anxiety disorders were noted.
Importantly, these results do not imply that individuals with psychiatric disorders are more likely to engage in hate speech or spread misinformation. The researchers noted that it was impossible to ascertain whether the posts analyzed were made by individuals diagnosed with any such conditions.
The authors propose that these findings could lead to innovative strategies for addressing online hate speech and misinformation, potentially incorporating therapeutic techniques used for psychiatric disorders. They emphasize that “the speech patterns of those participating in hate speech online have strong underlying similarities with those participating in communities for individuals with certain psychiatric disorders.”
Dr. Alexander elaborated on the implications of the study, stating, “While we looked for similarities between misinformation and psychiatric disorder speech patterns as well, the connections we found were far weaker. Most people engaging in or disseminating misinformation appear to maintain healthy psychological profiles.”
He also highlighted the potential impact of prolonged exposure to hate speech communities, suggesting that such environments may erode empathy toward others over time. Dr. Alexander concluded, “It could be that the lack of empathy fostered by hate speech influences individuals, leading them to exhibit traits akin to those associated with Cluster B personality disorders.” Cluster B is the diagnostic grouping that includes the borderline, narcissistic, and antisocial personality disorders identified in the study.
The study underscores the need for further exploration into the links between online behavior and psychological health, particularly as social media continues to shape interactions in the digital age. More research is necessary to clarify these complex relationships and develop effective interventions.
For more information on this study, refer to Alexander AW, Wang H. “Topological data mapping of online hate speech, misinformation, and general mental health: A large language model-based study.” PLOS Digital Health, published on July 29, 2025.
