This project, led by Dr. Jeff Gill (School of Public Affairs), Dr. Nathalie Japkowicz (College of Arts and Sciences), and Professor Wendy Melillo (School of Communication), studies coded antisemitic hate speech and the evolution of antisemitic tropes, with the goal of developing machine-learning tools for detection and monitoring.
Antisemitic Terminology Detection
Coded Term Discovery for Online Hate Speech Detection
The proliferation of online hate speech has created a difficult problem for social media platforms. A particular challenge is the use of coded language by groups seeking both to create a sense of belonging among their members and to evade detection.
Note: Access the GitHub site that contains material related to this paper.
Online Hate Speech Detection
Seeking Optimal Human/Machine Collaborative Practice in Antisemitic Terminology Detection
This study concerns the use of antisemitic language on loosely moderated, extremist alt-right social media. It compares, contrasts, and combines human- and machine-based approaches for discovering antisemitic terminology, both coded and non-coded, in such media.
Can GPT-4 Detect Subcategories of Hatred?
This paper investigates how well the GPT-4 Large Language Model (LLM) can identify the trope expressed in a hateful social media post. In particular, we examine GPT-4's performance on trope classification in Islamophobic and antisemitic posts. The results suggest that GPT-4 still has significant gaps in its understanding of human text in the context of hate speech, opening the door to future research in this area.
Racism and Intolerance on Social Media
Monitoring The Evolution Of Antisemitic Hate Speech On Extremist Social Media
Racism and intolerance on social media contribute to a toxic online environment that may spill offline to foster hatred and eventually lead to physical violence. That is the case with online antisemitism, the specific category of hatred considered in this study.