Characterizing and Detecting Hateful Users on Twitter
March 23, 2018 · Declared Dead · International Conference on Web and Social Media
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Manoel Horta Ribeiro, Pedro H. Calais, Yuri A. Santos, Virgílio A. F. Almeida, Wagner Meira
arXiv ID
1803.08977
Category
cs.CY: Computers & Society
Cross-listed
cs.SI
Citations
279
Venue
International Conference on Web and Social Media
Last Checked
2 months ago
Abstract
Most current approaches to characterize and detect hate speech focus on content posted in Online Social Networks. They face difficulties in collecting and annotating hateful speech due to the incompleteness and noisiness of OSN text and the subjectivity of hate speech. These limitations are often mitigated with constraints that oversimplify the problem, such as considering only tweets containing hate-related words. In this work we partially address these issues by shifting the focus towards users. We develop and employ a robust methodology to collect and annotate hateful users which does not depend directly on a lexicon and where users are annotated based on their entire profile. This results in a sample of Twitter's retweet graph containing 100,386 users, out of which 4,972 were annotated. We also collect the users who were banned in the three months that followed the data collection. We show that hateful users differ from normal ones in terms of their activity patterns, word usage, and network structure. We obtain similar results when comparing the neighbors of hateful vs. normal users, and also suspended vs. active users, increasing the robustness of our analysis. We observe that hateful users are densely connected, and thus formulate the hate speech detection problem as a task of semi-supervised learning over a graph, exploiting the network of connections on Twitter. We find that a node embedding algorithm, which exploits the graph structure, outperforms content-based approaches for the detection of both hateful (95% AUC vs. 88% AUC) and suspended users (93% AUC vs. 88% AUC). Altogether, we present a user-centric view of hate speech, paving the way for better detection and understanding of this relevant and challenging issue.
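The abstract's central modeling step is to cast detection as semi-supervised learning over the retweet graph: only a small annotated subset of users carries labels, a purely structural node embedding is computed for every user, and a classifier trained on the labeled users is evaluated by AUC. As a rough illustration of that pipeline shape only (the toy graph, the synthetic labels, the spectral embedding, the logistic regression classifier, and all hyperparameters below are stand-ins, not the authors' data or embedding algorithm), a minimal Python sketch:

```python
# Illustrative sketch of the graph-based, semi-supervised formulation described
# in the abstract. Everything here is a stand-in; the paper's actual dataset and
# node embedding algorithm are not reproduced.
import networkx as nx
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy stand-in for the sampled retweet graph: users are nodes, retweets are edges.
n = 500
graph = nx.powerlaw_cluster_graph(n=n, m=3, p=0.1, seed=0)

# Toy ground truth: a small minority of users is "hateful".
labels = np.zeros(n, dtype=int)
labels[rng.choice(n, size=25, replace=False)] = 1

# Pretend only 100 users carry a manual annotation (both classes represented),
# loosely mirroring the ~5k annotated users out of ~100k in the abstract.
hateful_idx = np.flatnonzero(labels == 1)
normal_idx = np.flatnonzero(labels == 0)
annotated = np.concatenate([
    rng.choice(hateful_idx, size=10, replace=False),
    rng.choice(normal_idx, size=90, replace=False),
])
unlabeled = np.setdiff1d(np.arange(n), annotated)

# Node embedding computed from graph structure alone, with no tweet content.
adjacency = nx.to_numpy_array(graph)
embedding = SpectralEmbedding(n_components=16, affinity="precomputed").fit_transform(adjacency)

# Fit on the annotated users, then score every other user in the graph.
clf = LogisticRegression(max_iter=1000).fit(embedding[annotated], labels[annotated])
scores = clf.predict_proba(embedding[unlabeled])[:, 1]

# With random toy labels this AUC sits near 0.5; the point is the pipeline, not the number.
print("AUC on held-out users:", roc_auc_score(labels[unlabeled], scores))
```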
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Computers & Society
Artificial Intelligence: the global landscape of ethics guidelines – R.I.P. 👻 Ghosted
The role of artificial intelligence in achieving the Sustainable Development Goals – R.I.P. 👻 Ghosted
Green AI – R.I.P. 👻 Ghosted
Principles alone cannot guarantee ethical AI – R.I.P. 👻 Ghosted
Tackling Climate Change with Machine Learning – R.I.P. 👻 Ghosted
Died the same way – 👻 Ghosted
Language Models are Few-Shot Learners – R.I.P. 👻 Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library – R.I.P. 👻 Ghosted
XGBoost: A Scalable Tree Boosting System – R.I.P. 👻 Ghosted