
Analysis of Reddit reveals how complex toxicity is to address

How toxic is Reddit really? Note that we're not asking whether the so-called "front page of the internet" is toxic, but rather how toxic it can be.

The simple answer to that question is that roughly one in six Reddit users create toxic posts, and about one in eight publish toxic comments. However, an analysis of the website has surfaced some interesting findings that could help address toxicity on Reddit, and potentially in other online communities.

This week, Hind Almerekhi at Hamad Bin Khalifa University in Qatar, together with colleagues, published the article "Investigating toxicity changes of cross-community redditors from 2 billion posts and comments" in the journal PeerJ Computer Science.

“With the aid of crowdsourcing, we built a labeled dataset of 10,083 Reddit comments, then used the dataset to train and fine-tune a Bidirectional Encoder Representations from Transformers (BERT) neural network model. The model predicted the toxicity levels of 87,376,912 posts from 577,835 users and 2,205,581,786 comments from 890,913 users on Reddit over 16 years, from 2005 to 2020,” reads the article's abstract.
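
The paper doesn't reproduce the training code itself, but the pipeline the abstract describes, fine-tuning a pretrained BERT classifier on crowdsourced toxicity labels, can be sketched in a few lines. The snippet below is a minimal illustration using the Hugging Face transformers library; the toy comments, binary label scheme and hyperparameters are our own assumptions, not the authors' actual setup.

```python
# Minimal sketch of fine-tuning BERT for toxicity classification.
# The comments, labels and hyperparameters below are illustrative
# placeholders, not taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # toxic vs. non-toxic
)

# A handful of hypothetical crowdsourced labels (1 = toxic, 0 = not).
comments = ["you are wonderful", "you are an idiot"]
labels = torch.tensor([0, 1])

batch = tokenizer(comments, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes over the toy batch
    optimizer.zero_grad()
    out = model(**batch, labels=labels)  # loss is computed internally
    out.loss.backward()
    optimizer.step()

# Inference: predict the toxicity of an unseen comment.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("go away", return_tensors="pt")).logits
    print("toxic" if logits.argmax(-1).item() == 1 else "non-toxic")
```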

The comments that were analysed came from the top 100 subreddits, so the findings don't account for smaller communities that could potentially be more or less toxic.

As mentioned, 16.11 percent of the users analysed published toxic posts, while only 13.28 percent published toxic comments.

What makes this study all the more interesting, though, is the change in behaviour the researchers observed: users change the toxicity of their posts depending on where they are posting the content.

This change, the researchers say, could be influenced by the community the user is posting in, or simply by changes in how they feel.

Why try to quantify this at all? For one, it could help prevent the spread of toxicity online. The researchers say that with a model such as theirs, moderators could have access to more data than they currently do about a person's toxicity.

“For instance, instead of banning a user for a tasteless contribution they left once, moderators can consider the users’ predominant toxicity and that of their previous content. This approach will prevent automated bots and moderators from excessively penalizing or banning users. This sophisticated user- and content-based toxicity assessment allows moderators to control toxicity and detect malicious users who deserve banning from online communities,” write the researchers.
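
To make the idea concrete, here is a minimal sketch of that history-aware approach: a moderator or bot aggregates the classifier's per-post predictions for a user and only escalates when toxicity is the user's predominant behaviour. The thresholds and function names are illustrative assumptions, not values from the paper.

```python
# Sketch of history-aware moderation: act on a user's predominant
# toxicity rather than a single flagged contribution. Thresholds
# and names are illustrative assumptions, not from the paper.
from typing import Sequence

def moderation_action(toxic_flags: Sequence[bool],
                      warn_ratio: float = 0.2,
                      ban_ratio: float = 0.5) -> str:
    """toxic_flags: per-post toxicity predictions for one user's history."""
    if not toxic_flags:
        return "no_action"
    ratio = sum(toxic_flags) / len(toxic_flags)
    if ratio >= ban_ratio:
        return "ban"        # predominantly toxic user
    if ratio >= warn_ratio:
        return "warn"       # a pattern worth flagging
    return "no_action"      # one-off lapse: don't over-penalise

# One toxic post in an otherwise clean history triggers no ban.
print(moderation_action([False] * 9 + [True]))  # -> no_action
```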

While it could be argued that one tasteless comment is too many, outright bans could push users to fringe communities where they become even more toxic.

This is an area where additional research is required. For instance, this analysis didn’t account for subjectivity, context or categorical characteristics of toxic content.

While research of this nature usually has to be purchased, you can read the entire article for free at the link above. We highly recommend giving it a read, especially if you moderate a community.

[Via – New Scientist]
