Google can use machine learning to tell you if your comments are toxic
Take a look at the comment sections around the web that still exist and the chances are high that you’ll run across toxicity in some form or another.
In an attempt to reduce the amount of abusive commentary in online discussions, Google and Jigsaw have partnered to create Perspective.
This piece of software uses machine learning to identify comments which may be considered off-colour.
The plan for Perspective is to make the API available to publishers so that they can more easily separate toxic comments from those which contribute to a discussion.
“To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers,” Jigsaw president Jared Cohen said in a blog post.
“Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.”
Perspective is currently being tested by The New York Times, where it is helping moderators sort through comments faster.
The technology could also be built into a website’s comments section to show a commenter, before they post, how their words might be perceived as toxic.
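As a rough illustration of how a publisher might wire this up, here is a minimal Python sketch based on the publicly documented shape of the Perspective `comments:analyze` request and response. The field names follow Google's public documentation, but the feedback function, the 0.8 threshold, and the canned response are purely illustrative assumptions, not part of the product.

```python
# Build a request body in the shape Perspective's comments:analyze
# endpoint expects (field names per the public docs; treat as assumptions).
def build_analyze_request(comment_text):
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

# Pull the summary toxicity probability (0.0 to 1.0) out of a response.
def toxicity_score(response):
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Hypothetical publisher-side feedback: warn the commenter before posting
# when the model rates the comment as likely toxic. The 0.8 threshold is
# an arbitrary choice for illustration.
def feedback(response, threshold=0.8):
    score = toxicity_score(response)
    if score >= threshold:
        return "This comment may be seen as toxic (score %.2f)." % score
    return None

# A canned response standing in for a live API call.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}

print(feedback(sample_response))
```

In a real deployment the request body would be POSTed to the Perspective API with an API key, and the warning string surfaced in the comment form before submission.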
We hope to see this little bit of software proliferate around the web. Who knows, perhaps telling the toxic trolls that infest comment sections that what they are doing is wrong, as they’re doing it, will make them delete that nasty comment about your hairstyle.
About Author
Brendyn Lotz writes news, reviews, and opinion pieces for Hypertext. His interests include SMEs, innovation on the African continent, cybersecurity, blockchain, games, geek culture and YouTube.