Cambridge Researchers Launch ‘Hate O’Meter’ Software to Treat ‘Hate Speech’ as ‘Malware’

Boi Boi

@yobos · Jan 4 · 2 min read


Researchers at the UK’s University of Cambridge have created software that treats online “hate speech” the way antivirus tools treat a computer “virus” or “malware.” The “Hate O’Meter” warns users before they view suspected “hate speech” and lets them choose whether to see the content at all.

The study’s authors, Stefanie Ullmann and Marcus Tomalin, argue in the journal Ethics and Information Technology that establishment technology platforms such as Facebook, Twitter, and Google merely react to offensive posts, removing them only after enough complaints accumulate.

Such “reactive” methods, the researchers say, still “cause the recipients psychological harm” because users view the content first, meaning “the harm has already been inflicted.”

Additionally, the “reactive” process leads to charges of censorship, the authors state.

Ullmann and Tomalin recommend, instead, their “Hate Speech detection” software “that is analogous to the quarantining of malicious computer software.”

The researchers explain: “If a given post is automatically classified as being harmful in a reliable manner, then it can be temporarily quarantined, and the direct recipients can receive an alert, which protects them from the harmful content in the first instance. The quarantining framework is an example of more ethical online safety technology that can be extended to the handling of Hate Speech. Crucially, it provides flexible options for obtaining a more justifiable balance between freedom of expression and appropriate censorship.”

As Campus Reform noted, the software would use a “sophisticated algorithm” that examines a particular post, as well as all content posted by that user, to make a determination of “hate speech.”

If a post is judged not to be potential “hate speech,” it appears in the social media feed as normal. If it is flagged as possible “hate speech,” readers instead see a window offering them the choice of opting in to view the post.
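The quarantine flow the researchers describe can be sketched in a few lines. This is a purely illustrative mock-up, not the authors’ actual system: the classifier, function names, and threshold below are all invented placeholders, and a real implementation would use a trained model that also weighs the user’s posting history, as the article notes.

```python
# Illustrative sketch of the quarantine workflow described in the article.
# All names, the toy word-list "classifier," and the threshold are hypothetical.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


def hate_score(post: Post) -> float:
    """Placeholder classifier: fraction of words on a (toy) blocklist.
    The real system would use a trained model plus the author's history."""
    flagged_terms = {"badword1", "badword2"}  # illustrative only
    words = post.text.lower().split()
    hits = sum(word in flagged_terms for word in words)
    return hits / max(len(words), 1)


def deliver(post: Post, threshold: float = 0.1) -> dict:
    """Quarantine decision: below the threshold the post flows through
    normally; at or above it, the recipient gets a warning and must opt in."""
    score = hate_score(post)
    if score < threshold:
        return {"status": "shown", "text": post.text}
    return {
        "status": "quarantined",
        "warning": f"This post may contain hate speech (score {score:.2f}). "
                   "View anyway?",
    }


print(deliver(Post("alice", "a perfectly ordinary post")))
# → {'status': 'shown', 'text': 'a perfectly ordinary post'}
```

The point of the design, per the paper, is that the alert arrives *before* the content is seen, shifting the moderation step from reactive deletion to a recipient-side opt-in.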

The authors note that their “Hate Speech detection” research is “funded by the Humanities and Social Change International Foundation.”
