Following Criticism, Twitter Promises Money to Anyone Who Finds Biases in Its Algorithm

The competition offers prizes of up to $3,500.

Loukia Papadopoulos

San Francisco-based Twitter is introducing the first-ever algorithmic bias bounty challenge, and the initiative could transform the industry.

“Finding bias in machine learning (ML) models is difficult, and sometimes, companies find out about unintended ethical harms once they’ve already reached the public. We want to change that,” wrote Twitter executives Rumman Chowdhury and Jutta Williams in a blog post.

“We’re inspired by how the research and hacker communities helped the security field establish best practices for identifying and mitigating vulnerabilities in order to protect the public. We want to cultivate a similar community, focused on ML ethics, to help us identify a broader range of issues than we would be able to on our own. With this challenge, we aim to set a precedent at Twitter, and in the industry, for proactive and collective identification of algorithmic harms.”

To that end, the firm is sharing its saliency model and the code it uses to crop an image around a predicted maximally salient point, and is asking participants to evaluate the system and root out any potential bias.
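Twitter's released model predicts saliency with a neural network, but the cropping step itself is simple geometry. As a rough illustrative sketch (not Twitter's actual code), cropping a fixed-size window around the maximally salient point might look like this:

```python
import numpy as np

def crop_around_salient_point(image, saliency_map, crop_h, crop_w):
    """Crop a fixed-size window centered on the most salient pixel,
    clamped so the window stays inside the image bounds.

    image: array of shape (H, W, ...); saliency_map: array of shape (H, W).
    """
    h, w = saliency_map.shape
    # Locate the maximally salient point
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    # Center the crop on (y, x), then clamp to the image borders
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Bias challenges like Twitter's ask which pixel such a model deems "most salient" in the first place, since that single choice determines which faces or regions survive the crop.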

Winners will receive $3,500 for first place, $1,000 for second place, $500 for third place, $1,000 for Most Innovative, and $1,000 for Most Generalizable. The contest casts Twitter in a positive light, and it is a move that may very well pay off.

A study published last April found that people still think computer program algorithms are more trustworthy than their fellow humans — especially when the task is challenging. 

However, it should be noted that in September of 2020, Twitter’s photo preview feature was found to potentially have a racial bias. At the time, Twitter thanked users for flagging the issue and said that, while its own testing of the algorithm had found no evidence of racial or gender bias, more work clearly needed to be done.

Winners will be revealed at the DEF CON AI Village on August 8th.
