Twitter to pay hackers to find bias in its automatic image cropping after accusations of racism



Twitter is organizing a competition in the hopes that hackers and researchers will be able to identify biases in its image-cropping algorithm, and it will distribute cash prizes to winning teams. Twitter is hoping that giving teams access to its code and image-cropping model will allow them to find ways the algorithm could be harmful (such as cropping in a way that stereotypes or erases the subject of the picture).
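For context on what the algorithm under scrutiny does, here is a minimal sketch of saliency-based cropping, the general approach Twitter has described for its previews: a model scores how eye-catching each pixel is, and the preview is cropped around the most salient point. The function, the placeholder data, and the saliency map below are illustrative assumptions, not Twitter’s actual code or model.

```python
# Hedged sketch of saliency-based cropping. The saliency map here is a
# random placeholder standing in for the output of a trained saliency model.
import numpy as np

def crop_around_saliency(image: np.ndarray, saliency: np.ndarray,
                         crop_h: int, crop_w: int) -> np.ndarray:
    """Center a crop_h x crop_w crop on the most salient pixel, clamped so
    the crop window stays inside the image bounds."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    img_h, img_w = image.shape[:2]
    top = int(min(max(y - crop_h // 2, 0), img_h - crop_h))
    left = int(min(max(x - crop_w // 2, 0), img_w - crop_w))
    return image[top:top + crop_h, left:left + crop_w]

# Toy usage: a random "photo" and a placeholder saliency map.
photo = np.random.rand(600, 800, 3)
saliency_map = np.random.rand(600, 800)
preview = crop_around_saliency(photo, saliency_map, 300, 600)
print(preview.shape)  # (300, 600, 3)
```

A biased saliency model would systematically place the crop window around some subjects and away from others, which is exactly the kind of behavior the contest asks participants to demonstrate.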

Competitors will be required to submit a description of their findings and a dataset that can be run through the algorithm to demonstrate the problem. Twitter will then award points based on factors such as the type of harm found and the magnitude of its potential impact on people.
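To make that scoring idea concrete, here is a minimal sketch of how a rubric could combine those factors into a single number. The harm categories, base points, and multipliers are assumptions made for this illustration, not the contest’s actual rubric.

```python
# Hypothetical rubric-style scoring; names and weights are illustrative only.
HARM_BASE_POINTS = {
    "stereotyping": 20,   # crop reinforces a stereotype about the subject
    "erasure": 20,        # crop removes the subject entirely
    "denigration": 30,    # crop presents the subject in a demeaning way
}

def score_submission(harm_type: str, affected_fraction: float,
                     likelihood: float) -> float:
    """Combine a base score for the type of harm with multipliers for how
    large a share of users could be affected and how likely the harm is."""
    base = HARM_BASE_POINTS.get(harm_type, 10)
    return base * (1 + affected_fraction) * likelihood

print(score_submission("erasure", affected_fraction=0.4, likelihood=0.8))
```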

The winning team will receive $3,500, and there are separate $1,000 prizes for the most innovative and the most generalizable findings. That amount raised some eyebrows on Twitter, with a few users saying it should have an extra zero. For context, Twitter’s standard bug bounty program would pay you $2,940 for finding a bug that lets you perform actions on someone else’s behalf (like retweeting a tweet or image) via cross-site scripting. Finding an OAuth issue that lets you take over someone’s Twitter account would net you $7,700.

Twitter has already done its own research on its image-cropping algorithm: in May it published findings on how the algorithm was biased, after accusations that its crop previews were racist. Twitter has mostly removed algorithmic crop previews since then, but the algorithm is still in use on desktop, and a good cropping algorithm comes in handy for a business like Twitter.

Opening up a contest allows Twitter to get feedback from a much wider range of perspectives. For example, the Twitter team hosted a Spaces session to discuss the competition, during which a team member mentioned receiving questions about caste-based biases in the algorithm, something that might not be obvious to software developers in California.

It’s not just unintentional algorithmic bias that Twitter is looking for: the rubric assigns point values to both intentional and unintentional harms. Twitter defines unintentional harms as crops that could result from a “well-meaning” user posting a regular image on the platform, while intentional harms are problematic cropping behaviors that could be exploited by someone posting maliciously crafted images.

Twitter states in its announcement blog post that the contest is separate from its bug bounty program; if you submit an algorithmic bias report to Twitter outside of the contest, the company says your report will be closed and marked as not applicable. If you would like to enter, you can head to the HackerOne contest page for the rules, criteria, and more. Submissions are open until August 6 at 11:59PM PT, and challenge winners will be announced at the DEF CON AI Village on August 9.


