Twitter’s photo cropping algorithm prefers young, beautiful, fair-skinned faces




Twitter has announced the results of an open competition to find algorithmic biases in its photo-cropping system. The company disabled automatic cropping in March after experiments by Twitter users last year suggested the algorithm preferred white faces over Black faces. It then launched an algorithmic bug bounty to analyze the problem more closely.

The competition confirmed those earlier findings. The top-placed entry showed that Twitter’s cropping algorithm favors faces that are “thin, young, of fair or warm skin color and smooth skin texture, and with stereotypically feminine facial features.” The second- and third-placed entries showed that the system was biased against people with white or gray hair, suggesting age discrimination, and that it favored English over Arabic script in images.

In a presentation of these results at the DEF CON 29 conference, Rumman Chowdhury, director of Twitter’s META team (which studies machine learning ethics, transparency, and accountability), congratulated the entrants for demonstrating the real-world effects of algorithmic bias.

“When we think about biases in our models, it’s not just about the academic or the experimental […] but how that also works with the way we think in society,” said Chowdhury. “I use the phrase ‘life imitating art imitating life.’ We create these filters because we think that’s what beautiful is, and that ends up training our models and driving these unrealistic notions of what it means to be attractive.”

The winning entry used a GAN to generate faces that varied by skin tone, face width, and masculine versus feminine features.
Image: Bogdan Kulynych

Bogdan Kulynych, a graduate student at EPFL, a research university in Switzerland, took first place in the competition and the $3,500 top prize. Kulynych used an AI program called StyleGAN2 to generate a large number of realistic faces, which he varied by skin color, feminine versus masculine facial features, and slimness. He then fed these variants into Twitter’s photo-cropping algorithm to see which ones it preferred.
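For readers curious about the mechanics, the sketch below illustrates the general counterfactual-probing idea: generate a face, nudge it along a single attribute, and compare the cropping model’s saliency scores for the pair. This is a toy reconstruction, not Kulynych’s actual code; the generator, saliency scorer, and attribute axis are all stand-ins for StyleGAN2, Twitter’s released saliency model, and a learned latent direction.

```python
# Toy sketch of counterfactual probing of a saliency-based cropper.
# All three model components are hypothetical stand-ins so the
# control flow runs as-is; the real entry used StyleGAN2 and
# Twitter's open-sourced saliency model.

import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512

def generate_face(latent):
    # Stand-in for a StyleGAN2 generator: latent vector -> "image" array.
    return np.tanh(latent).reshape(16, 32)

def saliency_score(image):
    # Stand-in for the cropping model: the max predicted saliency
    # decides which region survives the crop.
    return float(image.max())

def shift_attribute(latent, axis, strength):
    # Move the latent code along an attribute direction (in practice,
    # a direction learned from labeled generator outputs, e.g.
    # "lighter skin" or "more stereotypically feminine features").
    return latent + strength * axis

# Placeholder attribute direction; random here, learned in the real study.
lightness_axis = rng.normal(size=LATENT_DIM)

deltas = []
for _ in range(100):
    latent = rng.normal(size=LATENT_DIM)
    base = saliency_score(generate_face(latent))
    edited = saliency_score(
        generate_face(shift_attribute(latent, lightness_axis, 2.0))
    )
    deltas.append(edited - base)

# Each pair differs only along the manipulated attribute, so a
# consistently positive mean delta would indicate the model prefers
# the edited variant -- the signature of a bias.
print(f"mean saliency delta over 100 pairs: {np.mean(deltas):+.4f}")
```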

As Kulynych notes in his summary, these algorithmic biases amplify existing biases in society, literally cropping out “those who do not meet the algorithm’s preferences for body weight, age, skin color.”

Such biases are also more widespread than you might think. Another entrant, Vincenzo di Cicco, who won a special mention for his innovative approach, showed that the cropping algorithm also favors emoji with lighter skin tones over emoji with darker skin tones. The third-place entry, from Roya Pakzad, founder of the tech advocacy organization Taraaz, revealed that the biases extend to written text as well. Pakzad’s work compared memes featuring English and Arabic script, showing that the algorithm regularly cropped images to highlight the English text.

An example of the memes Roya Pakzad used to examine the algorithm’s bias toward English text.
Image: Roya Pakzad
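The measurement behind this kind of test is simple to sketch: place the two scripts in known regions of the same image, ask the model where it would center the crop, and count which region wins over many trials. The snippet below is a rough, runnable illustration of that paired-comparison logic, not Pakzad’s actual code; the saliency map is a random stand-in for the cropping model’s output.

```python
# Toy sketch of a paired script-bias test against a saliency cropper.
# The saliency map here is random, standing in for the real model's
# output, so an unbiased ~50/50 split is expected.

import numpy as np

def crop_center_row(saliency_map):
    # Stand-in for the cropping model: return the row of the most
    # salient point, i.e. where the crop would be centered vertically.
    return int(np.unravel_index(saliency_map.argmax(), saliency_map.shape)[0])

# Imagine a tall meme: English text in the top half (rows 0-49),
# Arabic text in the bottom half (rows 50-99).
H, W = 100, 60
rng = np.random.default_rng(1)

english_wins = 0
for _ in range(200):
    saliency = rng.random((H, W))       # stand-in saliency output
    english_wins += crop_center_row(saliency) < H // 2

# An unbiased model should center on either half about equally often;
# a large skew toward the English half is the effect Pakzad measured.
print(f"crops centered on the English half: {english_wins / 200:.0%}")
```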

While the results of Twitter’s bias competition may seem discouraging, confirming how pervasive societal biases are in algorithmic systems, they also show how tech companies can tackle these problems by opening their systems up to outside scrutiny. “The ability of people entering a competition like this to deep dive into a particular type of harm or bias is something that teams in corporations don’t have the luxury of doing,” Chowdhury said.

Twitter’s open approach stands in contrast to the responses of other tech companies confronted with similar problems. When researchers led by MIT’s Joy Buolamwini uncovered racial and gender biases in Amazon’s facial recognition algorithms, for example, the company mounted a substantial campaign to discredit those involved, calling their work “deceptive” and “false.” After disputing the findings for months, Amazon eventually relented, placing a temporary ban on law enforcement use of those same algorithms.

Patrick Hall, a judge for Twitter’s competition and an AI researcher working on algorithmic discrimination, stressed that such biases exist in all AI systems and that companies need to work proactively to find them. “AI and machine learning are just the Wild West, no matter how skilled your data science team is,” Hall said. “If you can’t find your bugs, or if the bug bounties don’t find your bugs, then who finds your bugs? Because you definitely have some bugs.”


