Facebook announced on Friday that it had disabled a topic recommendation feature after it mistakenly linked Black men with “primates” in a video posted on the social network.
The incident occurred in a video from a UK media outlet, at the end of which Facebook asked users whether they wanted to “keep watching primate videos.”
A company spokesperson called it a “clearly unacceptable bug” and said the referral software involved had been taken offline.
“We apologize to anyone who saw these offensive recommendations,” the company said. “We turned off the entire topic recommendation feature as soon as we realized this was happening so that we could investigate the cause and prevent it from happening again.”
Facial recognition software has been severely criticized by civil rights activists, who point to accuracy problems, particularly with people who are not white.
In recent days, Facebook users who watched a video from a British tabloid featuring Black men received an automatically generated prompt asking whether they wanted to “keep watching primate videos,” according to The New York Times.
Darci Groves, former head of content design at Facebook, shared a screenshot of the recommendation.
“This ‘keep watching’ advice is unacceptable,” Groves wrote on Twitter. “It’s scandalous.”
Facial recognition, the controversy
Facial recognition is a technology heavily developed by companies such as Amazon. AFP photo
Dani Lever, a spokesperson for Facebook, gave more details in a statement, with face-recognizing artificial intelligence at the center of the controversy: “As we said, while we have made improvements to our AI, we know it is not perfect, and we have more progress to make.”
It’s not a new problem. Google, Amazon and other tech companies have been under scrutiny for years for bias within their artificial intelligence systems, especially when it comes to racial issues.
Studies have shown that facial recognition technology is biased against people of color and has a harder time identifying them, leading to incidents where black people have been discriminated against or arrested because of computer errors.
For example, in 2015, Google Photos incorrectly labeled images of Black people as “gorillas.” Google said it was “very sorry” and would do its best to resolve the issue immediately.
More than two years later, Wired discovered that Google’s solution was to censor the word “gorilla” from searches, while also blocking “chimp,” “chimpanzee” and “monkey.”
Artificial intelligence systems used to recognize faces are controversial. EFE photo
Facebook has one of the world’s largest repositories of user-uploaded images on which to train its facial and object recognition algorithms. The company, which tailors content for users based on their previous browsing and viewing habits, sometimes asks people if they wish to continue viewing posts in related categories. It was not clear how widespread prompts such as the “primates” one were.
Racial issues have also caused internal problems at Facebook. In 2016, CEO Mark Zuckerberg urged employees to stop crossing out the phrase “Black Lives Matter” and replacing it with “All Lives Matter” in a common space at the company’s headquarters in Menlo Park, Calif.
Hundreds of employees also staged a virtual walkout last year to protest the company’s handling of a post by President Donald J. Trump about the murder of George Floyd in Minneapolis.