US lawmakers to probe algorithm bias




Image: computer code (Getty Images). Caption: Some technology companies have already faced problems with bias in their code.

US politicians have proposed that computer algorithms be shown to be free from racial, gender, and other biases before they are deployed.

Lawmakers have drafted a bill that would require large technology companies to test prototype algorithms for bias.

Many organizations use coded instructions, or algorithms, for tasks such as showing users relevant ads, analyzing behavior, or sorting data.
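As a purely illustrative sketch (not drawn from the bill's text), one simple way such a system could be checked for bias is a demographic parity test, which compares how often different groups receive a favorable outcome. The column names "group" and "selected" and the sample data below are hypothetical.

```python
# Minimal sketch of a demographic parity check (illustrative only).
# Assumes a pandas DataFrame with hypothetical columns:
#   "group"    - a protected attribute such as gender
#   "selected" - 1 if the algorithm approved the person, 0 otherwise
import pandas as pd


def selection_rates(df: pd.DataFrame) -> pd.Series:
    """Share of positive outcomes for each group."""
    return df.groupby("group")["selected"].mean()


def demographic_parity_gap(df: pd.DataFrame) -> float:
    """Difference between the highest and lowest group selection rates.

    A large gap suggests the algorithm may be treating groups unequally.
    """
    rates = selection_rates(df)
    return float(rates.max() - rates.min())


if __name__ == "__main__":
    # Hypothetical audit data: outcomes of an automated screening tool.
    data = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "selected": [1,    1,   0,   0,   0,   1],
    })
    print(selection_rates(data))
    print("Demographic parity gap:", demographic_parity_gap(data))
```

A real audit of the kind the bill envisions would go well beyond a single metric, but a check like this shows the basic idea of measuring whether outcomes differ across groups.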

Critics have said the bill could limit the benefits of artificial intelligence.

The rules would apply to companies generating more than $50m (£38m) in annual revenue or holding data on more than one million people.


Democratic Senator Ron Wyden, who helped draft the bill, said it was needed because computer algorithms were "more and more involved" in the lives of Americans.

"But instead of eliminating bias, these algorithms too often rely on biased badumptions or data that can actually reinforce discrimination against women and people of color," he said. declared.

A statement highlighting the need for the bill cited as evidence an algorithm used by Amazon to help recruit staff, which discriminated against women. That algorithm has since been abandoned.

Last month, the US Department of Housing and Urban Development sued Facebook for allowing advertisers to restrict who could see housing ads on the basis of their race, religion, or nationality.

The bill has been criticized by the industry group the Information Technology and Innovation Foundation.

Daniel Castro, a spokesman for the foundation, said the bill would only "stigmatize" AI and discourage its use.

"Keeping algorithms at a higher level than human decisions means that automated decisions are intrinsically less reliable and more dangerous than human decisions, which is not the case," he said in a statement.
