Money, mimicry and mind control: Big Tech slams the ethics brakes on AI




SAN FRANCISCO, September 8 (Reuters) – In September last year, Google’s (GOOGL.O) cloud unit looked into using artificial intelligence to help a financial firm decide whom to lend money to.

It turned down the client’s idea after weeks of internal discussions, deeming the project too ethically risky because the AI technology could perpetuate biases like those around race and gender.

Since early last year, Google has also blocked new AI features that analyze emotions, fearing cultural insensitivity, while Microsoft (MSFT.O) has restricted software that mimics voices and IBM (IBM.N) has rejected a client request for an advanced facial-recognition system.

All of these technologies were curbed by panels of executives or other leaders, according to interviews with the AI ethics chiefs of the three US tech giants.

Reported here for the first time, their vetoes and the deliberations that led to them reflect an emerging industry-wide desire to balance the pursuit of lucrative AI systems with a greater consideration of social responsibility.

“There are opportunities and harms, and our job is to maximize the opportunities and minimize the harms,” said Tracy Pizzo Frey, who sits on two ethics committees at Google Cloud as its managing director for responsible AI.

Judgments can be difficult.

Microsoft, for example, had to weigh the benefit of using its voice-mimicry technology to restore impaired people’s speech against risks such as enabling political deepfakes, said Natasha Crampton, the company’s chief responsible AI officer.

Human rights activists argue that decisions with potentially broad consequences for society should not be made internally alone. They argue that ethics committees cannot be truly independent and that their public transparency is limited by competitive pressures.

Jascha Galaski, advocacy officer at the Civil Liberties Union for Europe, views external oversight as the way forward, and US and European authorities are indeed drawing up rules for the fledgling area.

If corporate AI ethics boards “became really transparent and independent – and this is all very utopian – then it could be even better than any other solution, but I don’t think that’s realistic,” said Galaski.

Companies have said they would welcome clear rules on the use of AI, and that such regulation is essential for both customer and public confidence, much as car safety rules are. They said it was also in their financial interest to act responsibly.

However, they want any rules to be flexible enough to keep up with innovation and the new dilemmas it creates.

Among the complex considerations to come, IBM told Reuters that its AI ethics committee has begun discussing how to police an emerging frontier: implants and wearable devices that connect computers to the brain.

Such neurotechnologies could help people with disabilities control their movements, but raise concerns such as the possibility of hackers manipulating thoughts, said Christina Montgomery, IBM’s chief privacy officer.

AI CAN SEE YOUR PAIN

Tech companies admit that just five years ago they were launching AI services like chatbots and photo tagging with few ethical safeguards, tackling misuse or biased results with subsequent updates.

But as political and public scrutiny of AI failures increased, Microsoft in 2017 and Google and IBM in 2018 created ethics boards to review new services from the start.

Google said its money-lending dilemma arose last September, when a financial services company figured AI could assess people’s creditworthiness better than other methods.

The project seemed well suited to Google Cloud, whose expertise in developing AI tools that help in areas such as detecting abnormal transactions has attracted clients like Deutsche Bank (DBKGn.DE), HSBC (HSBA.L) and BNY Mellon (BK.N).

The Google unit anticipated that AI-based credit scoring could become a market worth billions of dollars a year and wanted a foothold.

However, its ethics committee of around 20 managers, social scientists and engineers who review potential deals voted unanimously against the project at an October meeting, Pizzo Frey said.

The AI system would need to learn from past data and patterns, the committee concluded, and thus risked repeating discriminatory practices from around the world against people of color and other marginalized groups.
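The committee’s worry is essentially statistical: a model fitted to historical lending decisions inherits whatever disparities those decisions contained. As a rough illustration only – the data, group names and metric below are invented, not anything Google used – a reviewer might start by comparing a model’s approval rates across demographic groups, a common first-pass fairness check:

```python
# Hypothetical fairness check: compare a credit model's approval rates
# across two demographic groups. The records are made up for this sketch.
approvals = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(records, group):
    """Share of applicants in `group` that the model approved."""
    decisions = [approved for g, approved in records if g == group]
    return sum(decisions) / len(decisions)

gap = approval_rate(approvals, "group_a") - approval_rate(approvals, "group_b")

# A demographic-parity gap near zero means similar approval rates;
# a large gap is one warning sign that historical bias has been learned.
print(f"demographic-parity gap: {gap:.0%}")  # -> 50% for this toy data
```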

Instead, the committee, known internally as “Lemonaid”, enacted a policy to skip all financial services deals related to creditworthiness until these concerns could be resolved.

Lemonaid had rejected three similar proposals over the previous year, including from a credit card company and a commercial lender, and Pizzo Frey and her counterpart in sales had been eager for a broader ruling on the issue.

Google also said that its second Cloud ethics committee, known as Iced Tea, this year placed under review a service released in 2015 for categorizing photos of people by four expressions: joy, sadness, anger and surprise.

The move followed a ruling last year by Google’s company-wide ethics panel, the Advanced Technology Review Council (ATRC), holding back new services that read emotion.

The ATRC – more than a dozen senior executives and engineers – determined that inferring emotions can be insensitive because facial cues are associated differently with feelings across cultures, among other reasons, said Jen Gennai, founder and head of Google’s responsible innovation team.

Iced Tea has blocked 13 planned emotions for the Cloud tool, including embarrassment and contentment, and could soon drop the service altogether in favor of a new system that would describe movements such as frowning and smiling without seeking to interpret them, Gennai and Pizzo Frey said.
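The distinction Iced Tea is weighing can be sketched in code. In the hypothetical types below – invented for illustration, not Google’s actual API – the first output asserts an inner emotional state, while the second reports only an observable movement and leaves interpretation to the caller:

```python
from dataclasses import dataclass

@dataclass
class EmotionLabel:
    """Old approach: infers a feeling, e.g. "joy" -- culturally loaded."""
    emotion: str
    confidence: float

@dataclass
class MovementDescription:
    """Proposed approach: names only the visible action, e.g. "smiling"."""
    movement: str
    confidence: float

# The same face yields two very different claims about the person.
inferred = EmotionLabel(emotion="joy", confidence=0.9)
observed = MovementDescription(movement="smiling", confidence=0.9)
print(inferred)
print(observed)
```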

VOICES AND FACES

Microsoft, meanwhile, developed software that could reproduce someone’s voice from a short sample, but the company’s Sensitive Uses panel then spent more than two years debating the ethics around its use and consulted the company’s president, Brad Smith, Crampton told Reuters.

She said the panel – specialists in fields such as human rights, data science and engineering – finally gave the green light for the full release of Custom Neural Voice in February of this year. But it placed restrictions on its use, including that subjects’ consent be verified and that a team of “responsible AI champs” trained on company policy approve purchases.

IBM’s AI Ethics Board, comprising some 20 department leaders, faced its own dilemma when, early in the COVID-19 pandemic, it examined a client request to customize facial-recognition technology to spot fevers and face coverings.

Montgomery said the board, which she co-chairs, declined the project, concluding that manual checks would suffice with less intrusion on privacy because the photos would not be retained for any AI database.

Six months later, IBM announced that it was stopping its facial recognition service.

AMBITIONS ON HOLD

In an effort to protect privacy and other freedoms, lawmakers in the European Union and the United States are pursuing far-reaching controls over AI systems.

The EU’s Artificial Intelligence Act, on track to be passed as soon as next year, would bar real-time facial recognition in public spaces and require tech companies to vet high-risk applications, such as those used in hiring, credit scoring and law enforcement.

US Congressman Bill Foster, who has held hearings on how algorithms advance discrimination in financial services and housing, said new laws governing AI would ensure a level playing field for providers.

“When you ask a company to take a hit on profits to accomplish societal goals, they say: ‘What about our shareholders and our competitors?’ That’s why you need sophisticated regulation,” said the Illinois Democrat.

“There may be areas that are so sensitive that you will see tech companies deliberately staying out until there are clear rules of the road.”

Indeed, some AI advances may simply be put on hold until companies can counter the ethical risks without devoting enormous engineering resources.

After Google Cloud turned down the custom financial AI request last October, the Lemonaid committee told the sales team that the unit aims to start developing credit-related applications someday.

First, research into combating unfair biases must catch up with Google Cloud’s ambitions to increase financial inclusion through the “highly sensitive” technology, the committee said in the policy it circulated to staff.

“Until then, we are unable to deploy solutions.”

Reporting by Paresh Dave and Jeffrey Dastin; Editing by Kenneth Li and Pravin Char
