When artificial intelligence knows too much (or too little) about you

"Know thyself."

That's what philosopher, historian and bestselling author Yuval Noah Harari said during an on-stage talk at Stanford University last week. The prolific writer (and provocateur) has long criticized artificial intelligence applications that track, aggregate and learn from our every move, gleaning information about us that we are sometimes unaware of ourselves.

"Know you better," Harari said in more modern English, "because now you have competition."

The event was co-hosted by the Stanford Institute for Human-Centered Artificial Intelligence, which aims to develop AI technologies for the benefit of humanity. The conversation also featured computer science professor Fei-Fei Li, a pioneer of AI research and co-director of the multidisciplinary institute. Both speakers focused on what the future holds for AI and how it could be made to "support rather than subvert" human interests. Unsurprisingly, Harari and Li did not always agree on the best way forward, nor on the extent and severity of the damage AI could cause.

One of Li's suggestions, for example, was to develop AI systems that can explain their processes and decisions. But Harari countered that these technologies have become too complex to be explained, and that this complexity can undermine our autonomy and authority.

Although the conversation was mostly cordial and productive, there were a few friendly jabs.

"I am very envious of philosophers because they can come up with questions and crises, but they do not have to answer them," said Li. (Even Harari chuckled.)

Perhaps Harari chuckled because he knew he was about to offer solutions, even though his simplest formula, "know yourself," is easier said than done. An anecdote the philosopher shared illustrates the challenge of knowing ourselves better than AI systems can. Harari told the audience that he had not realized he was gay until the age of 21. "I'm with myself 24 hours a day," he said. Yet an AI system could have inferred his sexual identity faster than he did.

"What does it mean to live in a world where you can learn something as important about yourself through an algorithm?", He asked the audience. "And if this algorithm does not share [this information] with you but with others, advertisers or an authoritarian regime?

The risks of AI knowing too much

These concerns are real, and they are beginning to be addressed, both by outside critics like Harari and, increasingly, by engineers, educators and other insiders like Li. But what about the reverse risk, when AI systems know too little about us, or about entire demographics?

Also last week, I attended Women Transforming Technology, an event held on the Palo Alto campus of technology company VMware. Joy Buolamwini, a researcher at the Massachusetts Institute of Technology's Media Lab, discussed issues of bias in AI applications. Buolamwini's work has focused on the failure of facial recognition systems to accurately identify the faces of women and, to an even greater extent, of people of color. As you can probably guess, these systems struggle most with the faces of women of color.

"These are the sub-sampled majority of the world – women and people of color, "Buolamwini told his audience.

The bias in many facial recognition applications starts with the datasets used to train these AI systems. According to Buolamwini, the vast majority of images fed into these machine learning systems depict white, male subjects. The benchmarks used to assess the accuracy of these systems are therefore also optimized for white, male faces. This has broad and potentially dangerous implications: imagine an autonomous vehicle that does not detect a dark-skinned pedestrian as reliably as it can "see" a light-skinned one.
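Buolamwini's point can be made concrete with a toy calculation. The sketch below uses invented numbers, not data from her research, to show how a benchmark dominated by one demographic can report a reassuring aggregate accuracy while a minority subgroup fares far worse; the group names and figures are hypothetical.

```python
# Minimal sketch (illustrative numbers only, not real benchmark data):
# why aggregate accuracy hides subgroup failures in a skewed dataset.
from collections import Counter

# Hypothetical benchmark results: (demographic group, prediction correct?).
# The dataset is heavily skewed toward lighter-skinned male faces.
samples = (
    [("lighter_male", True)] * 780 + [("lighter_male", False)] * 20
    + [("lighter_female", True)] * 90 + [("lighter_female", False)] * 10
    + [("darker_male", True)] * 45 + [("darker_male", False)] * 5
    + [("darker_female", True)] * 30 + [("darker_female", False)] * 20
)

correct = Counter(group for group, ok in samples if ok)  # correct per group
total = Counter(group for group, _ in samples)           # samples per group

# Single-number benchmark: dominated by the over-represented group.
overall = sum(correct.values()) / len(samples)
print(f"aggregate accuracy: {overall:.1%}")

# Per-group reporting reveals the disparity the aggregate conceals.
for group in total:
    print(f"{group:15s} {correct[group] / total[group]:.1%}")
```

On these made-up numbers, the aggregate score is 94.5 percent even though the system fails two of every five darker-skinned female faces; that gap is exactly what a single benchmark number conceals, and why per-group evaluation is the first step toward exposing this kind of bias.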

These are the kinds of risks that drove Buolamwini to create the Algorithmic Justice League, an organization designed to highlight and reduce bias in AI systems. The "collective", as the researcher calls it, brings together coders, activists, regulators and others to raise awareness of these important technological and societal issues.

Buolamwini's work is probably already driving improvements. During her talk, she pointed to recent gains in the accuracy with which facial recognition systems from IBM, Facebook and other companies detect non-white and non-male subjects. But here's the rub: while Buolamwini clearly calls for these systems to keep improving, she is also deeply concerned about applications of facial recognition technology that find out more about all of us.

"You can get accurate face recognition and put it on some drones, but it may not be the world you want to live in," Buolamwini told me after an interview afterwards. the speech.

Buolamwini offered another example: if a system is biased and used for law enforcement purposes, its use cannot be justified. Now suppose the bias has been corrected. The question then becomes, in Buolamwini's words, "Do we want to live in a state of mass surveillance?"

It's a question I'm fairly sure Buolamwini, Harari and Li would all answer the same way: no.
