The ethics of smart devices that analyze how we talk




Summary

As smart assistants and voice interfaces become more widespread, we are giving away a new form of personal data: our speech. And it goes well beyond the words we say out loud. Speech is at the heart of our social interactions, and we unintentionally reveal a great deal about ourselves when we speak. When people hear a voice, they immediately start picking up on accent and intonation and making assumptions about the speaker's age, education, personality, and so on. But what happens when machines start to analyze the way we talk? The big tech companies are reluctant to say exactly what they plan to detect and why, but Amazon has a patent that lists a range of traits it might collect, including identity ("Sex, age, ethnicity, etc."), health ("Sore throat, illness, etc."), and feelings ("Happy, sad, tired, sleepy, excited, etc."). This is worrying, because algorithms are imperfect. And voice is particularly difficult to analyze, because the signals we give off are inconsistent and ambiguous. Worse still, the inferences that even humans make from speech are distorted by stereotypes. In business, we are used to being careful about what we write in emails, in case the contents get leaked. We need to develop a similar wariness about holding sensitive conversations near connected devices. The only really safe device to speak in front of is one that is switched off.

Carmen Martínez Torrón / Getty Images

As smart assistants and voice interfaces become more widespread, we are giving away a new form of personal data: our speech. And it goes well beyond the words we say out loud.

Speech is at the heart of our social interactions, and we unintentionally reveal a great deal about ourselves when we speak. When people hear a voice, they immediately start picking up on accent and intonation and making assumptions about the speaker's age, education, personality, and so on. Humans do this so that we can guess how best to respond to the person speaking.

But what happens when machines start to analyze how we speak? The big tech companies are reluctant to say exactly what they plan to detect and why, but Amazon has a patent that lists a range of traits it might collect, including identity ("Sex, age, ethnicity, etc."), health ("Sore throat, illness, etc."), and feelings ("Happy, sad, tired, sleepy, excited, etc.").

It worries me, and it should worry you too, because algorithms are imperfect. And voice is particularly difficult to analyze, because the signals we give off are inconsistent and ambiguous. Worse still, the inferences that even humans make are distorted by stereotypes. Take the example of identifying sexual orientation. There is a style of speaking, with a raised pitch and exaggerated intonation, that some people assume signals a gay man. But the inference often fails, because some heterosexual men speak this way and many homosexual men do not. Scientific experiments show that human listeners' "gaydar" is accurate only about 60% of the time. Studies of machines attempting to detect sexual orientation from facial images have reported success rates of around 70%. Sound impressive? Not to me, because it means those machines are wrong 30% of the time. And I would predict that success rates will be even lower for voices, because the way we speak changes depending on whom we are talking to. Our vocal anatomy is very flexible, which allows us to be vocal chameleons, unconsciously shifting our voices to fit in better with the person we are speaking to.
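To see why a headline accuracy figure can still translate into a great deal of harm, here is a minimal back-of-the-envelope sketch in Python. The population size and base rate are illustrative assumptions of mine, not figures from the studies cited; the point is simply that with an imbalanced population, even "70% accuracy" means most of the people the system flags are flagged wrongly.

```python
# A back-of-the-envelope sketch (illustrative numbers, not from the article)
# showing why a classifier that is "70% accurate" still mislabels large
# numbers of people, especially when the inferred trait is relatively rare.

def misclassification_counts(population: int, base_rate: float, accuracy: float):
    """Return (false_positives, false_negatives) for a classifier assumed,
    for simplicity, to be equally accurate on both groups."""
    in_group = population * base_rate
    out_group = population - in_group
    false_negatives = in_group * (1 - accuracy)   # group members the model misses
    false_positives = out_group * (1 - accuracy)  # non-members wrongly flagged
    return false_positives, false_negatives

if __name__ == "__main__":
    # Assumed inputs: 100,000 speakers, a 5% base rate, and the ~70% accuracy
    # figure cited for the face-image studies.
    fp, fn = misclassification_counts(100_000, 0.05, 0.70)
    print(f"Wrongly flagged: {fp:,.0f}")   # 28,500 people labelled incorrectly
    print(f"Wrongly missed:  {fn:,.0f}")   # 1,500 people missed
    # Of everyone the model flags, the large majority are false positives.
```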


We should also be concerned about companies collecting imperfect information on the other traits mentioned in Amazon's patent, including gender and ethnicity. The speech samples used to train machine learning applications will teach those systems the biases of society. This has already been seen in related technologies. Type the Turkish sentences "O bir hemşire. O bir doktor." into Google Translate and you get "She is a nurse. He is a doctor." Even though "o" is a gender-neutral third-person pronoun in Turkish, the system presumes the doctor is a man and the nurse is a woman, because the data used to train the translation algorithm is skewed by gender stereotypes about medical jobs. Such problems extend to race as well: one study showed that in typical data used for machine learning, African American names appear more often alongside unpleasant terms such as "hate," "poverty," and "ugly," while European American names tend to appear more often with pleasant words such as "love," "lucky," and "happy."
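A toy sketch of the underlying mechanism may help. The miniature corpus and counting scheme below are my own illustration, not the method of the cited studies; they simply show how skewed co-occurrence statistics in training text turn into skewed associations in whatever model learns from them.

```python
# Toy illustration (assumptions mine): biased co-occurrence in training text
# becomes biased word associations.
from collections import Counter
from itertools import combinations

# A deliberately skewed miniature "corpus": one sentence per line.
corpus = [
    "the doctor said he would operate",
    "the doctor said he was busy",
    "the nurse said she would help",
    "the nurse said she was kind",
]

def cooccurrence(sentences):
    """Count how often each pair of words appears in the same sentence."""
    counts = Counter()
    for sentence in sentences:
        for a, b in combinations(sorted(set(sentence.split())), 2):
            counts[(a, b)] += 1
    return counts

def assoc(word, pronoun, counts):
    """How strongly a word co-occurs with a given pronoun in this corpus."""
    return counts.get(tuple(sorted((word, pronoun))), 0)

counts = cooccurrence(corpus)
for word in ("doctor", "nurse"):
    print(word, "with 'he':", assoc(word, "he", counts),
          "| with 'she':", assoc(word, "she", counts))
# "doctor" co-occurs only with "he", "nurse" only with "she" -- a model
# trained on this text will simply reproduce that skew.
```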

Big technology companies want voice devices to work better, which means understanding how things are said, not just what is said. After all, the meaning of a simple phrase such as "I'm fine" changes completely if the tone shifts from neutral to angry. But where will they draw the line? For example, a smart assistant that detects anger could start to infer how well you get on with your spouse from the tone of your voice. Will Google then start showing ads for marriage counseling when it detects a strained relationship? I'm not saying anyone will set out to do that deliberately. The trouble with these complex machine learning systems is that such problems tend to emerge unintentionally and unexpectedly. Among the other mistakes an AI might make is detecting a strong accent and inferring that the speaker is less well educated, because the training data has been skewed by society's stereotypes. That could then lead a smart speaker to respond in a simplified way to people with strong accents. Technology companies need a much better understanding of how to keep such biases out of their systems. There are already worrying examples of voice analytics being used on insurance claim lines to try to detect potentially fraudulent claims. The British government wasted £2.4 million on a voice-based lie detection system that was scientifically incapable of working.
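For a sense of how thin the acoustic evidence behind "how it was said" can be, here is a deliberately crude sketch. The features, threshold, and function names are my own assumptions, not any vendor's method; real systems use learned models over much richer features, but the judgement still rests on proxies such as loudness and pitch.

```python
# A crude illustration (my own, not any product's algorithm) of the kind of
# acoustic proxy a "how was it said" system might lean on: louder, higher-
# energy speech is often treated as a sign of arousal or anger.
import numpy as np

def frame_energies(samples: np.ndarray, rate: int, frame_ms: int = 25) -> np.ndarray:
    """Split audio into fixed-length frames and return RMS energy per frame."""
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def sounds_agitated(samples: np.ndarray, rate: int, energy_threshold: float = 0.3) -> bool:
    """Naive heuristic: flag speech whose median frame energy is high.
    Meant only to show how little signal such a judgement can rest on."""
    return float(np.median(frame_energies(samples, rate))) > energy_threshold

if __name__ == "__main__":
    rate = 16_000
    t = np.linspace(0, 1, rate, endpoint=False)
    quiet = 0.1 * np.sin(2 * np.pi * 180 * t)   # soft, low-pitched tone
    loud = 0.8 * np.sin(2 * np.pi * 320 * t)    # loud, higher-pitched tone
    print(sounds_agitated(quiet, rate), sounds_agitated(loud, rate))  # False True
```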

A final problem is that many people seem to let their guard down around these devices. Amazon has already noted that many people have real conversations with Alexa, often telling the device how they feel and even going as far as professing their love for the technology: "Alexa, I love you." Giving a device a name suggests agency, which makes it more likely that we will anthropomorphize the technology and feel safe revealing sensitive information to it. It is probably only a matter of time before a serious breach of voice data occurs. For this reason, researchers are just beginning to develop algorithms that try to filter out sensitive information. For example, a device could be set to mute its microphone when you mention the name of your bank, to stop you accidentally revealing account details, or when the conversation turns to sexual matters.
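A minimal sketch of that keyword-triggered muting idea might look like the following. The trigger list, function names, and processing loop are purely illustrative assumptions, not the researchers' actual algorithms.

```python
# Sketch of keyword-triggered muting (illustrative assumptions throughout):
# once a sensitive trigger word is recognised, stop forwarding audio upstream.
SENSITIVE_TRIGGERS = {"bank", "password", "pin", "account number"}

def should_mute(transcript_fragment: str, triggers=SENSITIVE_TRIGGERS) -> bool:
    """Return True if the latest fragment of recognised speech contains a
    trigger word, signalling the device to stop streaming audio."""
    text = transcript_fragment.lower()
    return any(trigger in text for trigger in triggers)

def stream_audio(fragments):
    """Toy processing loop: forward fragments until a trigger is heard."""
    muted = False
    for fragment in fragments:
        if should_mute(fragment):
            muted = True
        if muted:
            print("[microphone muted]")
        else:
            print("forwarding:", fragment)

stream_audio([
    "turn on the lights",
    "call my bank about the account number",
    "the code is 4321",
])
# Once "bank" is heard, nothing further leaves the device in this sketch.
```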

What are consumers' attitudes toward privacy when it comes to smart assistants? The only published study I could find on the subject comes from the University of Michigan. It showed that owners of these devices are not especially concerned about handing more data to gatekeepers such as Google and Amazon. "I find this really disturbing," said one of the study's authors, Florian Schaub. "These technologies are gradually eroding people's expectations of privacy." Current privacy controls simply do not meet users' needs. Most of those interviewed did not even know that their data was being analyzed for targeted advertising, and when they found out, they did not like their voice commands being used that way.

But consumers can also subvert the technology for their own purposes. In the University of Michigan study, one participant reviewed the audio logs of his Amazon Echo to see what the housekeepers were doing with the device. These devices may also open up new avenues of persuasion in the future. If you think your washing machine needs replacing but your partner disagrees, do a few voice searches for possible models near the smart speaker, and your partner may soon be bombarded with ads for new machines.

In business, we are used to being careful about what we write in emails, in case the contents get leaked. We need to develop a similar wariness about holding sensitive conversations near connected devices. The only really safe device to speak in front of is one that is switched off.
