Alexa. Cortana. Google Assistant. Bixby. Siri. Hundreds of millions of people use voice assistants developed by Amazon, Microsoft, Google, Samsung, and Apple every day, and that number continues to grow. According to a recent study by the technology publication Voicebot, 90.1 million U.S. adults use a voice assistant on their smartphone at least once a month, while 77 million use one in their car and 45.7 million on smart speakers. Juniper Research predicts that the use of voice assistants will roughly triple, from 2.5 billion assistants in 2018 to 8 billion in 2023.
What most users do not realize is that recordings of their voice requests are not immediately erased. Instead, they can be stored for years, and in some cases they are reviewed by humans for quality assurance and feature development. We asked the major voice assistant makers how they handle data collection and review, and we combed through their privacy policies for additional clues.
Amazon
Amazon says it annotates a "very small sample" of Alexa voice recordings to improve the customer experience, for example to train its speech recognition and natural language understanding systems so "[that] Alexa can better understand" requests. It uses third parties to review these recordings, but it says it has "strict technical and operational safeguards" in place to prevent abuse, and that these workers do not have direct access to identifying information such as account numbers, first names, or device serial numbers.
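To make the annotation step concrete, here is a minimal, purely illustrative sketch of what a reviewed utterance record might look like once a person has labeled it for speech recognition and intent training. The field names, intent label, and slot structure are assumptions for illustration, not Amazon's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AnnotatedUtterance:
    """Hypothetical record produced when a reviewer labels a voice request."""
    audio_ref: str                    # pointer to the stored recording (no account details)
    asr_transcript: str               # what the speech recognizer heard
    corrected_transcript: str         # reviewer's corrected transcript
    intent: str                       # labeled intent used to train the NLU system
    slots: dict = field(default_factory=dict)    # structured arguments extracted from the request
    response_was_correct: Optional[bool] = None  # quality-assurance judgment on the assistant's reply

# Example: a reviewer corrects a misrecognized weather request.
example = AnnotatedUtterance(
    audio_ref="utterance_000123.wav",
    asr_transcript="what's the weather in austin taxes",
    corrected_transcript="what's the weather in austin texas",
    intent="GetWeatherForecast",
    slots={"city": "Austin", "state": "TX"},
    response_was_correct=False,
)
print(example.intent, example.slots)
```

Records like this, in whatever form a given company actually uses, are what human reviewers produce and what the recognition models are retrained on.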
"All information is treated with great confidentiality and we use multi-factor authentication to limit access, encryption of services and audits of our control environment to protect it," said a spokesman. In a statement.
On its web and app settings pages, Amazon gives users the option to disable the use of their voice recordings for feature development. Users who opt out, the company adds, may still have their recordings analyzed manually during the review process.
Apple
Apple describes the process by which audio recorded by Siri is reviewed in a white paper on its privacy page. In it, the company explains that human "graders" review and label a small subset of Siri data for development and quality assurance purposes, and that each grader rates the quality of responses and indicates the correct action. Those labels feed recognition systems that "continually" improve Siri's quality.
Apple adds that the utterances set aside for review are encrypted and anonymized, and are not associated with users' names or identities. In addition, human reviewers are not given the random user IDs (which are refreshed every 15 minutes) tied to the recordings. Apple stores these voice recordings for six months, during which time they are used by Siri's recognition systems to "better understand" users' voices. After six months, copies are saved (without identifiers) for use in improving and developing Siri for up to two years.
Apple lets users opt out of Siri entirely, or rely only on the on-device "Type to Siri" feature for typed queries. However, a "small subset" of recordings, transcripts, and associated data stripped of identifiers may continue to be used for Siri's ongoing improvement and quality assurance beyond the two-year window.
Google
A Google spokesperson told VentureBeat that the company transcribes "a very limited fraction of audio" to improve its speech recognition systems, and that it applies "a wide range of techniques to protect the privacy of users." It does not associate any personally identifiable information with the audio, and the transcription is largely automated rather than handled by Google employees. In cases where it uses a third-party service to review data, the company says it "usually" provides the text, but not the audio.
Google also says it is moving toward techniques that do not require human labeling, and it publishes research to that end. In text-to-speech (TTS), for example, its Tacotron 2 system can build speech synthesis models based solely on spectrograms, while its WaveNet system generates models from raw waveforms.
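As a rough illustration of that spectrogram-based pipeline, here is a minimal sketch using torchaudio's publicly available pretrained Tacotron 2 text-to-speech bundle. It pairs Tacotron 2 with a WaveRNN vocoder (standing in for WaveNet, which torchaudio does not ship); the bundle name and flow reflect torchaudio's public TTS pipeline, not Google Assistant's internal stack.

```python
import torch
import torchaudio

# Pretrained Tacotron 2 + WaveRNN pipeline from torchaudio (trained on LJSpeech, character input).
bundle = torchaudio.pipelines.TACOTRON2_WAVERNN_CHAR_LJSPEECH
processor = bundle.get_text_processor()   # text  -> character IDs
tacotron2 = bundle.get_tacotron2()        # IDs   -> mel spectrogram
vocoder = bundle.get_vocoder()            # mel spectrogram -> waveform

text = "Voice assistants keep getting better at understanding speech."
with torch.inference_mode():
    tokens, lengths = processor(text)
    spec, spec_lengths, _ = tacotron2.infer(tokens, lengths)
    waveforms, wave_lengths = vocoder(spec, spec_lengths)

# Save the synthesized audio at the vocoder's native sample rate.
torchaudio.save("tts_sample.wav", waveforms[0:1].cpu(), sample_rate=vocoder.sample_rate)
```

The point of interest for the privacy discussion is that systems like this are trained from spectrograms and waveforms rather than from hand-labeled transcriptions, which is one way to reduce the amount of human review needed.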
Google stores audio clips recorded by Google Assistant indefinitely. However, like Amazon and Apple, it lets users permanently delete these recordings and opt out of future data collection, at the cost of a less personalized assistant and, of course, the voice search experience. That said, it is worth noting that in its privacy policy, Google states that it "may retain service information" to "prevent spam and abuse" and to "improve [its] services."
Microsoft
When we asked for comment, a Microsoft representative pointed us to a support page describing the company's privacy practices regarding Cortana. The page says the company collects voice data to "[enhance] Cortana's understanding of each user's speech habits," to "continue to improve" Cortana's recognition and responses, and to "improve" other products and services that use speech recognition and intent understanding.
The page does not specify whether Microsoft employees or contractors manually review this data, or how the data is anonymized, but the company notes that when the always-listening "Hey Cortana" feature is enabled on compatible laptops and desktops, Cortana collects voice input only after it hears its wake phrase.
Microsoft lets users disable voice data collection, personalization, and speech recognition through an online dashboard or the speech settings page in Windows 10. As expected, disabling speech recognition prevents Cortana from responding to spoken requests. But like Google Assistant, Cortana still recognizes typed commands.
Samsung
Samsung did not immediately respond to a request for comment, but the FAQ page on its Bixby support website describes how it collects and uses voice data. According to Samsung, it uses voice commands and conversations (along with information about operating system versions, device configurations and settings, IP addresses, device IDs, and other unique identifiers) to "enhance" and personalize product experiences, and it draws on conversation history to help Bixby better understand users' pronunciations and speech patterns.
At least some of these "enhancements" come from an undisclosed "third-party service" that provides speech-to-text conversion services in accordance with Samsung's privacy policy. The company notes that this provider may receive and store certain voice commands. And although Samsung does not say exactly how long it stores commands, it does indicate that its retention policies take into account "statute[s] of limitations" and "at least the duration of [a person's] use" of Bixby.
You can delete Bixby conversations and recordings via the Bixby Home app on Samsung Galaxy devices.