Microsoft Has Obtained a Patent to Turn You Into a Chatbot



Photo: Stan Honda (Getty Images)

What if the most significant measure of your life’s work had nothing to do with your lived experience, but was instead your unwitting generation of a realistic digital clone of yourself, an ancient human specimen kept around for the entertainment of people in the year 4500, long after you’ve shuffled off this mortal coil? That is the least horrifying question raised by a recently granted Microsoft patent for an individualized chatbot.

First spotted by The Independent, the patent application was filed in 2017 but approved just last month. The United States Patent and Trademark Office confirmed to Gizmodo via email that Microsoft is not yet permitted to make, use, or sell the technology, only to prevent others from doing so.

The hypothetical chatbot You (imagined in detail here) would be trained on “social data,” which includes public posts, private messages, voice recordings, and video. It could take a 2D or 3D form. It could be of a “past or present entity”: a “friend, a relative, an acquaintance, [a] celebrity, a fictional character, a historical figure” and, disturbingly, “a random entity.” (That last one, we might guess, could be a talking version of the machine-generated photorealistic portrait library ThisPersonDoesNotExist.) The technology might even allow you to record yourself at a “certain phase of life” to communicate with yourself in the future.
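
For illustration only, here is a minimal sketch of those ingredients as a data structure. Every name below is our own invention for the sake of the example, not anything drawn from Microsoft’s patent:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class SocialData:
    """The input types the patent names: posts, messages, voice, and video."""
    public_posts: List[str] = field(default_factory=list)
    private_messages: List[str] = field(default_factory=list)
    voice_recordings: List[bytes] = field(default_factory=list)
    videos: List[bytes] = field(default_factory=list)


@dataclass
class ChatbotPersona:
    """One imitable 'entity' -- hypothetical fields throughout."""
    name: str
    entity_type: str  # e.g. "friend", "celebrity", "historical figure", "random entity"
    rendering: str    # "2D" or "3D"
    life_phase: str   # the "certain phase of life" being recorded
    social_data: SocialData
```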

Personally, I take comfort in the thought that my chatbot would be useless thanks to my limited textual vocabulary (“omg,” “OMG,” “OMG HAHAHAHA”), but the minds at Microsoft considered that. The chatbot could form opinions you don’t have and answer questions you’ve never been asked. Or, in Microsoft’s words, “one or more conversational data stores and/or APIs may be used to respond to user dialogue and/or questions for which social data does not provide data.” Filler comments could be guessed from crowd-sourced data supplied by people with aligned interests and opinions, or from demographic information such as gender, education, marital status, and income level. It could imagine your take on an issue based on “crowd perceptions” of events. “Psychographic data” is on the list, too.
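
In code, that gap-filling behavior might look something like the sketch below. The function, its data stores, and the lookup order are our hypothetical reading of the quoted language, not an implementation of the patent’s actual claims:

```python
def respond(question: str,
            social_data: dict[str, str],
            conversational_store: dict[str, str],
            crowd_fallback: str = "I'm not sure.") -> str:
    """Answer from the person's own words when possible; otherwise fall back."""
    # 1. Prefer an answer grounded in the person's actual social data.
    if question in social_data:
        return social_data[question]
    # 2. Fall back to a generic conversational store (the patent's "one or
    #    more conversational data stores and/or APIs").
    if question in conversational_store:
        return conversational_store[question]
    # 3. Last resort: infer from crowd/demographic signals, i.e., produce
    #    an opinion the real person never actually expressed.
    return crowd_fallback


bot_social = {"favorite food?": "omg PIZZA"}
print(respond("favorite food?", bot_social, {}))     # grounded in real posts
print(respond("views on tariffs?", bot_social, {}))  # invented filler
```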

In sum, we’re looking at a machine-learning Frankenstein’s monster, one that raises the dead through unchecked, highly personal data collection.

“It’s scary,” Jennifer Rothman, a law professor at the University of Pennsylvania and author of The Right of Publicity: Privacy Reimagined for a Public World, told Gizmodo via email. If it’s any reassurance, such a project sounds like legal agony. She predicted that the technology could attract disputes over the right to privacy, the right of publicity, defamation, the false light tort, trademark infringement, copyright infringement, and false endorsement, “to name just a few,” she said. (Arnold Schwarzenegger has already charted this territory with this head.)

She continued:

It could also violate biometric privacy laws in states, like Illinois, that have them. Assuming that the collection and use of the data were permitted and that people affirmatively opted in to the creation of a chatbot in their own image, the technology would still raise concerns if the chatbots were not clearly marked as impersonators. One can also imagine a host of abuses of the technology similar to what we see with deepfakes; probably not what Microsoft would plan, but it can nonetheless be anticipated. Convincing but unauthorized chatbots could create national security concerns if, for example, a chatbot appeared to speak for the president. And one can imagine unauthorized celebrity chatbots proliferating in ways that are sexually or commercially exploitative.

Rothman noted that while we already have realistic puppets (deepfakes, for example), this patent is the first she has seen that combines such technology with data harvested from social media. Microsoft could allay some of the concerns in several ways, she said, by varying the degree of realism and adding clear disclaimers. Making the bots look like Clippy the paperclip might help.

It’s unclear what level of consent would be required to compile enough data for even the lumpiest digital waxwork, and Microsoft has not shared guidelines for potential user agreements. But the laws likely to govern such data collection (the California Consumer Privacy Act, the EU’s General Data Protection Regulation) could put chatbot creations to the test. Then again, Clearview AI, which notoriously provides facial recognition software to law enforcement and private companies, is currently litigating its right to monetize its repository of billions of avatars scraped from public social media profiles without users’ consent.

Lori Andrews, a lawyer who helped develop guidelines for the use of biotechnology, imagined an army of rogue evil twins. “If I were running for office, the chatbot could say something racist as if it were me and sink my prospects for election,” she said. “The chatbot could gain access to various financial accounts or reset my passwords (based on aggregated information such as a pet’s name or a mother’s maiden name, which is often gleaned from social media). A person could be misled, or even harmed, if their therapist took a two-week vacation and a chatbot mimicking the therapist continued to provide and bill for services without the patient’s knowledge of the switch.”

Hopefully that future never comes to pass, and Microsoft has acknowledged that the technology is scary. Asked for comment, a spokesperson pointed Gizmodo to a tweet from Tim O’Brien, general manager of AI programs at Microsoft: “I’m looking into this. The application date (April 2017) predates the AI ethics reviews we do today (I sit on the panel), and I’m not aware of any plans to build or ship it (and yes, it’s disturbing).”


