Predictim babysitter-screening app: Facebook and Twitter take action




Image copyright: Getty Images

Image caption: The app analyzes social media accounts for posts that might worry parents

An app that claims to vet babysitters is being investigated by Facebook and has been blocked by Twitter.

Predictim, based in California, offers a service that assesses a prospective babysitter's social media activity and produces a score out of five indicating how safe a presence they are likely to be.

It looks for posts about drugs, violence and other unwanted content. Critics say algorithms should not be trusted to give advice on a person's employability.

Earlier this month, after discovering the activity, Facebook revoked most of Predictim's access to its users, saying the company had violated its rules on the use of personal data.

Facebook is now considering blocking the company from its platform entirely, after Predictim said it was still scraping public data from Facebook to power its algorithms.

"Everyone looks at people on social media, they do it on Google," said Sal Parsa, general manager and co-founder of Predictim.

"We are automating this process."

Facebook did not see it that way.

"Clearing people's information on Facebook goes against our terms of use," a spokeswoman said.

"We are going to investigate Predictim for violations of our conditions, including to see if they engage in scraping."

At the same time, Twitter told the BBC that it had "recently" decided to block Predictim's access to its users.

"We strictly prohibit the use of Twitter data and APIs for monitoring purposes, including during background checks," an e-mail spokesperson said. "When we learned about Predictim's services, we conducted a survey and revoked their access to Twitter's public APIs."

An API – application programming interface – allows different pieces of software to interact. In this case, Predictim would use the Twitter API to quickly analyze a user's tweets.
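Purely as an illustration of that kind of API call – and not Predictim's actual code – a minimal Python sketch of pulling a user's recent public tweets through Twitter's v1.1 statuses/user_timeline endpoint might look like the following. The bearer token is a placeholder, and this sort of access requires approved developer credentials from Twitter.

```python
# Illustrative only: fetching a user's recent public tweets via Twitter's
# REST API (v1.1 statuses/user_timeline endpoint). Not Predictim's code.
# Assumes you hold a valid bearer token issued by Twitter.
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential

def fetch_recent_tweets(screen_name: str, count: int = 200) -> list[str]:
    """Return the text of a user's most recent public tweets."""
    resp = requests.get(
        "https://api.twitter.com/1.1/statuses/user_timeline.json",
        params={"screen_name": screen_name, "count": count, "tweet_mode": "extended"},
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return [tweet["full_text"] for tweet in resp.json()]

if __name__ == "__main__":
    for text in fetch_recent_tweets("DaveLeeBBC", count=5):
        print(text)
```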

Legal issue

Predictim, which was funded by a program set up by the University of California, attracted considerable attention over the weekend thanks to a front-page article in The Washington Post, in which experts warned that algorithms could misinterpret posts.

Jamie Williams, of the Electronic Frontier Foundation, told the newspaper: "Kids have inside jokes. They're notoriously sarcastic. Something that could look like a 'bad attitude' to the algorithm could seem to someone else like a valid political statement or criticism."

Predictim said its system included a human review element, meaning posts flagged as troublesome were examined manually to avoid false positives. As well as references to criminal behavior, Predictim claims to be able to identify instances "when an individual demonstrates a lack of respect, esteem or courteous behavior".
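Predictim has not published how its classifier works, so purely as a hypothetical illustration of the "flag, then review manually" workflow described above, here is a toy Python sketch; the watch-list terms, data structure and labels are all invented for the example.

```python
# Hypothetical illustration of a "flag, then human review" workflow --
# not Predictim's actual classifier. Watch-list terms are invented.
from dataclasses import dataclass

FLAG_TERMS = {"fight", "drunk", "weed"}  # toy watch-list for the example

@dataclass
class FlaggedPost:
    text: str
    matched_terms: set[str]
    reviewed: bool = False          # set True once a human has looked at it
    confirmed_problem: bool = False

def screen_posts(posts: list[str]) -> list[FlaggedPost]:
    """Queue any post containing a watch-list term for manual review."""
    queue = []
    for post in posts:
        hits = {term for term in FLAG_TERMS if term in post.lower()}
        if hits:
            queue.append(FlaggedPost(text=post, matched_terms=hits))
    return queue

# A human reviewer would then mark each FlaggedPost, so sarcasm or
# in-jokes caught by the keyword match can be dismissed rather than
# counted against the applicant.
```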

The company showed the BBC a demo dashboard on which users could see the specific social media posts flagged as inappropriate and judge them for themselves.

The service charges $25 to analyze an applicant's social media profiles, with discounts for multiple analyses. The company said it was in talks with major "sharing economy" firms about vetting drivers or hosts.

"This is not the magic of the black box," Parsa said. "If the AI ​​reports that an individual is abusive, there is evidence of why that person is abusive."

The firm emphasizes that it is not designed to be a tool for making hiring decisions, and that the score is only a guide. However, on the site's dashboard the company uses phrases such as "this person is very likely to display undesired behavior (high likelihood of being a bad hire)". Elsewhere on the demo dashboard, the person in question is flagged as "very high risk".

Mr. Parsa pointed to a liability disclaimer at the bottom of the page: "We cannot provide any guarantee as to the accuracy of the analysis contained in the report, or the suitability of the subject of this report for your needs."

The legality of companies scraping data from public social networks without the consent of the sites in question is currently being tested in the courts.

The professional networking site LinkedIn is currently locked in a US appeals court battle with HiQ, a service that used publicly available LinkedIn data to build its own database. A lower Californian court had earlier ruled in favor of HiQ's use of the data.

________

Follow Dave Lee on Twitter @DaveLeeBBC

Do you have more information about this or any other technology story? You can reach Dave directly and securely through the encrypted messaging app Signal on: +1 (628) 400-7370
