Machine learning systems have been WhatsApp's main weapon for identifying users who spread fake news and create multiple profiles to commit abuses on the platform. According to information revealed Wednesday by the messenger, about two million accounts are banned each month, some even before being activated, thanks precisely to the artificial intelligence the company uses to fight abuse.
At a presentation in New Delhi, India, Matt Jones, one of WhatsApp's software engineers, joined the company's executives to explain its recent efforts to combat false information and hate speech. According to him, several factors led to the creation of the artificial intelligence system, and the messenger's poor track record on the issue in recent years paved the way for its current approach.
WhatsApp aims to act before suspicious users begin their harmful activities. To do so, the company has trained its artificial intelligence on scenarios common among operators of fake accounts, in order to identify standardized behaviors. The country of origin of a phone number and the IP address used (especially when they point to different countries) also raise red flags on the platform, as do accounts that start sharing texts in bulk shortly after they are created.
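The article does not describe WhatsApp's actual model, but the red flags it mentions can be illustrated with a minimal rule-based scoring sketch. All names, weights and thresholds below are hypothetical, purely for illustration; a real system would use many more features and a trained classifier rather than hand-set rules:

```python
from dataclasses import dataclass

@dataclass
class Account:
    phone_country: str         # country inferred from the phone number prefix
    ip_country: str            # country inferred from the registration IP
    seconds_since_signup: int  # account age at evaluation time
    messages_sent: int         # messages sent so far

def suspicion_score(acct: Account) -> float:
    """Combine the red flags described in the article into one score.

    Hypothetical heuristic: equal weights, hard-coded thresholds.
    """
    score = 0.0
    # Phone number and IP pointing to different countries is a red flag.
    if acct.phone_country != acct.ip_country:
        score += 0.5
    # Sharing texts in bulk shortly after creation is another.
    if acct.seconds_since_signup < 3600 and acct.messages_sent > 100:
        score += 0.5
    return score

def should_ban(acct: Account, threshold: float = 0.75) -> bool:
    # Above the threshold the account is banned automatically,
    # mirroring the "no human intervention" cases the article describes.
    return suspicion_score(acct) >= threshold

# Example: brand-new account, mismatched countries, immediate mass messaging.
bot = Account(phone_country="US", ip_country="RU",
              seconds_since_signup=120, messages_sent=500)
print(should_ban(bot))  # → True
```

The design choice worth noting is that each signal alone is weak (travelers legitimately trigger country mismatches), so a real pipeline combines many signals before acting, which is consistent with the article's claim that only clearly irregular profiles are banned without human review.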
Today, 75% of irregular profiles are banned without human intervention, often before they even start acting on the platform. The rest come to WhatsApp's attention either through the irregular post-creation behavior described above or through reports filed by users themselves, which the messenger processes with both automated systems and a team dedicated to moderation.
False information, abuse and hate speech are not the application's only focus: it also wants to stop the spread of scams and phishing messages. Today, 1.5 billion people use WhatsApp worldwide, a rare case of an app that has reached even the least experienced users. This has made the messenger fertile ground for cybercrime, something the company is equally keen to prevent.
The artificial intelligence system works alongside other security features WhatsApp has applied in recent months, such as the limit on forwarding messages to no more than five contacts and the fix for a read-receipt loophole that let users view status updates and photos without being noticed. All of this, of course, also comes in response to government pressure and worldwide investigations into misinformation, the distribution of violent content, abuse and even pedophilia.
In India, the setting of this week's presentation, WhatsApp was cited as a vector when villagers in small towns across the country were accused of witchcraft and child abduction in viral messages, leaving at least six people dead. In other countries, including Brazil and the United States, the dissemination of false information was cited as an important factor of manipulation, including by foreign agents, in recent presidential elections.
In some countries, the company has also launched awareness campaigns in partnership with telephone operators and the operators of advertising displays. The idea is to make users consider the validity of information shared on WhatsApp before forwarding it. India is one of the main targets of this effort, not only because of the escalation of violence but also because of the elections scheduled for April, which will test the effectiveness of the security features presented this week.
WhatsApp is confident, mainly because of its usage numbers. According to the company's data, 90% of conversations still take place between two users, while most groups have fewer than 10 members. Large message-sharing forums remain in the minority, which is good news for the messenger and makes its moderation work easier.
Source: Venture Beat