Study reveals Twitter bots played a "disproportionate" role in spreading misinformation during the 2016 election




The spread of an article claiming that 3 million illegal immigrants voted in the 2016 U.S. presidential election. The links show the article spreading through retweets and quoted tweets (blue) and through replies and mentions (red). Credit: Filippo Menczer, Indiana University

An analysis of information shared on Twitter during the 2016 U.S. presidential election has found that automated accounts – or "bots" – played a disproportionate role in spreading misinformation online.

The study, conducted by researchers at Indiana University and published November 20 in the journal Nature Communications, analyzed 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017, a period spanning the end of the 2016 presidential primaries through the presidential inauguration on January 20, 2017.

Among the findings: just 6% of the Twitter accounts that the study identified as bots were enough to spread 31% of the "low-credibility" information on the network. These accounts were also responsible for 34% of all articles shared from low-credibility sources.

The study also found that bots played a major role in promoting low-credibility content in the first few moments before a story went viral.

The brevity of this window – 2 to 10 seconds – highlights the challenge of countering the spread of misinformation online. Similar issues arise in other complex environments, such as the stock market, where serious problems can emerge in mere moments due to the impact of high-frequency trading.

"This study shows that robots contribute significantly to the spread of online misinformation – and also shows how fast these messages can spread," said Filippo Menczer, a professor at the computer institute. , computer and engineering of the IU, who led the study.

The analysis also revealed that bots amplify a message's volume and visibility until it is more likely to be shared widely – even though they represent only a small fraction of the accounts that spread viral messages.

"People tend to trust more messages that seem to emanate from many people," said co-author Giovanni Luca Ciampaglia, an associate researcher at the IU Science Science Institute at the time of the study. "Robots take advantage of this trust by making the messages seem so popular that real people are likely to spread them for them."

The sources deemed low-credibility in the study were identified based on their appearance on lists compiled by independent third-party organizations of outlets that regularly share false or misleading information. These sources – such as websites with misleading names like "USAToday.com.co" – included outlets with both right- and left-leaning points of view.
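
As a rough illustration (not the authors' actual pipeline), this kind of check amounts to extracting each shared link's domain and matching it against a curated watch list; the domains below are hypothetical placeholders standing in for those third-party lists.

```python
from urllib.parse import urlparse

# Hypothetical watch list standing in for the third-party lists of
# low-credibility outlets the study relied on.
LOW_CREDIBILITY_DOMAINS = {"usatoday.com.co", "example-fakenews.net"}

def is_low_credibility(url: str) -> bool:
    """Return True if the shared article's domain appears on the watch list."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]  # normalize "www.usatoday.com.co" to "usatoday.com.co"
    return domain in LOW_CREDIBILITY_DOMAINS

print(is_low_credibility("http://USAToday.com.co/3-million-votes"))            # True
print(is_low_credibility("https://www.nature.com/articles/s41467-018-06930-7"))  # False
```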

The researchers also identified other tactics bots use to spread misinformation on Twitter. These included amplifying a single tweet – potentially controlled by a human operator – through hundreds of automated retweets; repeating links in recurring posts; and targeting highly influential accounts.

For example, the study cites a case in which a single account mentioned @realDonaldTrump in 19 separate messages about millions of illegal immigrants voting in the presidential election – a false claim that was also a major topic of discussion for the administration.
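
As a rough sketch of how the first of these tactics – mass automated retweets of a single message – might be surfaced in data, one could count retweets coming from high-bot-score accounts per original tweet. The field names and thresholds below are assumptions for illustration, not the study's code.

```python
from collections import defaultdict

def flag_bot_amplified_tweets(retweets, bot_score_cutoff=0.7, min_bot_retweets=100):
    """Flag original tweets retweeted by an unusually large number of
    likely-automated accounts. Each item in `retweets` is a dict with
    hypothetical fields 'original_tweet_id' and 'retweeter_bot_score'."""
    bot_retweet_counts = defaultdict(int)
    for rt in retweets:
        if rt["retweeter_bot_score"] >= bot_score_cutoff:
            bot_retweet_counts[rt["original_tweet_id"]] += 1
    return [tid for tid, count in bot_retweet_counts.items() if count >= min_bot_retweets]

# Toy example: tweet "42" is pushed by 150 likely bots, tweet "7" by one human.
retweets = [{"original_tweet_id": "42", "retweeter_bot_score": 0.9}] * 150
retweets.append({"original_tweet_id": "7", "retweeter_bot_score": 0.1})
print(flag_bot_amplified_tweets(retweets))  # ['42']
```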

The researchers also conducted an experiment in a simulated version of Twitter and found that removing 10% of the accounts in the system – chosen based on their likelihood of being bots – produced a significant drop in the number of articles from low-credibility sources in the network.
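
A minimal sketch of that kind of experiment, under assumed data structures (a list of accounts carrying a bot-likelihood score and a count of low-credibility shares), not the paper's actual simulation:

```python
import random

def remaining_low_cred_shares(accounts, removal_fraction=0.10):
    """Drop the top `removal_fraction` of accounts ranked by bot likelihood
    and count the low-credibility article shares left in the network."""
    ranked = sorted(accounts, key=lambda a: a["bot_score"], reverse=True)
    n_removed = int(len(ranked) * removal_fraction)
    return sum(a["low_cred_shares"] for a in ranked[n_removed:])

# Toy network: a small group of likely bots shares far more low-credibility
# articles than the many ordinary accounts, loosely mirroring the study's setting.
random.seed(42)
accounts = [
    {"bot_score": random.uniform(0.8, 1.0), "low_cred_shares": random.randint(20, 50)}
    for _ in range(10)
] + [
    {"bot_score": random.uniform(0.0, 0.4), "low_cred_shares": random.randint(0, 3)}
    for _ in range(90)
]
before = sum(a["low_cred_shares"] for a in accounts)
after = remaining_low_cred_shares(accounts, removal_fraction=0.10)
print(f"low-credibility shares: {before} before removal, {after} after")
```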

"This experience suggests that the elimination of social network robots would significantly reduce the amount of false information on these networks," Menczer said.

The study also suggests steps companies could take to slow the spread of misinformation on their networks. These include improving algorithms that automatically detect bots and requiring a "human in the loop" to reduce automated messages in the system. For example, users might be required to complete a CAPTCHA before sending a message.
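
A minimal sketch of how such a "human in the loop" gate might work, assuming a hypothetical bot-likelihood score is already available for each account; this is an illustration, not any platform's actual API or policy.

```python
BOT_SCORE_THRESHOLD = 0.7  # assumed cutoff; a real platform would tune this

def can_post(account_bot_score: float, passed_captcha: bool) -> bool:
    """Allow a post directly for low-risk accounts; require a solved
    CAPTCHA before accepting posts from likely-automated accounts."""
    if account_bot_score < BOT_SCORE_THRESHOLD:
        return True
    return passed_captcha

print(can_post(0.2, passed_captcha=False))  # True: looks human, post goes through
print(can_post(0.9, passed_captcha=False))  # False: likely bot, CAPTCHA required first
print(can_post(0.9, passed_captcha=True))   # True: challenge solved by a human
```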

Although their analysis focused on Twitter, the study's authors added that other social networks are also vulnerable to manipulation. For example, platforms such as Snapchat and WhatsApp may struggle to control misinformation on their networks because their use of encryption and self-destructing messages complicates efforts to study how their users share information.

"While people around the world are turning more and more to social media, which is their primary source of information and information, the fight against misinformation requires an in-depth assessment of how it works. the relative impact of its different ways of broadcasting, "said Menczer. "This work confirms that robots play a role in the problem and suggests that their reduction could improve the situation."

To explore the election-related messages currently being shared on Twitter, Menczer's research group recently launched a tool to measure "Bot Electioneering Volume." Created by IU Ph.D. students, the program displays the level of bot activity around election-specific conversations, as well as the topics, user names and hashtags those bots are currently pushing.
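
The tool's exact methodology is not described here; as a rough sketch, one way to express a "bot electioneering volume"-style metric is to weight each election-related tweet by its author's bot-likelihood score and tally the hashtags that likely bots are pushing. All field names and the cutoff below are assumptions for illustration.

```python
from collections import Counter

def bot_electioneering_volume(tweets):
    """Sum bot-likelihood-weighted election tweets and tally the hashtags
    pushed by likely bots. `tweets` is a list of dicts with hypothetical
    fields: 'bot_score', 'is_election_related', 'hashtags'."""
    volume = 0.0
    hashtags = Counter()
    for t in tweets:
        if not t["is_election_related"]:
            continue
        volume += t["bot_score"]
        if t["bot_score"] >= 0.7:  # assumed "likely bot" cutoff
            hashtags.update(t["hashtags"])
    return volume, hashtags.most_common(5)

tweets = [
    {"bot_score": 0.9, "is_election_related": True, "hashtags": ["#vote2018"]},
    {"bot_score": 0.1, "is_election_related": True, "hashtags": ["#debate"]},
    {"bot_score": 0.8, "is_election_related": False, "hashtags": ["#sports"]},
]
print(bot_electioneering_volume(tweets))  # (1.0, [('#vote2018', 1)])
```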


Explore further:
What's trending in fake news? Tools show which stories go viral, and whether "bots" are to blame

More information:
Chengcheng Shao et al, The spread of low-credibility content by social bots, Nature Communications (2018). DOI: 10.1038/s41467-018-06930-7

Journal reference:
Nature Communications

Provided by:
Indiana University
