According to a study published today in PLOS ONE, it takes only a small amount of fake news to disrupt debate or discussion on an issue.
But there is a way to discourage those who spread fake news, and even to eliminate it entirely.
The research is experimental, based on modeling and simulations, but it shows at least that it is possible to counteract the spread of misinformation.
The rise of fake news
The spread of malicious and false information has affected human societies for centuries.
In this era of instant global digital connectivity, the current incarnation of "fake news" has become a scourge, exploited for personal and political gain.
Social media, designed to encourage users to contribute and share content, has become the major facilitator of the spread of fake news.
Nation states meddling in the politics of democracies, political parties attempting to manipulate public opinion, and a profit-driven "fake news" industry have all exploited this ease of diffusion to their advantage, sowing confusion and discord in the populations they target.
The simulation game
We conducted experiments to understand the fundamental mechanisms that determine how fake news behaves within populations.
We were particularly interested in two questions:
- what impact fake news may have on the formation of consensus in a population
- what impact the cost of distributing fake news has on its ability to infest a population.
In the real world, costs can be external, such as fines, penalties, exclusion, or the expense of creating and distributing fakes; or they can be internal, such as feelings of loss or embarrassment arising from ridicule or shame.
The tool we used was an evolutionary simulation in which simple software robots in a population interacted by playing the well-known Prisoner's Dilemma game. In essence, a prisoner who betrays the other wins big while the betrayed loses badly, both gain modestly if they cooperate, and both suffer equally if they betray each other.
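To make that payoff structure concrete, here is a minimal sketch using conventional textbook payoff values (the study's actual parameters may differ):

```python
# Conventional Prisoner's Dilemma payoffs, as (my payoff, opponent's payoff).
# "C" = cooperate, "D" = defect (betray). These are standard textbook
# values, not necessarily the ones used in the study.
PAYOFFS = {
    ("C", "C"): (3, 3),  # both cooperate: modest mutual gain
    ("C", "D"): (0, 5),  # I am betrayed: I lose badly, the betrayer wins big
    ("D", "C"): (5, 0),  # I betray: I win big, the other loses badly
    ("D", "D"): (1, 1),  # both betray: both suffer
}

def play_round(my_move: str, their_move: str) -> int:
    """Return my payoff for one round of the Prisoner's Dilemma."""
    return PAYOFFS[(my_move, their_move)][0]
```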
Unlike previous work in this area, we made some of these software robots slightly devious by adding code that allowed them to deceive other players. The victim of such deception is left confused about the opposing player's intent, or convinced that the opponent is a "good guy" cooperating selflessly.
Our code built on our earlier work on information-theoretic modeling of deception, which makes it possible to map known deceptions into game-theory models. Each deception in the simulation carried a cost, which was then deducted from the payoff the deceiver earned in the Prisoner's Dilemma game.
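As a rough sketch of how such a deduction might work – reusing the PAYOFFS table from the sketch above, with a hypothetical deception_cost parameter and a much-simplified deception in which the victim is shown a fake "cooperate" signal – consider:

```python
def tit_for_tat(perceived_opponent_move: str) -> str:
    """Cooperate if the opponent appeared to cooperate last round."""
    return perceived_opponent_move

def deceptive_round(deception_cost: float) -> float:
    """One round between a deceiver and a naive victim.

    The deception here is deliberately simplified: the deceiver defects
    but shows the victim a fake "cooperate" signal, so the victim keeps
    cooperating. The hypothetical deception_cost parameter is deducted
    from the deceiver's payoff, mirroring the study's setup in spirit.
    """
    perceived = "C"                          # the fake signal the victim sees
    victim_move = tit_for_tat(perceived)     # the fooled victim cooperates
    payoff = PAYOFFS[("D", victim_move)][0]  # the deceiver actually defects
    return payoff - deception_cost           # deception is not free
```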
How much fake news does it take to disrupt consensus?
We found that even a very small percentage of deceptive players in the population – fewer than 1% in our simulations – could catastrophically disrupt cooperative behaviors within the simulated population.
In the extreme case of cost-free deception – where producers of fake news face no penalty at all – cooperative behavior disappeared completely. Cooperation survived only when the cost of deception was greater than zero, and where costs were very high, cooperation actually flourished.
We also found that, across all simulations, the survival of deceptive players depended very heavily on the cost of deception. If the cost was high enough, deceivers could not survive in the population.
Applied to the dissemination of fake news, this means that attaching very high costs to it would drive it to extinction.
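To see the qualitative effect, one can sweep the cost parameter in a toy model and watch whether the deceivers' share of the population grows or collapses. The following sketch uses crude mean-field replicator dynamics with illustrative numbers, not the study's actual model:

```python
def deceiver_share(cost: float, initial_share: float = 0.01,
                   generations: int = 200) -> float:
    """Toy mean-field model: each generation, the deceivers' share of the
    population grows or shrinks with their fitness relative to the mean.
    All payoff numbers are illustrative, not the study's parameters."""
    share = initial_share
    for _ in range(generations):
        f_coop = 3.0        # cooperators earn the mutual-cooperation payoff
        f_dec = 5.0 - cost  # deceivers earn the temptation payoff, minus the cost
        mean_fitness = share * f_dec + (1.0 - share) * f_coop
        share = min(max(share * f_dec / mean_fitness, 0.0), 1.0)
    return share

# With zero cost, deceivers take over and cooperation collapses; once the
# cost makes deception less profitable than cooperation, deceivers die out.
for cost in (0.0, 1.0, 2.5, 4.0):
    print(f"cost={cost}: final deceiver share = {deceiver_share(cost):.4f}")
```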
From simulation to the real world
What do these experimental results tell us about the real world, where fake news is distributed through social and mass media?
The first, and perhaps most important, result is that very little fake news is needed to wreak havoc in a population and prevent the formation of the consensus that is essential to public debate. Whether the victims are merely confused or actually believe the falsehood is irrelevant; it is their ability to reach consensus that is disrupted.
Our modeling focused on small groups of influencers who actively discuss issues. When influencers cannot agree, their followers cannot in turn align around a consensus. This is one reason fake news is so destructive to democratic societies.
The second result, of more general interest, is that attaching a high cost to producing – and especially to distributing – fake news may be the most effective tool available to us to stop its spread. Because the effects of misinformation are so disruptive, a high societal investment in raising those costs is worthwhile.
Break the chain
More than a decade of information-warfare research has shown that proxy delivery is a major factor in the spread of toxic propaganda.
For example, mass media that broadcast violent imagery produced by terrorists acted as proxies for the terrorists who produced that propaganda, whether they knew it or not.
Social media users who share fake news likewise act as proxies for fake news producers. Such users are generally seen as victims of fake news – and they usually are – but every time they share it, they become participants in the producer's deception.
Attaching a cost to the distribution of fake news on social media is not easy. Informally shaming habitual posters of fake news is one option, and it matches the evolutionary psychology of cheater detection.
Social media organizations such as Facebook say they are trying to be more proactive in detecting fake news, whether through machine-learning technology or third-party auditors, and that they have recently had some success.
But both of these ideas run up against the thornier problem of determining exactly what is and is not fake news. Unwelcome facts are too often labeled "fake news".
The reliability and objectivity of fact-checkers can vary considerably; ground truths are often obscured by bias and by limits to understanding.
Currently, contrary to the claims of some social media providers, AI is not up to the task of finding and eliminating fake news, so the burden falls back on us.
We can all help simply by thinking a little before liking, sharing or retweeting information on social media. Perhaps do some research to check whether the information is known to be true or false.
Pest management is an established practice in biological ecosystems, and the information ecosystem is clearly lagging behind.
Carlo Kopp is a part-time academic at Monash University. He is a member of the Lean Systems Society, an associate member of the AIAA, a senior member of the IEEE and co-founder of the Air Power Australia Strategy Focus Group.
Kevin Korb has received funding from the ARC, NHMRC and IARPA. He is a member of the IEEE, Civil Liberties Australia and the Australian Greens. His university home is Monash University.
This article was originally published on The Conversation. Read the original article.