Twitter is expanding its use of warning labels on tweets with misleading details about coronavirus vaccines.
The move, announced in a blog post Monday, is designed to bolster the social network's existing Covid-19 guidelines, which have led to the deletion of more than 8,400 tweets and the challenging of 11.5 million accounts worldwide.
In December, the platform began adding labels that provide additional context to tweets containing disputed information about the pandemic. Now the company is focusing more narrowly on vaccine-specific posts and is launching a strike system that "determines when further enforcement action is needed."
Twitter’s decision comes amid concern over the dissemination of anti-vaccination material on social media.
The labels will initially be applied only by human reviewers, whose decisions will help train automated systems to identify violating content in the future. Users will face no further action after their first strike.
Two strikes will result in a 12-hour account lockout, with 12 extra hours added for a third offense. A seven-day account lockout will be imposed after four strikes, followed by a permanent suspension for five or more strikes.
The company is starting with content in English and says it will work to expand to other languages and cultural contexts over time.
"We believe the strike system will help educate the public about our policies and further reduce the spread of potentially harmful and misleading information on Twitter, particularly for repeated moderate- and high-severity violations of our rules," the company said.
The change was a “step in the right direction,” said Lisa Fazio, an assistant professor at Vanderbilt University who studies the psychology of fake news.
“As always, the devil is in the details,” she said. “The success of the policy will depend on the consistency of its application, its precision and the proper functioning of the appeal process.”
Facebook, for example, has adopted specific rules on topics such as political disinformation and vaccines, but has been criticized for how it enforces them, with some accusing the company of declining to act on conservative disinformation to avoid being seen as politically biased.
Under the new policy, users cannot report other users specifically for Covid disinformation, although such content is prohibited on the platform. Instead, users who believe a particular tweet breaks the company's Covid rules should report it under another violation category, such as "threat of harm," and use the text box to note that it contains prohibited disinformation.
Twitter’s new policies come after Facebook completely banned vaccine misinformation in early February, using a similar strike system that suspends users who post false claims and permanently removes those with multiple violations.
Facebook's guidelines specifically target pages and groups, are not limited to Covid-related content, and also cover other falsehoods, including the suggestion that vaccines cause autism, a baseless claim made by many members of the anti-vax community.
Twitter, Facebook, and platforms such as Instagram and TikTok began adding links and labels to information about Covid-19 at the start of the pandemic. On Facebook, Instagram, and TikTok, even posts that merely mention the term "Covid-19" receive a warning label and a link to authoritative information from the Centers for Disease Control and Prevention.
The prevalence of disinformation and the way it is handled on these platforms underscores the disproportionate influence of large private companies on public health issues, democracy and decisions regarding freedom of expression, said Gautam Hans, director of the Stanton Foundation First Amendment Clinic at Vanderbilt University.
"We should have democratic concerns about how these private companies have so much control to allow speech to happen or not without any sort of true democratic accountability, and how current First Amendment doctrine thwarts a lot of laws or regulations on that front," Hans said.