Microsoft attacks social media as Facebook and YouTube are sued and threatened with jail




In the wake of Christchurch, calls for stricter regulation of social media took a serious turn on Wednesday, with the threat of jail time for executives who fail to control their platforms, and even Microsoft President Brad Smith speaking out. "Is there a basic level of decency or civilized standards to which we will ask that these networks or platforms be held?" he asked, after discussing events with Jacinda Ardern earlier in the week.

Well, is there?

Facebook and its peers have admitted that they can neither monitor nor control what is "published" on their platforms. And Monday's news that Facebook still allows neo-Nazi hate to be published, even after Christchurch, made matters worse. This week, a Facebook spokesperson told me that "we want Facebook to be a safe place and we will continue to invest to keep harm, terrorism and hate speech off the platform." Time will tell.

That said, it now seems legislators may force the issue before the companies can make good on such promises. Australia has become the first country post-Christchurch to threaten jail for social media executives who cannot control their platforms. "If social media companies fail to demonstrate a willingness to immediately implement changes to prevent the use of their platforms, like what was filmed and shared by the perpetrators of the terrible crimes in Christchurch, we will act," Prime Minister Scott Morrison said on Tuesday.

The challenge for social media is that the platforms cannot control the sheer breadth of content they carry. A repeat of Christchurch would expose the same inability to control events. Nothing has changed. And if you do not believe that, just look at the headlines from the last twenty-four hours about social media's failure to eradicate extremist hate even after Christchurch.

And has the tipping point been reached?

On Sunday, I suggested that Facebook's admission that it could not control Facebook Live could mean the end of live streaming on the platform. A few days ago, that might have seemed extreme. Not anymore.

On Monday, the French Council of the Muslim Faith (CFCM) announced that it would take legal action against Facebook and YouTube for inciting violence by broadcasting live footage of the Christchurch attack. The Federation of Islamic Associations of New Zealand (FIANZ) welcomed the action. "They have failed big time, this was a person who was looking for an audience," a spokesman said of Facebook. "You were the platform he chose to advertise his heinous crime."

And then it emerged that Australia was considering criminal penalties that could mean jail time for social media executives who fail to control what is broadcast on their platforms. Prime Minister Morrison met with leading social media companies, including Facebook, Twitter and Google, on Tuesday to ask how they would prevent their platforms and services from being "weaponized" by terrorists.

If companies "can send you an advertisement in half a second," Morrison told reporters before their meeting, "they should be able to remove this type of terrorist material and other types of very dangerous material in the same time frame and apply their great abilities to the real challenges of Australian security. "

Cue Microsoft, and the company's stark warning to social media at an event in Australia. "The days of thinking of these platforms as akin to the postal service, with no responsibility, even legally, for what is in a letter – I think those days are gone," Brad Smith said. "In the world of social media, you would never see [some of the content shared] pass on a radio station or a television network, because it is almost exclusively devoted to hate."

The day everything changed

For all the criticism of the platforms' failure to remove extremist content, that is an easier problem to solve than live streaming. The immediacy and scale of these services currently make policing them impossible – that is the challenge Facebook has acknowledged. Allowing a user to broadcast to the world, in the hope of detecting anything harmful or dangerous in real time, has proven unworkable.

Facebook bore the brunt of the backlash after the events in Christchurch. The company reported that the attack was viewed fewer than 200 times live, but by another 4,000 people before the footage was removed from the site. The company also said it removed 1.5 million uploads of the video.

Meanwhile, a YouTube spokesperson told the Guardian that the "volume of related videos uploaded to YouTube in the 24 hours after the attack was unprecedented both in scale and speed – at times as fast as a new upload every second. In response, we took a number of steps, including automatically rejecting any footage of the violence, temporarily suspending the ability to sort or filter searches by upload date, and making sure that searches on this event pulled up results from authoritative news sources."
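
The "automatic rejection" YouTube describes is generally done by fingerprinting known footage and checking every new upload against a blocklist. Here is a minimal sketch of that idea in Python – purely illustrative, since YouTube has not published its implementation, and production systems use perceptual hashes that survive re-encoding rather than the exact-match hash shown here:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of an upload's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist, seeded with fingerprints of clips already
# flagged by moderators.
known_violating_clip = b"<bytes of a clip flagged by moderators>"
blocked = {fingerprint(known_violating_clip)}

def should_reject(upload: bytes) -> bool:
    # Byte-identical re-uploads are caught instantly; any re-encode,
    # crop or watermark changes the hash, which is why real systems
    # rely on perceptual hashing instead of exact matching.
    return fingerprint(upload) in blocked

print(should_reject(known_violating_clip))        # True: exact re-upload
print(should_reject(b"slightly edited version"))  # False: hash has changed
```

That fragility – a one-byte edit defeats an exact hash – is one reason re-uploads kept arriving faster than they could be blocked.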

There is too much content in general, and not enough hateful content in particular, to properly train their AI. And relying on users to report violations in real time has proven totally inadequate. Every safeguard failed with Christchurch.

"Many people have asked why artificial intelligence does not automatically detect video of last week's attack," a & nbsp; Facebook blog post sought to explain. "Artificial intelligence systems are based on" learning data, "which means that you need several thousand examples of content in order to form a system capable of detecting certain types of text, images or videos. "

In short, if there have not been enough attacks, the AI cannot detect an attack. That puts the onus on moderators or user reports, but "during the entire live broadcast, we did not receive a single user report," Facebook admitted. Which, unfortunately, all boils down to: we cannot detect the videos and we do not receive the reports.
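
To make the training-data point concrete, here is a minimal, purely illustrative sketch (in Python, using scikit-learn – not Facebook's actual pipeline): a classifier trained on thousands of benign posts and a single example of violating content cannot recognize new violating content that is phrased differently.

```python
# Illustrative only: a supervised classifier can only flag content
# it has seen enough examples of.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical corpus: thousands of benign posts, one violating example.
benign = [f"ordinary status update number {i}" for i in range(5000)]
violating = ["known description of violating footage"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(benign + violating, [0] * len(benign) + [1] * len(violating))

# New violating content worded differently shares no vocabulary with the
# single positive training example, so its feature vector is empty and
# the heavily imbalanced model defaults to "benign".
print(model.predict(["livestream shows attacker"]))  # -> [0]: undetected
```

With one positive example against five thousand negatives, the model has essentially nothing to learn from – which is exactly the gap Facebook describes, and why the system then falls back on human moderators and user reports.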

The bubble bursts

Over the last few days, calls for social media regulation have moved from the sidelines to the headlines. It is probably now inevitable that significant change will come, and that the self-regulated social media bubble, overseen by executives focused solely on user growth and share prices, will burst.

Live streaming looks to be the testing ground for what comes next. The immediacy and breadth of the content have made it uncontrollable. And on the simple assumption that it harms the public interest to hand an uncontrollable broadcast platform to extremists, murderers, the vulnerable and the suicidal, there is no public interest case for leaving things as they are.

All roads clearly lead to regulation. And events move quickly.

">

As a result of Christchurch, claims for tighter regulation of social media took a serious turn Wednesday, with the threat of jail for leaders who do not effectively control their platforms and even the president of Microsoft, Brad Smith, speaking. "Is there a basic level of decency or civilization norms to which we will ask that these networks or platforms be linked?" he asked after discussing events with Jacinda Ardern earlier in the week.

Well, are there?

Facebook and its peers have admitted that they can neither control nor control what is "published" on their platforms. And Monday's news that Facebook still allows the publication of neo-Nazi hatred, even after Christchurch has worsened the situation. This week, a Facebook spokesperson told me that "We want Facebook to be a safe place and we will continue to invest to prevent damage, terrorism and hate speech from the platform." Time to discover.

That being said, it now seems that legislators could ensure that this happens before companies are able to justify their actions. Australia has become the first post-Christchurch country to threaten to jail social media executives who can not control their platforms. "If social media companies fail to demonstrate their willingness to immediately implement changes to prevent the use of their platforms," ​​said Tuesday Premier Scott Morrison, "like what was filmed and shared by the perpetrators of the terrible crimes of Christchurch, we will act. "

The challenge of social media is that they can not control the breadth of content on their platforms. A repetition of Christchurch would produce the same inability to control events. Nothing has changed. And if you do not believe it, just look at the headlines of the last twenty-four hours about the social media's refusal to eliminate extremist hatred even after Christchurch.

And has the tipping point been reached?

On Sunday, I suggested that Facebook's admission that the company could not control Facebook Live could mean the end of live streaming on the platform. A few days ago, it might have seemed extreme. But not anymore.

On MondayThe French Council for Muslim Worship (CFCM) has announced that it will take legal action against Facebook and YouTube for incitement to violence by live images of Christchurch. The Federation of Islamic Associations of New Zealand (FIANZ) is pleased with this action. "They failed a lot, it's a person who was looking for an audience," said a spokesman, referring to Facebook. "You are the platform that he has chosen to advertise his heinous crime."

And then, it was learned that Austalia was considering criminal prosecution that could result in jail time for social network managers who do not control what is being broadcast on their platforms. Prime Minister Morrison met with leading social media companies, including Facebook, Twitter and Google, on Tuesday to ask how they would prevent their platforms and services from being "armed" by terrorists.

Before meeting, companies "may send you an advertisement in half a second", "should be able to remove this type of terrorist material and other types of very dangerous material within the same time and apply their great abilities to the real challenges to ensure the safety of Australians ".

Microsoft Cue and the company's stark warning to social media at an event in Australia. "The time when we thought these platforms looked like the postal service without any liability, even legally, of what is in a letter – I think these days are over," said Brad Smith. "In the world of social media, you would never see [some of the content shared] pbad like a radio station or a television network, because they are almost exclusively devoted to the propaganda of hate. "

The day everything changed

Despite the criticisms made by the platforms not to suppress the extremist content, it is a problem easier to solve than the live broadcast. The immediacy and magnitude of these services make their retention currently impossible – that is the challenge that Facebook has recognized. Allowing a user to broadcast around the world, in the hope of being able to detect anything that is harmful or dangerous in real time, has proven unworkable.

Facebook has suffered the consequences of negative social media reactions after the events in Christchurch. The company reported thatThe attack was viewed less than 200 times in real time, but by 4,000 other people before removing the images from the site. The company also said it removed 1.5 million downloads.

Meanwhile, a spokesman for YouTube said at the guardian "The volume of badociated videos uploaded to YouTube in the 24 hours that followed the attack was unprecedented in scale and speed – sometimes as fast as a new download per second. In response, we took a number of steps, including automatic rejection of violence, temporarily suspending the ability to sort or filter searches by download date, and ensuring that searches for this event generate results from authoritative news sources.

There is too much content in general, but not enough odious content, in particular, to properly train their AI. And relying on users to report violations in real time has proven to be totally inadequate. Every failure has failed with Christchurch.

"Many people have asked why artificial intelligence does not automatically detect the video of the attack last week," explained a blog on Facebook. "Artificial intelligence systems are based on" learning data ", which means that you need several thousand examples of content in order to form a system capable of detecting certain types of text, images or videos. "

Basically, if there are not enough attacks, the AI ​​can not detect an attack. And so the trust lies in moderators or user reports, but "dFor the entire live broadcast, we have not received any user reports, "acknowledged Facebook. Unfortunately, all that results in: "We can not detect videos and we do not receive reports."

The bubble bursts

Over the last few days, calls for social media regulation have moved from side-to-side headlines. It may be inevitable now that significant changes will occur and that criticisms focused solely on the self-regulated social media bubble can no longer be criticized by administrators focused solely on user growth and share prices.

The live broadcast seems to be the testing ground of what will happen next. The immediacy and breadth of the content made its content incomprehensible. And with the simple badumption that it is detrimental to the public interest to provide a platform for dissemination to extremists, murderers, the vulnerable, suicidal, where this platform can not be controlled, there is no case of public interest justifying the departure in l & # 39; state.

All roads clearly lead to regulation. And events move quickly.

[ad_2]
Source link