YouTube and Facebook have shown that they need regulation. There's a good old-fashioned way to do it.




Radio towers with the company logos of Facebook, YouTube, and Twitter.

Photo illustration by Slate. Photos by Getty Images Plus.

That racket you may have heard earlier this week was the sound of YouTube flailing.

On Tuesday night, the video platform announced that it would not take action against one of its users, the right-wing talk show host Steven Crowder, in response to complaints from another user, the Vox journalist Carlos Maza. Maza had tweeted a supercut showing that Crowder, who has 3.8 million subscribers, had repeatedly insulted Maza using homophobic and racist language. Maza also showed that many of Crowder's fans had subsequently harassed him on social media. After several days, YouTube finally responded, saying that it had conducted an in-depth review of the videos in question and determined that they did not violate its policies against hate speech.

On Wednesday morning, YouTube announced that, as part of a tougher policy against hate speech, it would remove thousands of videos promoting Nazism, white supremacy, and other hateful ideologies that advance the idea that one group of people is superior to another. Then, that afternoon, the platform decided to strip Crowder's YouTube channel of its ability to make money from ads. At first, in response to Maza, it said this was because of a homophobic T-shirt Crowder sold through the store on his website; then it said the demonetization was prompted by other violations of YouTube's policies across his channel. But YouTube never said how, or with what content, Crowder had broken its rules. Instead, the company seemed to just keep making decisions, letting us know whenever it changed its mind.

This incident revealed different things to different people. Conservatives from Ben Shapiro to Sen. Ted Cruz saw more evidence of a Silicon Valley giant curbing conservative speech to appease its more liberal users. Maza's supporters protested YouTube's indecision in the face of a flagrant case of hate speech and harassment. Pointing out how YouTube was once again struggling to interpret its own rules, tech critics proposed better content moderation rules that it and other platforms could adopt. And all of this took place in the shadow of what appears to be an increasingly serious effort by U.S. regulators and lawmakers to crack down on companies like Google, YouTube's parent company, that may have used anticompetitive practices to reach such scale.

Most people will probably agree that social media platforms like YouTube are a mess, and that every time they try to fix themselves, they fail in one way or another. It may be time for the government to do something. But many people assume that any action that resulted would undermine freedom of speech: that the federal government, in trying to police speech, would inevitably end up silencing someone.

But the history of how the U.S. government has regulated mass communication offers another way to understand the problem. Social networks are venues for small-scale communication, but they also serve as 10,000-foot-high podiums from which people can spread ideas and reach millions in ways never before technologically possible. They have consolidated audiences, dominate users' time, and now play a central role in the flow of political information. Once we start thinking about the technology this way, social media platforms look much more like communications infrastructure, akin to radio and television, than like a place for passing notes or the occasional missive.

For decades, radio and television followed fairly strict rules aimed at meeting the information needs of their audiences and at not actively harming political discourse. The public may not own the internet the way it owns the airwaves, but the two are not completely different: The internet is a resource created by government researchers. Treating the largest internet platforms as a kind of infrastructure is a good starting point for thinking about what light-touch regulation of their broadcast functions might look like. Social media platforms affect the public interest. So they should serve it.

It is easy to see the problems with the current non-approach. That YouTube keeps fumbling is scary because it is one of the most important sources of information in the world. It is the second-largest social media network and the second-most-popular search engine on the internet. Social media is one of the main ways Americans get their news: Two-thirds of U.S. adults use social media to get news. Worldwide, people watch more than a billion hours of YouTube a day. Like radio and television, social media plays a central role in providing people with the information they need to participate meaningfully in political life.


The fact that the content usually comes from users, without centralized editorial oversight, is one of the big differences between social media and traditional media. Another key difference between the tech platforms and broadcasters is that many creators with huge audiences on YouTube and Facebook peddle hate speech and dangerous misinformation. It would be much, much harder for Alex Jones or Laura Loomer or a Russian agent posing as an American racist to get a national radio or television show and reach the same audience. So it makes sense that these platforms appeal to the bigots, conspiracy theorists, and trolls who have used their megaphones to spread misinformation and hate. Before social media, where could they find such a large audience? In response, Congress has regularly held hearings over the past two years on the negative externalities of social media, drawing everyone from CEOs such as Mark Zuckerberg and Sundar Pichai to right-wing performance artists like Diamond and Silk.

At these hearings, legislators have threatened to regulate the industry, prompting the platforms to try to self-regulate, even if nothing much has come of it. "I don't want to vote to regulate Facebook, but by God, I will," Sen. John Kennedy, a Republican from Louisiana, said last year. House Speaker Nancy Pelosi said Monday that "the era of self-regulation is over." But no one is pointing to a specific bill, or even a policy idea. (With the exception of Sen. Elizabeth Warren, who wants to break up Facebook.) On Wednesday, YouTube noted that it implemented 30 new policy updates in 2018, many of them aimed at removing or limiting the spread of hate speech. Still, the platform remains a wasteland infested with bigots. In part, that's because every time YouTube makes a rule, its most odious users learn exactly where the line they can't cross lies, and walk right up to it.

Legislators don't seem to have a clear idea of what to do here. That's understandable. Regulating what a company can or can't host on its site is fraught for a reason: Freedom of speech is really, really important. The people who use these platforms to spread hate speech know it, and they cry "censorship" whenever a company takes steps to limit the spread of bigotry.

One problem with that logic is that private companies can do what they want. The First Amendment does not apply to what Facebook does or doesn't allow. But if we are not comfortable with Facebook and YouTube holding all of this power over how citizens debate, it is not foolish to insist that the government play a carefully defined role. That would not mean the federal government deciding what is or isn't acceptable speech on a platform. But it could mean rules barring the broadcast of hateful views or misinformation to wide audiences. Freedom of speech is not the same as the freedom to broadcast that speech.

This is a point that lawmakers at the dawn of broadcasting seemed to understand well. The Communications Act of 1934, which created the Federal Communications Commission, states that broadcast licensees must operate in the "public interest, convenience and necessity" of the communities they serve. This public interest standard has proved difficult to define over the decades; regulators, industry lobbyists, lawyers, and communications scholars have written thousands of pages debating its contours. Before Ronald Reagan became president and rolled back a large number of federal regulations on American industries, the FCC required every holder of a broadcast license to adhere to a set of exacting rules meant to ensure that it acted in the best interests of its audience.

These included rules such as the fairness doctrine, which required broadcasters to devote at least some airtime to important and politically controversial issues, and to make a good-faith effort to ensure that representatives of different points of view could communicate their positions. Broadcasters were required to spend some time covering public affairs and to limit excessive advertising. There was also a rule in the 1970s requiring broadcast license holders to hold annual interviews with community leaders, such as local clergy, business executives, union leaders, and community advocates, and to send random surveys to listeners, to ensure that the diversity of the communities they broadcast to was served by their programming. The public interest obligation was also interpreted as requiring diversity in the ownership of broadcast stations in order to maintain a diversity of viewpoints, protecting against a single company controlling all the radio and television stations in the same market.

At the beginning of the 20th century, the United States chose to prioritize private access to the public airwaves over a model of public control of the airwaves, like the BBC's. Since broadcasters were allowed to use the public airwaves, essentially free monopoly rights over a public resource they could profit from, the logic went that they should give something back to Americans. "The idea of the public interest at least implicitly suggested that there was always some kind of market failure, that the market would not fully provide for our media system, especially with respect to democratic obligations, hence this special category of the public interest," says Victor Pickard, a professor of media studies at the University of Pennsylvania and author of America's Battle for Media Democracy.
There were two main arguments for imposing public interest obligations on broadcasters. One was that the airwaves are scarce. Before the transition to digital broadcasting, only a limited number of stations could operate in an area without causing signal interference. Since the public resource was limited, if you were lucky enough to get a license to use the airwaves, you had to abide by certain rules.

These rules were established by politicians who admitted that they did not understand the technology they were regulating. (Sound familiar?) As Sen. Key Pittman of Nevada said in 1926, referring to the first calls to police an increasingly crowded radio dial: "I do not think, sir, that in the 14 years I have been here, the Senate has been confronted with a question about which, in the very nature of the thing, senators can know so little as this subject." But that did not stop legislators from creating a federal body, the Federal Radio Commission, to figure it out. Today, arguments abound that legislators are too out of touch to regulate these companies, but technological expertise matters far less than understanding the importance of protecting the information systems that democracy needs to function well.

The public interest requirements broadcasters face today are almost unrecognizable compared with the form they took before Reagan-era deregulation; much of the license renewal process has been reduced to answering a few questions on an online form. Regulating social media platforms exactly the way broadcasters were regulated decades ago makes no sense. But the old debates over broadcasting could help lawmakers grapple with the problem of reining in today's new communications giants. They need to recognize that the critical communications infrastructure Americans rely on to stay informed should have guardrails that keep it operating, at least to some degree, in the public interest. Today, that could mean rules against the spread of hate speech that is bound to reach a wide audience, such as from a channel with more than a certain number of subscribers. (If a channel has fewer, and if a platform does not boost it in the algorithm, it becomes less critical to deal with.) It could mean requiring regular reports on efforts to stamp out viral misinformation, or on the removal of foreign and domestic actors who create fake accounts intended to meddle in electoral politics. It could mean clear requirements for the responsible handling of user data, or bans on ads that discriminate, as in housing and employment.

Communications infrastructure, especially when the technology has a broadcast function, is powerful. People can use it to misinform and spread hatred to audiences of millions every day. People also rely on social media for information about where to send their children to school, how to handle their health care, and whom to vote for. Politicians now wondering what to do about the mess social media has become could find inspiration in the policies that guided broadcast technology for decades, policies grounded in the idea that companies that want to make as much money as possible will always prioritize profit over everything else. Protecting the safety of their users will always come second. That is what laws are supposed to be for. It is time we had one.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
