Facebook, Twitter and Google CEOs Testify Before Congress About Disinformation




Members of the House Energy and Commerce Committee are expected to press Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey on their platforms’ efforts to stem baseless election fraud claims and vaccine skepticism. Opaque algorithms that prioritize user engagement and amplify disinformation could also come under close scrutiny, a committee memo suggested.

The tech platforms, which had already faced intense pressure to tackle disinformation and foreign interference ahead of the 2020 election, came under even greater scrutiny in the months that followed. Although the companies rolled out new measures to crack down on electoral conspiracy theories, those steps were not enough to stop diehard supporters of President Donald Trump from storming the U.S. Capitol.

The hearing also marks the CEOs’ first return to Congress since Trump was banned or suspended from their respective platforms in the wake of the Capitol riots. In their prepared remarks, some of the executives address the events of January 6 head-on.

“The attack on Capitol Hill was a horrific assault on our values and our democracy, and Facebook is committed to helping law enforcement bring the insurrectionists to justice,” Zuckerberg says in his testimony. But he also adds, “We are doing more than any other company to fight disinformation.”

The hearings coincide with legislation under consideration in both the House and Senate to rein in the tech industry. Some bills target the companies’ economic dominance and alleged anti-competitive practices. Others focus on the platforms’ approach to content moderation or data privacy. The various proposals could introduce stringent new requirements for technology platforms, or expose them to increased legal liability in ways that reshape the industry.

For leaders in the hot seat, Thursday’s session may also be their last chance to personally present a case to lawmakers before Congress embarks on potentially sweeping changes to federal law.

At the heart of the coming policy battle is Section 230 of the Communications Decency Act, the signature liability shield that grants websites legal immunity for much of the content posted by their users. Members of both parties have called for updates to the law, which has been interpreted broadly by the courts and is credited with enabling the development of the open internet.


Written testimony released by the CEOs ahead of Thursday’s high-profile hearing sketches out areas of potential common ground with lawmakers and alludes to issues on which the companies intend to work with Congress – and those on which Big Tech is likely to push back.

Zuckerberg plans to advocate for narrowing the scope of Section 230. In his written remarks, Zuckerberg says Facebook favors a form of conditional liability, under which online platforms could be sued over user content if the companies fail to follow certain best practices established by an independent third party.

The other two CEOs do not wade into the Section 230 debate or discuss the role of government in such granularity, but they do offer their general visions for content moderation. Pichai’s testimony calls for clearer content policies and a way for users to appeal content decisions. Dorsey’s testimony reiterates his calls for more user-driven content moderation, along with better settings and tools that let users personalize their online experience.

By now, the CEOs have considerable experience testifying before Congress. Zuckerberg and Dorsey most recently appeared before the Senate in November to discuss content moderation, and before that, Zuckerberg and Pichai testified in the House last summer on antitrust matters.

In the days leading up to Thursday’s hearing, the companies have argued that they acted aggressively to counter disinformation. Facebook said Monday that it removed 1.3 billion fake accounts last fall and that it now has more than 35,000 people working on content moderation. Twitter said this month that it would begin applying warning labels to misinformation about coronavirus vaccines, and that repeated violations of its Covid-19 policies could lead to permanent bans. YouTube said this month that it had removed tens of thousands of videos containing Covid vaccine misinformation, and in January, following the Capitol riots, it announced it would restrict channels that share false claims doubting the outcome of the 2020 election.

But those claims of progress are unlikely to appease committee members, whose memo cited multiple research papers indicating that disinformation and extremism remain rampant on the platforms.


