Two different proposals to amend Section 230 share a similar goal: damaging online users’ speech

This statement was originally published on eff.org on 18 June 2020.

Whether we know it or not, all Internet users rely on multiple online services to connect, engage, and express themselves online. That means we also rely on 47 U.S.C. § 230 (“Section 230”), which provides important legal protections when platforms offer their services to the public and when they moderate the content that relies on those services, from the proverbial cat video to an incendiary blog post.

Section 230 is an essential legal pillar for online speech. And when powerful people don’t like that speech, or the platforms that host it, the provision becomes a scapegoat for just about every tech-related problem. Over the past few years, those attacks have accelerated. On Wednesday, we saw two of the most dangerous proposals yet: one from the Department of Justice and the other from Sen. Josh Hawley.

The proposals take different approaches, but they both seek to create new legal regimes that will allow public officials or private individuals to bury platforms in litigation simply because they do not like how those platforms offer their services. Basic activities like offering encryption, or editing, removing, or otherwise moderating users’ content could lead to years of legal costs and liability risk. That’s bad for platforms – and for the rest of us.

DOJ’s Proposal Attacks Encryption and Would Make Everyone’s Internet Experience Less Secure

The Department of Justice’s Section 230 proposal harms Internet users and gives the Attorney General more weapons to retaliate against online services he dislikes. It proposes four categories of reform to Section 230.

First, it claims that platforms need greater incentive to remove illegal user-generated content and proposes that Section 230 should not apply to what it calls “Bad Samaritans.” Platforms that knowingly host illegal material or content that a court has ruled is illegal would lose protections from civil liability, including for hosting material depicting terrorism or cyber-stalking. The proposal also mirrors the EARN IT Act by attacking encryption: it conditions 230 immunity on whether the service maintains “the ability to assist government authorities to obtain content (i.e., evidence) in a comprehensible, readable, and usable format pursuant to court authorization (or any other lawful basis).”

Second, it would allow the DOJ and other federal agencies to initiate civil enforcement actions against online services they believe are hosting illegal content.

Third, the proposal seeks to “clarify that federal antitrust claims are not covered by Section 230 immunity.”

Finally, the proposal eliminates key language from Section 230 that gives online services the discretion to remove content they deem to be objectionable and defines the statute’s “good faith” standard to require platforms to explain all of their decisions to moderate users’ content.

The DOJ’s proposal would eviscerate Section 230’s protections and, much like the EARN IT Act introduced earlier this year, is a direct attack on encryption. Like EARN IT, the DOJ’s proposal does not use the word encryption anywhere. But in practice the proposal ensures that any platform providing secure end-to-end encryption would face a torrent of litigation—surely no accident given the Attorney General’s repeated efforts to outlaw encryption.

Other aspects of the DOJ’s “Bad Samaritan” proposals are problematic, too. Although the proposal claims to target only platforms that knowingly host illegal material online, it also reaches content that may be offensive but is nonetheless protected by the Constitution.

Additionally, requiring platforms to take down content deemed illegal via a court order will result in a significant increase in frivolous litigation about content that people simply don’t like. Many individuals already seek to use default court judgments and other mechanisms as a means to remove things from the Internet. The DOJ proposal requires platforms to honor even the most trollish court-ordered takedown.

Oddly, the DOJ also proposes punishing platforms for removing content that is not illegal. Under current law, Section 230 gives platforms the discretion to remove harmful material such as spam, malware, or other offensive content, even if it isn’t illegal. We have many concerns about those moderation decisions, but removing that discretion altogether could make everyone’s experiences online much worse and potentially less safe.

It’s also unconstitutional: Section 230 notwithstanding, the First Amendment gives platforms the discretion to decide for themselves the type of content they want to host and in what form.

The proposal would also empower federal agencies, including the DOJ, to bring civil enforcement actions against platforms. Like last month’s Executive Order targeting online services, this would give the government new powers to target platforms that government officials or the President do not like. It also ignores that the DOJ already has plenty of power. Because Section 230 exempts federal criminal law, it has never hindered the DOJ’s ability to criminally prosecute online services engaging in illegal activity.

The DOJ would also impose onerous obligations that would make it incredibly difficult for any new platform to compete with the handful of dominant platforms that exist today. For example, the proposal requires all services to provide “a reasonable explanation” to every single user whose content is edited, deleted, or otherwise moderated. Even if services could reasonably predict what qualifies as a “reasonable explanation,” many content moderation decisions are not controversial and do not require any explanation, such as when services filter spam.

Sen. Hawley’s Proposed Legislation Turns Section 230’s Legal Shield Into An Invitation to Litigate Every Platform’s Moderation Decisions

Sen. Hawley’s proposed legislation, for its part, takes aim at online speech by fundamentally reversing the role Section 230 plays in how online platforms operate. As written, Section 230 generally protects platforms from lawsuits based either on their users’ content or actions taken by the platforms to remove or edit users’ content.

Hawley’s bill eviscerates those legal protections for large online platforms (platforms that average more than 30 million monthly users or have more than $1.5 billion in annual global revenue) by replacing Section 230’s simple standard with a series of detailed requirements. Platforms that meet those thresholds would have to publish clear policies describing when and under what circumstances they moderate users’ content. They must then enforce those policies in good faith, which the bill defines as acting “with an honest belief and purpose,” observing “fair dealing standards,” and “acting without fraudulent intent.” A platform fails to meet the good faith requirement if it engages in “intentionally selective enforcement” of its policies or fails to honor public or private promises it makes to users.

Some of this sounds OK on paper – who doesn’t want platforms to be honest? In practice, however, it will be a legal minefield that will inevitably lead to overcensorship. The bill allows individual users to sue platforms they believe did not act in good faith and creates statutory damages of up to $5,000 for violations. It would also permit users’ attorneys to collect their fees and costs in bringing the lawsuits.

In other words, every user who believes a platform’s actions were unfair, fraudulent, or otherwise not done in good faith would have a legal claim against a platform. And there would be years of litigation before courts would decide standards for what constitutes good faith under Hawley’s bill.

Given the harsh reality that it is impossible to moderate user-generated content at scale perfectly, or even well, this bill means full employment for lawyers, but little benefit to users. As we’ve said repeatedly, moderating content on a platform with a large volume of users inevitably results in inconsistencies and mistakes, and it disproportionately harms marginalized groups and voices. Further, efforts to automate content moderation create additional problems because machines are terrible at understanding the nuance and context of human speech.

This puts platforms in an impossible position: moderate as best you can, and get sued anyway – or drastically reduce the content you host in the hopes of avoiding litigation. Many platforms will choose the latter course, and avoid hosting any speech that might be controversial.

Like the DOJ’s proposal, the bill also violates the First Amendment. Here, it does so by drawing distinctions between particular speakers. Those distinctions would trigger strict scrutiny under the First Amendment, a legal test that requires the government to show that (1) the law furthers a compelling government interest and (2) the law is narrowly tailored to achieve that interest. Sen. Hawley’s bill fails both prongs: although there are legitimate concerns about the dominance of a handful of online platforms and their power to limit Internet users’ speech, there is no evidence that requiring private online platforms to practice good-faith content moderation represents a compelling government interest. Even assuming there is a compelling interest, the bill is not narrowly tailored. Instead, it broadly interferes with platforms’ editorial discretion by subjecting them to endless lawsuits from any individual claiming they were wronged, no matter how frivolous.

As EFF told Congress back in 2019, the creation of Section 230 has ushered in a new era of community and connection on the Internet. People can find friends old and new over the Internet, learn, share ideas, organize, and speak out. Those connections can happen organically, often with no involvement on the part of the platforms where they take place. Consider that some of the most vital modern activist movements – #MeToo, #WomensMarch, #BlackLivesMatter – are universally identified by hashtags. Forcing platforms to overcensor their users, or worse, giving the DOJ more avenues to target platforms it does not like, is never the right decision. We urge Congress to reject both proposals.
