Big Tech companies, including Google and Twitter, are pulling the plug on disfavored posts, websites, and even people. They rely on section 230 of the 1996 Communications Decency Act to justify censorship.
One way around section 230 is to enact state laws that ban viewpoint discrimination by tech companies. I discussed that project here and here. John followed up with this post about his efforts to advance such legislation in Minnesota.
Prof. Phillip Hamburger suggests another approach. In this Wall Street Journal op-ed, he argues that “the Constitution can crack Section 230,” by causing courts to construe it as not providing Big Tech with a license to censor with impunity.
Section 230 states:
No provider or user of an interactive computer service shall be held liable on account of . . . any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.
But can the government really privatize the censorship of constitutionally protected material? Hamburger thinks not.
He notes that “seventeenth-century censorship, which the First Amendment clearly prohibited, was also imposed largely through private entities, such as universities and the Stationers’ Company, England’s printers’ trade guild.” Hamburger continues:
Some of the material that can be restricted under Section 230 is clearly protected speech. Consider its enumeration of “objectionable” material. The vagueness of this term would be enough to make the restriction unconstitutional if Congress directly imposed it. That doesn’t mean the companies are violating the First Amendment, but it does suggest that the government, in working through private companies, is abridging the freedom of speech.
This constitutional concern doesn’t extend to ordinary websites that moderate commentary and comments; such controls are their right not only under Section 230 but also probably under the First Amendment. Instead, the danger lies in the statutory protection for massive companies that are akin to common carriers and that function as public forums.
The First Amendment protects Americans even in privately owned public forums, such as company towns, and the law ordinarily obliges common carriers to serve all customers on terms that are fair, reasonable and nondiscriminatory. Here, however, it is the reverse. Being unable to impose the full breadth of Section 230’s censorship, Congress protects the companies so they can do it.
Some Southern sheriffs, long ago, used to assure Klansmen that they would face no repercussions for suppressing the speech of civil-rights marchers. Under the Constitution, government cannot immunize powerful private parties in the hope that they will voluntarily carry out unconstitutional policy.
There is also the question of whether Congress has the power under the Commerce Clause to immunize restrictions by carriers on constitutionally protected speech. Hamburger writes:
The expansion of the commerce power to include regulation of speech is . . . worrisome. This is not to dispute whether communication and information are “commerce,” but rather to recognize the constitutional reality of lost freedom. The expansion of the commerce power endangers Americans’ liberty to speak and publish.
That doesn’t necessarily mean Section 230 is unconstitutional. But when a statute regulating speech rests on the power to regulate commerce, there are constitutional dangers, and ambiguities in the statute should be read narrowly.
Hamburger identifies several ambiguities in Section 230. First, what does Section 230(c) mean when it protects tech companies from being “held liable” for restricting various sorts of speech? This is widely assumed to mean they can’t be sued. But Hamburger points out that the word “liable” has two meanings.
In a civil suit, a court must first consider whether the defendant has violated a legal duty or someone else’s right and is therefore legally responsible. If the answer is yes, the court must decide on a remedy, which can include damages, injunctive relief and so forth.
The term “held liable” as used in Section 230(c) can refer to either stage. Thus, the protection of tech companies from being “held liable” may merely mean they can’t be made to pay damages, not that they can’t be held responsible and subjected to other remedies, such as injunctive relief.
The narrower interpretation — that only damages are barred — seems more plausible, if only because a mere ambiguity seems a weak basis for barring a vast class of plaintiffs from any recourse to the courts on a matter as central as their speech.
There is also the matter of the “material” that the companies can restrict without fear of being sued for damages. Hamburger explains:
Section 230(c) protects them for “any action voluntarily taken in good faith to restrict access to or availability of material” of various sorts. Even before getting to the enumerated categories of material, it is important to recognize that the statute refers only to “material.” It says nothing about restricting persons or websites.
To be sure, the statute protects the companies for “any action” restricting the relevant material, and if taken literally “any action” could include various nuclear options, such as barring persons and demonetizing or shutting down websites. But the term “any action” can’t be taken to include actions that restrict not only the pertinent material but also other things. “Any action” has to be focused on such material . . . [not] other things, such as websites and persons.
There’s a whole lot more about the statute in Hamburger’s piece. I encourage you to read the whole thing.
The bottom line is that, given the constitutional concerns about Section 230(c), courts should read it narrowly. And read narrowly, the section offers Big Tech considerably less leeway to censor than Big Tech now exercises.