Free Speech in the Age of Digital Platforms

The relevant question is this: how should content platforms go about regulating users' speech, if at all?

Last week Facebook, Google, and Apple removed videos and podcasts by the prominent conspiracy theorist Alex Jones from their platforms (Twitter did not). Their actions may have prompted increased downloads of Jones’ Infowars app. Many people are debating these actions, and rightly so. But I want to look at the governance issues related to the Alex Jones imbroglio.

The tech companies have the right to govern speech on their platforms; Facebook has practiced such “content moderation” for at least a decade. The question remains: how should they govern the speech of their users?

Private Actors and the Limits of "Free" Speech

The question has a simple, plausible answer. Tech companies are businesses. They should maximize value for their shareholders. The managers of the platform are agents of the shareholders; they have the power to act on their behalf in this and other matters. (If, on the other hand, their decision to ban Jones were driven by political animus, they would be shirking their duties and imposing agency costs on shareholders.) As private actors, the managers are not constrained by the First Amendment. They could and should remove Alex Jones because they reasonably believe he drives users off the platform and thereby harms shareholders. End of story.

For many libertarians, this story will be convincing. But others, less inclined to defer to private economic judgments, may not be persuaded. I see two limits on business logic as a way of governing social media: free speech and fear.

Elites in the United States value free speech as a cultural norm, apart from the legal limits the First Amendment places on government. Platform managers are free of the First Amendment, but not of that cultural expectation. Fear, the second limit, shapes the online struggles over speech.

The right believes that platform managers are overwhelmingly left-leaning and responsive to the values of the left. Many on the right fear that tech companies are trying to drive everyone on the right off their platforms and into the political wilderness (or worse).

The left fears that giving people like Alex Jones access to a mainstream audience will lead to electoral victories by authoritarians. And yet, if the left were to gain enough power to fulfill the right’s fears about de-platforming, the right might fight back and force the platforms to remove its enemies, a victory that would leave the left in the wilderness (or worse). The cultural power of the left might yet be trumped by the political power of the right and, on another day, vice versa.

The Issue of Legitimacy

Such is the foundation of the platforms’ dilemma. Protecting free speech short of incitement to violence heightens fear, while tamping down fear offends free speech. How to cope with the dilemma?

The platforms need legitimacy for their governance. In other words, they need for users (and others) to accept their right to govern (including the power to exclude). Legitimacy would confer authority on the decisions of the platform managers. Max Weber distinguished three kinds of authority rooted in different ways of gaining legitimacy. Two of the three seem irrelevant here. Users are unlikely to accept content moderation because Mark Zuckerberg is a special person with unusual powers (charismatic authority). They also are unlikely to accept Facebook’s power because things have always been done that way (traditional authority). What Weber called rational-legal authority seems to be the only choice for the platforms. In other words, they need a process (or due process) that looks like the rule of law (and not the rule of tech employees).

Facebook seems to be trying to establish rational-legal authority. It has set out Community Standards that guide its governance of speech. Why should that “basic law” be accepted by users? One answer would be the logic of exchange. When you use Facebook for free, you give it, in return, your data and your consent to its basic law. That looks a lot like the tacit consent theory that has troubled social contract arguments for political authority. In any case, Facebook itself sought comments from various groups and individuals (that is, stakeholders) about the Community Standards. The company wanted more than a simple exchange.

But do the Community Standards respect the culture of free speech? Facebook has banned speech that includes “direct attacks on people based on what we call protected characteristics—race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.” The speech banned here is often, if loosely, called “hate speech.” Its basic law thus contravenes American free speech legal doctrine. Hate speech is protected by the First Amendment, but not by Facebook.

I conclude that either Facebook’s standard violates the culture of free speech, or it reflects a difference between the culture of free speech (which, on this view, does not protect hate speech) and American First Amendment legal doctrine. If the latter, Facebook’s recognition of the difference will widen the gap between the culture and the law.

Facebook’s idea of hate speech may be a relatively limited and stable exception to a generally open platform. But community standards for social media are said to be “living documents” that change over time. And hate speech itself is an ambiguous term. All mention of The Bell Curve might one day be removed from Facebook as a direct attack on African Americans. Removing Alex Jones should not prompt fear in any reasonable conservative. Banning Charles Murray should and would. And what of the vast landscape between those two figures? American free speech doctrine largely precludes drawing such lines. Facebook is committed to doing so.

Cue the Politicians

The political context matters in applying standards. In one way, Facebook is less like a court and more like a legislature: it is beset by highly organized interests that seek to convince it to remove content. As with legislatures, those who are organized are those who are heard, and policy reflects what policymakers hear.

With regard to content moderation, people who lean left are organized and on the inside. This observation applies to the employees of tech firms, the academic experts from whom they seek advice, and the groups organized to guide their decisions about what is removed from the platforms. In contrast, the right is on the outside and less organized, more or less reduced to having elected officials complain about the platforms. How long before that cheap talk gives way to serious action? This asymmetry between insiders and outsiders is not good for the freedom of speech. It is also not good for the legitimacy of content moderation.

As a legal matter, social media companies have broad discretion to police their platforms. That is how it should be. But they need to make their authority legitimate. If they do not, elected officials may one day act to compel fairness or assuage fears. As always, that will not be good news for the freedom of speech or limited government.

Reprinted from Cato at Liberty.