The recent proposals from the Department of Justice for reform of Section 230 of the Communications Decency Act would require social media companies, as a condition of receiving Section 230 immunity from liability for user content, to publicly disclose their content moderation rules. DOJ would also impose the following consistency requirement: “any restrictions of access must be consistent with those terms of service or use and with any official representations regarding the platform’s content-moderation policies.”
A similar requirement for consistency appears in the White House Executive Order on Preventing Online Censorship, where the “good faith” requirement of Section 230(c)(2)(A) (granting immunity for removals when the platform acts in good faith) is interpreted to mean that the platform must act in a manner that is not “deceptive, pretextual, or inconsistent with a provider’s terms of service.” Pursuant to this Executive Order, the National Telecommunications and Information Administration filed a submission with the Federal Communications Commission asking for a rulemaking interpreting the good faith requirement in a similar way – as holding when the platform “restricts access to or availability of material or bars or refuses service to any person consistent with (italics added) publicly available terms of service or use that state plainly and with particularity the criteria the interactive computer service employs in its content-moderation practices.”
This consistency requirement seems like a bit of good common sense, but in fact it turns out to be something quite a bit stronger. A platform duty of consistency with its own established standards and enforcement protocols would require two things. First, a platform would be obliged to take down a post, or take other appropriate enforcement action, only if it can show that the content violates one of its established rules. Second, if a particular post violates one of its standards, the platform would be obliged to take it down or take other appropriate enforcement action. In short, a consistency requirement calls for a platform to take action against all and only the material that violates its own content standards and enforcement protocols.
For our purposes, the prong of the consistency duty of interest is the one that says a platform can take down material only if that material violates its standards; otherwise, it has to carry the material. This prong of a platform consistency duty implies a user right to carriage on social media platforms unless the platform can demonstrate that the user’s message violates its public content standards.
To see how strong this access requirement is, consider what would happen if it applied to the traditional media. For one thing, Bari Weiss wouldn’t have had to leave the New York Times in a huff. Instead, she could just lob whatever op-ed columns she wanted at the opinion page editor, and when these proposed columns were rejected, she could go to court to object that the rejection was not “consistent” with the standards of the Times editorial page. If the court agreed, it would then order the Times to run the column. Conservatives might be delighted at this outcome until they realize it would also give Noam Chomsky access to Fox’s airwaves!
Of course, we’ve already been through an extended discussion of access to the traditional media, and it was closed 46 years ago by the Supreme Court’s 1974 decision in Miami Herald v. Tornillo, which ruled that such an access requirement was an unconstitutional intrusion into the editorial rights of newspapers. The only question left is whether an access requirement for platforms would fare any better.
It might seem that the DOJ requirement, at least, escapes from this First Amendment trap because it is phrased not as a free-standing requirement, but as a condition on platform receipt of a government benefit, namely, Section 230 immunity from certain forms of liability for user content. But as I’ve argued elsewhere, these DOJ proposals are best evaluated when they have been stripped of their irrelevant context of Section 230 reform, because they do not provide an effective enforcement mechanism. Why should platforms care about losing immunity from liability for declining to carry a certain message, when there is no underlying liability for declining to carry it in the first place?
But even as a condition on a grant of a government benefit, an access requirement might face constitutional challenges, although the argument is more detailed and a lot more needs to be said than what follows. In general, under the doctrine of unconstitutional conditions, the government cannot make receipt of a benefit conditional on the surrender of constitutional rights. Welfare payments, for instance, cannot be conditioned on giving up the right to vote or to a fair trial.
If imposing a free-standing access requirement on a platform is an unconstitutional abridgement of its editorial discretion, then imposing it as a condition for Section 230 immunity is likely to be a constitutional violation as well. One relevant example comes from public broadcasting. When Congress tried to take away the right of public television and radio stations to engage in editorializing if they accepted Federal support, the Supreme Court in its 1984 decision in FCC v. League of Women Voters ruled that this conditional ban on editorializing violated the First Amendment because it far exceeded what was necessary to achieve any substantial government interest, such as preventing government interference in public broadcasting content.
So we should evaluate the DOJ and related Administration proposals for social media consistency as free-standing requirements. I sympathize with the instinct behind a consistency requirement, and in an article written over a year ago, I flirted with it. But it leads to intolerable free expression paradoxes and should be rethought. It is worth reviewing how policymakers got into this conundrum.
What Motivates A Consistency Requirement?
In that earlier article, I argued for a consumer protection approach to content moderation regulation, which still seems right to me. It would impose due process requirements such as notice, explanation, and appeal rights when social media companies take action, or refuse to take action, in connection with material posted on their platforms by their users. The article also suggested, however, a version of a consistency requirement. It said that, as part of its responsibility to prevent deceptive acts or practices, the Federal Trade Commission should be empowered by a new law to “investigate whether a company acts in accordance with its disclosed content moderation program.” The agency must “require a platform to adhere to its disclosed policy.”
This consistency rule and the grant of enforcement authority were intended to address the problem that a social media platform might announce a robust content moderation program with, for instance, an explicit policy of banning hate speech, but then do nothing to enforce the policy. This would amount to a bait-and-switch, luring in users with an essentially misleading representation that the platform had no intention of living up to. It seemed like an almost paradigm case of deceptive conduct, which the FTC should be able to address using its authority to prevent deceptive acts or practices.
Some kind of consistency requirement, moreover, seems necessary to respond to the common complaint of platform bias against conservative speakers. Suppose a platform has a rule against hate speech, and right-wing speakers say it was applied inconsistently to penalize them but not equivalent left-wing speakers. The platform, in response to complaints, says the speakers are not equivalent: the right-wing speech was worse. If the FTC or the courts cannot second-guess a platform’s content judgments, how is that different from the status quo, in which the aggrieved speakers think they have no recourse against arbitrary platform decision making?
The Policymaker’s Consistency Dilemma
But here’s the problem. How is the FTC to enforce a requirement that platforms do what they say, when what they say is that they ban hate speech, without itself making judgments about what is and is not hate speech? If the agency is to make sure that platforms do not deceive their users in their content moderation disclosures, it must assess the fit between what a platform says it will permit or remove and what it actually does in practice. But the only way to determine this fit is to examine the content judgments of the platforms and second-guess whether they accorded with the platform’s announced content rules.
The problem with this role for a Federal regulatory agency is not hard to see. When did the FTC become an expert in interpreting a platform’s hate speech standards? Or platform rules related to disinformation campaigns or terrorist material or the thousand and one content issues that arise daily in the course of running even a modestly comprehensive platform content moderation program? And why is the FTC’s interpretation of the platform’s standards any better than the platform’s own interpretation? Will the FTC’s interpretation be subject to judicial review? If so, this just pushes the question to a higher level. When did the courts develop this interpretive expertise? Why is theirs any better than the platform’s? Even more worrisome, of course, is that almost none of the speech subject to platform content moderation rules would be illegal under U.S. law, and so the entire exercise, as we have seen, becomes fraught with First Amendment issues.
I tried to fudge this issue in the earlier article by saying that the agency would not oversee the fit between a platform’s rules and its conduct but would instead defer to the good faith judgment of the platform. But unless the agency on occasion overrode the platform’s judgment, this would amount to an abdication of any enforcement role. It would be as if, in its enforcement of the rules against deceptive advertising, the FTC always deferred to the advertiser’s good faith judgment about the validity of its advertising.
As a result, policymakers seeking to establish an effective content moderation regulatory regime are in a pickle. If they have any consistency requirement at all, as the DOJ, the White House, and NTIA recommend and as I seemed to suggest in my earlier article, it has to be enforced. But this means a regulatory agency or the courts, or both, will have to stand in judgment on platform content moderation decisions, creating perhaps insuperable First Amendment issues.
Some Platform Violations of European Consistency Requirements
To see more clearly how a consistency rule becomes a must-carry requirement and how it might work in practice, let’s examine the recent Italian case of CasaPound v. Facebook. In December 2019, the Court of Rome issued a temporary injunction ordering Facebook to reactivate the account of the Italian neo-fascist party CasaPound. Facebook had deactivated the party’s account on the grounds that CasaPound’s posts contained hate speech and incitement to violence, in violation of Facebook’s Community Standards. The Court found, however, that this action violated CasaPound’s constitutional rights as a political party to participate in public debate and “contribute by democratic means to national policy.”
The Court did not limit itself to this constitutional argument, however. It also determined that Facebook was required by Italian law to prove violations of its community standards before removing any content, and that Facebook had failed to prove a causal link between the violence perpetrated by CasaPound and its online content. The court, in other words, thought Facebook had misapplied its own standards against hate speech and incitement to violence, and it substituted its own interpretation of what those rules must mean. It based its authority to second-guess Facebook’s application of its own content rules in part on Facebook’s dominance as a social media platform.
As Matthias C. Kettemann and Anna Sophia Tiedeke note in a recent article, German courts have issued similar injunctions requiring Facebook to restore content it had deleted, although these decisions seem to focus more on the impropriety of Facebook taking down speech that is legal under the German constitution, even if such speech clearly violated Facebook community standards. In one case, the court ordered Facebook to allow a right-wing party to access its Facebook page and resume posting after Facebook had banned the group for violation of its hate speech standard. Social media companies, these authors conclude, have the right under German law to delete legal user content, “but only as long as deletion is not performed arbitrarily, and users are not barred from the service without recourse.” This duty to avoid arbitrary content moderation decisions follows in part, according to the German courts, from the fact that a social media company is a “provider of essential services” and so takes on some of the responsibilities of the state.
The European Approach Wouldn’t Really Work in the U.S.
As these cases reveal, when examined more closely in practice, a consistency requirement works like a carriage right. It gives platform users a right to be carried on a social media platform unless reasoned decisions made in accordance with public standards justify removal. Courts enforce this carriage right by requiring social media companies to restore content they have deleted for violation of their content rules.
Despite the formidable First Amendment obstacles described earlier, scholars such as Daphne Keller have looked hard for a basis for a platform carriage right in U.S. law. One possibility is to think of platforms as public fora for First Amendment purposes. But the courts have already ruled that out. In Prager University v. Google, the U.S. Court of Appeals for the Ninth Circuit said that a social media platform like YouTube is “not a public forum subject to judicial scrutiny under the First Amendment.”
The closest examples of a platform carriage right in current statutory law are the requirements granting political candidates reasonable access to broadcast stations and local cable operators, and barring censorship of their campaign messages. Broadcasters’ rights to carriage on local cable systems are another example. The Supreme Court in Turner v. FCC upheld this carriage right against a First Amendment challenge in part because cable companies control a “critical pathway of communication” for local broadcast stations.
Those seeking to establish a similar must-carry rule for platforms might consider whether the increasing dependence of speakers such as local newspapers on social media to reach their audiences could provide a similar defense of a platform carriage right against a First Amendment challenge.
Platform Moderation Regulation Without A Consistency Requirement: The PACT Act
But, to be clear-eyed about the current state of U.S. First Amendment jurisprudence, which prioritizes the speech rights of media companies and platforms, this is a long shot. Perhaps the only way forward consistent with the First Amendment would be to eschew all notions of consistency in content moderation regulation. This is the choice Senators Brian Schatz and John Thune make in their bipartisan legislation, the Platform Accountability and Consumer Transparency (PACT) Act. The bill would mandate transparency and due process requirements for social media companies, to be enforced by the Federal Trade Commission as free-standing requirements. (Other parts of the PACT Act would reform Section 230 liability.)
The bill’s authors seem aware of the speech problems created if a company’s voluntarily adopted content rules are interpreted and enforced by a government agency. The bill contains no requirement that a social media company’s content moderation decisions must be based on, adhere to, or be consistent with its published standards. Daphne Keller’s recent review of the bill seems to miss this. Although she recognizes the bill’s focus on process and not the substance of platform content decisions, she suggests at one point that the bill would require platforms to “state the rules clearly and enforce them consistently.” Such a consistency requirement is nowhere in the bill.
Instead, the bill’s requirements are entirely oriented to process, mandating publication of an acceptable use policy, a complaint process requiring a response to complaints within 14 days, and the production of a quarterly public transparency report. The enforcement provision, Section g(1)(A) of the bill, is very precise in limiting the FTC’s enforcement powers to just the requirements for a timely response to complaints and for production of the quarterly transparency report. The mandate to publish an acceptable use policy is not subject to FTC enforcement. Furthermore, to ensure against agency second-guessing of platforms, Section g(1)(B) of the bill specifically and explicitly limits the agency’s enforcement authority, saying the bill does not “authorize the Commission to review any action or decision by a provider of an interactive computer service related to the application of the acceptable use policy of the provider.”
This seems like a thoughtful way forward. There is little danger under the bill that the agency could order a platform to take down a post or mandate that an account be restored. But a bill’s strengths are often right next to its weaknesses. In this case, the bill has simply embraced the other horn of the policymaker’s dilemma posed earlier. Abandoning a consistency requirement and limiting agency enforcement creates its own difficulties. Without agency enforcement of the requirement that a platform issue and adhere to its own standards, a company could simply put out whatever rules it wants and then follow them, or not, as it sees fit in specific cases. It could even say in its rules that it reserves the right to take down any content for any reason whatsoever, thereby allowing itself to engage in essentially arbitrary, unreasoned content moderation decision making. The transparency reports would let the public know about these abuses but would not themselves force any changes in platform conduct.
Agency oversight still seems necessary for the effective operation of any due process requirement for platform content moderation. It is needed to put teeth in even the basic requirement that social media companies publish the content rules that govern their content moderation decisions. How to do this without creating insuperable First Amendment issues remains a problem in search of a solution.
Legislation establishing a regulatory framework for social media content moderation is urgently needed. Content moderation is too important for the future of effective political decision making in the United States to be left solely to the unfettered discretion of increasingly powerful digital platforms. But one reason Congress has an established legislative process of hearings and consultations with the public and affected parties is to make sure that all issues are properly aired and vetted so that the final legislation is well-crafted to achieve its intended results. This might well be a custom more honored in the breach than the observance these days, but in this case, a legislative shortcut would not be a good idea. A full, fair and open Congressional process is needed to help resolve these thorny issues.