Facebook Bans Donald J. Trump for Two Years: The Anatomy of an Unprecedented Social Media Decision

By Stephen Rohde
June 16, 2021

AS THE VIOLENT January 6 insurrection at the US Capitol reached a fever pitch, President Donald J. Trump took to Facebook to praise the insurrectionists. “We love you.” “You’re very special.” You’re “great patriots.” “Remember this day forever.” The next day, Facebook indefinitely banned Trump from Facebook and its subsidiary, Instagram. When Trump and his supporters cried foul, Facebook announced on January 21 that it had referred the case to its new quasi-judicial Oversight Board, made up of 20 former political leaders, law professors, human rights activists, and journalists.

On May 5, the Oversight Board upheld Facebook’s decision to suspend Trump but ruled that it was inappropriate for Facebook to make the suspension indefinite. The Board insisted that, within six months, “Facebook review this matter to determine and justify a proportionate response that is consistent with the rules that are applied to other users of its platform.”

Less than a month later, on June 4, Facebook accepted the Board’s decision and announced “new enforcement protocols to be applied in exceptional cases such as this.” Given “the gravity of the circumstances that led to Mr. Trump’s suspension, we believe his actions constituted a severe violation of our rules which merit the highest penalty available under the new enforcement protocols.” Facebook suspended his accounts for two years, until January 7, 2023, a period that even casual observers recognized would extend past the midterm elections in November 2022.

At the end of this period, Facebook “will look to experts to assess whether the risk to public safety has receded,” taking into consideration several external factors, including “instances of violence, restrictions on peaceful assembly and other markers of civil unrest.” If Facebook determines “that there is still a serious risk to public safety, we will extend the restriction for a set period of time and continue to re-evaluate until that risk has receded.” When the suspension is eventually lifted, Facebook will impose “a strict set of rapidly escalating sanctions that will be triggered if Mr. Trump commits further violations in [the] future, up to and including permanent removal of his pages and accounts.”

Facebook expressed its gratitude to the Oversight Board but acknowledged “that we did not have enforcement protocols in place adequate to respond to such unusual events.” The next day, in a characteristically confusing rant, Trump railed against Facebook:

They say they may allow me back in two years. I’m not too interested in that. They may allow me back in two years. We got to stop that. We can’t let it happen. So unfair. They are shutting down an entire group of people. Not just me. They are shutting down the voice of a much more powerful and a much larger group.


The Oversight Board’s decision is remarkable and unprecedented. It represents the most comprehensive effort ever undertaken by a private group of experts to evaluate a decision by the most popular social media platform in the world to ban the most powerful political leader in the world. However, the decision is not without its flaws.

The decision reveals some fundamental weaknesses in the structure of the Board that could impair its future decisions: on the crucial question of whether Facebook could indefinitely ban Trump, the Board punted the ball back to Facebook. Also, during the proceedings, Facebook arrogantly refused to answer several key questions posed by the Board, which was powerless to do anything about it. The decision contains a series of critical findings by a minority of the Board. Had they been adopted by the entire Board, the decision to ban Trump would have been significantly strengthened, and Facebook would have been forced to be far more transparent. Unfortunately, neither the number of Board members in the minority nor their identities were disclosed.

Since decisions of the Board are expected to set precedents for future cases, it is imperative that the public be able to track the voting records of the Board’s members to detect any favoritism or other patterns based on their profiles and experience. As with courts of law, including the US Supreme Court, transparency is crucial to maintaining legitimacy.

But importantly, the decision to ban Trump helps put to rest the popular misconception that when Facebook removes content or restricts a user’s access to his or her Facebook page, Facebook is engaging in “censorship.” This is not censorship. As a private company, Facebook is enforcing the Terms of Service to which every user agrees as a condition of using its platforms. While everyone has a right to free speech, no one has an unfettered right to transmit that speech through a private company’s platform. I have a right to write an opinion piece; I don’t have a right to publish it in The New York Times.

Censorship is when an arm of the government — federal, state, or local — restricts someone’s right to free speech, on pain of punishment. Censorship is a violation of the First Amendment. By enforcing private contracts with users, Facebook, Instagram, and other social media platforms are not violating the First Amendment.

Nevertheless, it remains true that profound political, social, and free speech issues are implicated in how a social media giant with 2.8 billion monthly active users, who together view over four billion videos every day, decides what content is sufficiently objectionable to be removed. Consequently, the Oversight Board’s May 5 decision deserves serious examination.

“The Facebook Supreme Court”

Plagued by complaints that Facebook was tolerating too much — or too little — free speech, in November 2018 CEO Mark Zuckerberg approved the creation of an Oversight Board, claiming it would improve the fairness of the appeals process, create oversight and accountability from an outside source, and increase transparency. In January 2020, Facebook appointed British human rights expert Thomas Hughes as director of Oversight Board Administration. On May 6, 2020, Facebook announced the inaugural 20 members of the Board, who started accepting cases on October 22, 2020. The Board operates through a separate foundation, funded by a $130 million grant from Facebook. According to The New Yorker, Board members reportedly earn six-figure salaries. Zuckerberg is fond of calling the Board the “Facebook Supreme Court.” The May 5 decision reveals why that is a serious misnomer.

The members of the Board include a former prime minister of Denmark, a Nobel Peace Prize laureate, and several constitutional law experts and human rights advocates. According to Facebook, Board members have lived in 27 countries and speak at least 29 languages. A quarter of the group and two of the four co-chairs are from the United States, where the company is headquartered. The co-chairs, who selected the other members jointly with Facebook, are former US Federal Circuit Judge and religious freedom expert Michael McConnell, constitutional law expert and Columbia Law School Professor Jamal Greene, Colombian attorney Catalina Botero-Marino, and former Danish Prime Minister Helle Thorning-Schmidt. The Board also includes former European Court of Human Rights Judge András Sajó, Internet Sans Frontières Executive Director Julie Owono, Yemeni Nobel Peace Prize Laureate Tawakkol Karman, Australian internet governance researcher Nicolas Suzor, and Pakistani digital rights advocate Nighat Dad. The other countries represented on the Board are Kenya, Hungary, Indonesia, Israel, Cameroon, France, Brazil, United Kingdom, Senegal, Ghana, Taiwan, and India.

Cases are prepared by five-member panels chosen at random, which must include at least one member from the region where the challenged post originated. If the panel reaches a majority decision, it is reviewed by the entire 20-member Board, which then decides the case by a simple majority vote. The decisions of the Board are binding; however, separate recommendations are not binding. Facebook has seven days to put a Board ruling into effect. As noted, unlike the US Supreme Court, the votes of individual Board members are not revealed to the public.

On January 28, 2021, the Board ruled on its first five “moderation” decisions made by Facebook, overturning four of them and upholding only one. All but one of the rulings were unanimous. The facts in these cases are intriguing and reveal the baffling clash of values at stake when Facebook decides what to delete. This small sample also gives us a clue as to the extent to which the Board is prepared to overturn the original decisions made by Facebook.

  • A post that showed churches in Baku, Azerbaijan, had a caption (in Russian) that “asserted that Armenians had historical ties with Baku that Azerbaijanis didn’t.” The post also referred to Azerbaijanis with the ethnic slur “taziks.” The Board upheld Facebook’s removal of the post, finding that it was harmful to both the safety and the dignity of Azerbaijanis.
  • In October 2020, a Facebook user in Myanmar posted images of the corpse of Kurdish Syrian toddler Alan Kurdi. The images were accompanied by text in Burmese that roughly translated to there being “something wrong” with the mindset of Muslims or Muslim men. The text contrasted terrorist attacks in France over depictions of Muhammad with, in the Facebook user’s view, silence by Muslims in response to the Uyghur genocide in China. The Facebook user argued that this conduct had led to a loss of sympathy for people like the child in the photograph. The Board had the post retranslated. Although the post could be read as an insult directed toward Muslims, the Board thought it could also be read as commentary on a perceived inconsistency of reactions by Muslims to the events in France and China. Thus, the Board held the removal was improper.
  • In October 2020, a Brazilian woman posted a series of images on Instagram that included uncovered breasts with visible nipples. The images were part of an international effort to raise breast cancer awareness. The Instagram user claimed the photographs showed breast cancer symptoms, and indicated this in text in Portuguese. Instagram’s automated review system did not understand the text, leading to the images’ removal. These images were later restored. Facebook asked that the review be dropped as the issue was effectively over, but the Board instead chose to review the action. Because the removal had impacted the human rights of women, the Board decided that the removal was improper and recommended improvements to the decision-making process for the removal of such posts. To prevent this from happening again, the Board recommended that users be informed when automated content review mechanisms are used. In addition, the Board urged that Instagram’s Community Guidelines be revised to allow images with female nipples in posts about breast cancer awareness.
  • In October 2020, a French user posted a video in a Facebook group criticizing the Agence nationale de sécurité du médicament for its refusal to authorize hydroxychloroquine and azithromycin to treat COVID-19. Facebook deleted the post for spreading COVID-19 misinformation. The Board reversed Facebook’s decision, recommending that rather than remove such misinformation, Facebook should correct it. Although Facebook restored the post, it did not take up the separate recommendation of the Board, noting that its approach to COVID-19 misinformation reflects the guidance of the US Centers for Disease Control and Prevention and the World Health Organization.
  • In October 2020, a Facebook user posted a quote incorrectly attributed to Nazi propagandist Joseph Goebbels. Facebook took down the post under its policy prohibiting the promotion of dangerous individuals and organizations, a category that includes Goebbels. The Facebook user appealed, arguing that the post was meant to comment on Donald Trump. The Board found that the evidence supported the Facebook user’s claim, held that the post did not indicate support for Goebbels, and ordered that it be restored. The oversight body also recommended that Facebook indicate to users posting about people like Goebbels that “the user must make clear that they are not praising or supporting them.”


Since January, the Board has decided two more cases:

  • In February 2021, the Board overturned the removal of a post made to a Facebook forum in October 2020 that contained an image of a TV character holding a sheathed sword with Hindi text translated as stating, “[I]f the tongue of the kafir starts against the Prophet, then the sword should be taken out of the sheath.” The post also compared French President Emmanuel Macron to the devil and called for a boycott of French products. The Board decided that the post likely would not cause harm.
  • In April 2021, the Board upheld the removal of a Facebook post by a Dutch Facebook user containing a 17-second video of three adults and a child wearing traditional Dutch Christmas costumes. Two of the adults, who are white, were dressed as Zwarte Piet (Black Pete), with their faces painted black and wearing Afro wigs. The Board decided that although the cultural tradition is not intentionally racist, the use of blackface is a common racist trope.


These examples are but a tiny fraction of the innumerable brain-numbing conflicts with which social media sites are confronted every day.

Facebook’s and Instagram’s Community Standards

Facebook’s Community Standard on Dangerous Individuals and Organizations prohibits “content that praises, supports, or represents events that Facebook designates as terrorist attacks, hate events, mass murders or attempted mass murders, serial murders, hate crimes and violating events.” It also prohibits “content that praises any of the above organizations or individuals or any acts committed by them,” referring to hate organizations and criminal organizations, among others. Instagram’s Community Guidelines state that “Instagram is not a place to support or praise terrorism, organized crime, or hate groups,” and provide a link to the Dangerous Individuals and Organizations Community Standard.

Facebook’s Community Standard on Violence and Incitement states that it “remove[s] content, disable[s] accounts, and work[s] with law enforcement when [it] believe[s] there is a genuine risk of physical harm or direct threats to public safety.” The Standard specifically prohibits “[s]tatements advocating for high-severity violence” and “[a]ny content containing statements of intent, calls for action, conditional or aspirational statements, or advocating for violence due to voting, voter registration or the administration or outcome of an election.” It also prohibits “[m]isinformation and unverifiable rumors that contribute to the risk of imminent violence or physical harm.” Instagram’s Community Guidelines state that content containing “credible threats” will be removed and that “serious threats of harm to public and personal safety aren’t allowed.” Both sections include links to the Violence and Incitement Community Standard.

Facebook’s Terms of Service state that Facebook “may suspend or permanently disable access” to an account if it determines that a user has “clearly, seriously or repeatedly” breached its terms or policies. The introduction to the Community Standards notes that “consequences for violating our Community Standards vary depending on the severity of the violation and the person's history on the platform.”

Instagram’s Terms of Use state that Facebook

can refuse to provide or stop providing all or part of the Service to you (including terminating or disabling your access to the Facebook Products and Facebook Company Products) immediately to protect our community or services, or if you create risk or legal exposure for us, violate these Terms of Use or our policies (including our Instagram Community Guidelines).


Instagram’s Community Guidelines state “[o]verstepping these boundaries may result in deleted content, disabled accounts, or other restrictions.”

The Board examined various values outlined in the introduction to the Community Standards, which Facebook claims guide what is allowed on its platforms. Facebook describes “Voice” as wanting “people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. […] Our commitment to expression is paramount, but we recognize that the Internet creates new and increased opportunities for abuse.” Facebook describes “Safety” as its commitment to “mak[e] Facebook a safe place” and states that “[e]xpression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.” Facebook describes “Dignity” as its belief that “all people are equal in dignity and rights” and states that it “expect[s] that people will respect the dignity of others and not harass or degrade others.”

In addition to these values, on March 16, 2021, Facebook announced its corporate human rights policy, in which it affirmed its commitment to respecting rights in accordance with the UN Guiding Principles on Business and Human Rights (UNGPs). The Board ruled that as “a global corporation committed to the UNGPs, Facebook must respect international human rights standards wherever it operates” and that the Board “is called to evaluate Facebook’s decision in view of international human rights standards as applicable to Facebook.”

Trump v. Facebook

Fully aware of the gravity of the moment, the Board began its May 5 decision to ban Trump by declaring:

Elections are a crucial part of democracy. They allow people throughout the world to govern and to resolve social conflicts peacefully. In the United States of America, the Constitution says the president is selected by counting electoral college votes. On January 6, 2021, during the counting of the 2020 electoral votes, a mob forcibly entered the Capitol where the electoral votes were being counted and threatened the constitutional process. Five people died and many more were injured during the violence.


The Board described in detail the role Trump played on Facebook and elsewhere leading up to the January 6 insurrection. In particular, it noted the following specific posts on the Trump Facebook page:

  • On December 19, 2020, Trump declared, “Statistically impossible to have lost the 2020 Election. Big protest in D.C. on January 6th. Be there, will be wild!”

  • On January 1, 2021, he declared on Facebook: “The BIG Protest Rally in Washington, D.C., will take place at 11.00 A.M. on January 6th. Locational details to follow. StopTheSteal!”

  • On the morning of January 6, Trump attended a rally near the White House, and, as stated in the Board’s decision, he “continued to make unfounded claims that he won the election and suggested that Vice President Mike Pence should overturn President-elect Joe Biden’s victory, a power Mr. Pence did not have. He also stated, ‘we will stop the steal,’ and ‘we’re going to the Capitol.’”


During the subsequent attack on the Capitol, “Trump posted a video and a statement to his Facebook page (which had at least 35 million followers), and the video was also shared to his Instagram account (which had at least 24 million followers).” The posts stated the 2020 election was “stolen” and “stripped away.” The posts also “praised and supported those who were at the time rioting inside the Capitol, while also calling on them to remain peaceful.” In the one-minute video, posted at 4:21 p.m. EST, as the riot raged, Trump said:

I know your pain. I know you’re hurt. We had an election that was stolen from us. It was a landslide election, and everyone knows it, especially the other side, but you have to go home now. We have to have peace. We have to have law and order. We have to respect our great people in law and order. We don’t want anybody hurt. It’s a very tough period of time. There’s never been a time like this where such a thing happened, where they could take it away from all of us, from me, from you, from our country. This was a fraudulent election, but we can't play into the hands of these people. We have to have peace. So go home. We love you. You're very special. You've seen what happens. You see the way others are treated that are so bad and so evil. I know how you feel. But go home and go home in peace.


Facebook removed this post at 5:41 p.m. EST for violating its Community Standard on Dangerous Individuals and Organizations.

Trump posted the following written statement at 6:07 p.m. EST, as police were securing the Capitol:

These are the things and events that happen when a sacred landslide election victory is so unceremoniously viciously stripped away from great patriots who have been badly unfairly treated for so long. Go home with love in peace. Remember this day forever!


At 6:15 p.m. EST, Facebook removed this post for violating its Community Standard on Dangerous Individuals and Organizations. It then imposed a 24-hour block on Trump’s ability to post on Facebook or Instagram.

On January 7, after reviewing Trump's posts, his communications off Facebook, and additional information about the severity of the insurrection’s violence, Facebook extended the block “indefinitely and for at least the next two weeks until the peaceful transition of power is complete.” Facebook cited Trump’s “use of our platform to incite violent insurrection against a democratically elected government.”

The Board noted that in the days following January 6, some of those who participated in the riot stated publicly that they had done so because the president was telling them to. One participant was quoted in the Washington Post: “I thought I was following my president. […] He asked us to fly there. He asked us to be there. So I was doing what he asked us to do.” One video captured a rioter on the steps of the Capitol screaming at a police officer, “We were invited here! We were invited by the president of the United States!”

The Board observed that in addition to the two posts on January 6, “Facebook previously found five violations of its Community Standards in organic content posted on the Donald J. Trump Facebook page, three within the last year.”

Trump Argues for Reinstatement on Facebook

The Board explained that in keeping with its process, a statement was submitted on Trump’s behalf through the American Center for Law and Justice (ACLJ). The statement asserted that the posts in question “called for those present at and around the Capitol that day to be peaceful and law abiding, and to respect the police” and that it is “inconceivable that either of those two posts can be viewed as a threat to public safety, or an incitement to violence.” It also stated that “[i]t is stunningly clear that in his speech there was no call to insurrection, no incitement to violence, and no threat to public safety in any manner,” and described a “total absence of any serious linkage between the Trump speech and the Capitol building incursion.”

The ACLJ argued that because “nothing Mr. Trump said to the rally attendees could reasonably be interpreted as a threat to public safety,” Facebook’s basis for imposing restrictions was not safety-related. It also stated that “any content suspected of impacting safety must have a direct and obvious link to actual risk of violence.” The statement also contended that the terms “fight” or “fighting” used at the rally speech “were linked to a call for lawful political and civic engagement.” Thus, the ACLJ asserted that “those words were neither intended, nor would be believed by any reasonable observer or listener to be a call for violent insurrection or lawlessness,” that “all genuine Trump political supporters were law-abiding,” and that the attack was “most probably ignited by outside forces.” It cited the Oath Keepers as one such organization.

Specifically, the ACLJ argued the Dangerous Organizations and Individuals Community Standard was not violated by Trump’s rally speech because “none of those categories fit this case” and “Mr. Trump’s political speech on January 6th never ‘proclaim[ed] a violent mission,’ a risk that lies at the very center of the Facebook policy.” It also stated the Violence and Incitement Community Standard did not support the suspension of Trump’s account because the two posts “merely called for peace and safety” and “none of the words in Mr. Trump’s speech, when considered in their true context, could reasonably be construed as incitement to violence or lawlessness.” The ACLJ also cited Facebook’s referral to the Board mentioning the “peaceful transfer of power” and argued that this “new ad hoc rule on insuring [sic] peaceful governmental transitions is not just overly vague, it was non-existent until after the events that Facebook used to justify it.”

In conclusion, the ACLJ argued that the Board should “defer to American law in this appeal” and discussed international law standards for restricting the right to freedom of expression. The statement cited protection of hyperbole and false statements of fact and Facebook’s importance to public discourse. It stated that “employing content decisions based on what seems ‘reasonable,’ or how a ‘reasonable person’ would react to that content is not enough,” and demanded that Facebook “consider a much higher bar.” It also discussed constitutional standards for incitement to violence. It stated that preserving public safety is a legitimate aim, but that Trump’s speech did not present any safety concerns, and it called the penalty disproportionate.

Facebook Explains Why It Removed Trump

In its defense, Facebook explained to the Board that it removed the two January 6 posts because they violated its “policy prohibiting praise, support, and representation of designated Violent Events” and its policy prohibiting praise of individuals “who have engaged in acts of organized violence.”

Facebook noted that its assessment reflected both the letter of its policy and the surrounding context in which the statements were made, including the ongoing violence at the Capitol. It said that while Trump did ask people to “go home in peace,” he also repeated allegations that the election was a fraud and suggested a common purpose in saying, “I know how you feel.” Given the instability at the time of his comments, Facebook decided that “We love you. You’re very special” was meant to praise the people who were breaking the law. Facebook also believed that the second post praised the event, as Trump referred to those who stormed the Capitol as “great patriots,” while urging people to “[r]emember this day forever.”

Facebook noted that it often limits the functionality of Facebook pages and Facebook and Instagram accounts that repeatedly violate its policies and/or severely violate its policies. When it concludes that there is an “urgent and serious safety risk,” Facebook “goes beyond its standard enforcement protocols to take stronger actions against users and pages engaged in violating behavior.” It stated that it

evaluates all available enforcement tools, including permanent bans, before deciding which is the most appropriate to employ in the unique circumstance. In cases where Facebook must make an emergency decision that has widespread interest, it endeavors to share its decision and its reasoning with the public, often through a post in its Newsroom.


Facebook stated that, following its usual enforcement procedure, it first imposed a 24-hour block on Trump’s ability to post from the Facebook page and Instagram account. After again assessing the situation and emerging details of the Capitol violence, Facebook decided that the one-day ban wasn’t enough to address “the risk that Trump would use his Facebook and Instagram presence to contribute to a risk of further violence.”

Facebook explained that it maintained the indefinite suspension after Biden’s inauguration based in part on the National Terrorism Advisory System Bulletin issued on January 27 by the Department of Homeland Security (DHS) that described a “heightened threat environment across the United States, which DHS believes will persist in the weeks following the successful Presidential Inauguration” and warned that “drivers to violence will remain through early 2021 and some [Domestic Violent Extremists] may be emboldened by the January 6, 2021, breach of the U.S. Capitol Building in Washington, D.C. to target elected officials and government facilities.” Facebook noted that even once the risk of violence has receded, it may be appropriate to permanently block Trump’s ability to post based on the severity of his violations on January 6, his continued insistence that the 2020 election was stolen, his sharing of other misinformation, and the fact that he is no longer president.

Facebook stated that its decision was “informed by Article 19 of the ICCPR, and U.N. General Comment No. 34 on freedom of expression, which permits necessary and proportionate restrictions of freedom of expression in situations of public emergency that threatens the life of the nation.” Facebook noted that it also considered the six contextual factors from the Rabat Plan of Action on the prohibition of advocacy of national, racial, or religious hatred. The Rabat Plan

was developed by experts with the support of the United Nations to guide states in addressing when advocacy of racial, religious or national hatred that incites discrimination, hostility or violence is so serious that resort to state-imposed criminal sanctions is appropriate, while protecting freedom of expression, in line with states’ obligations under Article 19 and Article 20, para. 2 of the ICCPR.


Finally, Facebook argued that the events of January 6 represented an “unprecedented threat to the democratic processes and constitutional system of the United States.” While Facebook maintained that it attempts to “act proportionately and accountably in curtailing public speech, given the unprecedented and volatile circumstances, Facebook believes it should retain operational flexibility to take further action including a permanent ban.”

Facebook Refuses to Respond to Certain Questions Posed by the Board

The Board asked Facebook 46 questions. It is remarkable and revealing that Facebook declined to answer seven entirely, and two partially, including questions about “how Facebook’s news feed and other features impacted the visibility of Trump’s content; whether Facebook has researched, or plans to research, those design decisions in relation to the events of January 6, 2021; and information about impermissible content from followers of Trump’s accounts.”

Facebook also refused to answer questions related to the suspension of other political figures and removal of other content; whether other political officeholders or their staff had contacted Facebook about the suspension of Trump’s accounts; and whether account suspension or deletion affects the ability of advertisers to target the accounts of followers. Facebook stated that this information

was not reasonably required for decision-making in accordance with the intent of [the Oversight Board’s] Charter; was not technically feasible to provide; was covered by attorney/client privilege; and/or could not or should not be provided because of legal, privacy, safety, or data protection concerns.


The fact that the rules under which the Board operates do not give it any real power to force Facebook to answer relevant questions reveals a gaping hole in the entire process, undermining the claim that the Board is independent.

The Oversight Board Issues Its Rulings

The Board received 9,666 public comments from around the world related to this case, 97 percent from the United States and Canada. All of the comments are available on the Oversight Board’s website.

The Board concluded that Facebook’s decision that the two posts by Trump on January 6 violated its Community Standards was correct.

At the time the posts were made, the violence at the Capitol was underway. Both posts praised or supported people who were engaged in violence. The words “We love you. You’re very special” in the first post and “great patriots” and “remember this day forever” in the second post amounted to praise or support of the individuals involved in the violence and the events at the Capitol that day.


While the Board observed that other Community Standards may have been violated, including the Standard on Violence and Incitement, because Facebook’s decision wasn’t based on this standard and an additional finding of violation wouldn’t affect the outcome, “a majority of the Board refrained from reaching any judgment on this alternative ground.”

Fortunately, it is on record that a minority of the Board would consider the additional ground and find that the Violence and Incitement Standard was violated. The minority would

hold that, read in context, the posts stating the election was being “stolen from us” and was “so unceremoniously viciously stripped,” coupled with praise of the rioters, qualifies as “calls for actions,” “advocating for violence” and “misinformation and unverifiable rumors that contribute[d] to the risk of imminent violence or physical harm” prohibited by the Violence and Incitement Community Standard.


It is disappointing that these findings did not command the endorsement of the entire Board.

The Board emphasized that Trump “praised and supported people involved in a continuing riot where people died, lawmakers were put at serious risk of harm, and a key democratic process was disrupted.” Moreover, at the time when these restrictions were extended on January 7, the situation was fluid and serious safety concerns remained. “Given the circumstances, restricting Mr. Trump’s access to Facebook and Instagram past January 6 and 7 struck an appropriate balance in light of the continuing risk of violence and disruption.”

The Board noted that its analysis did not go against Facebook’s stated values of “Voice” and “Safety” and that “the protection of public order justified limiting freedom of expression.” A minority believed it is important to emphasize that “Dignity” was also relevant, as Facebook relates “Dignity” to equality and that people should not “harass or degrade” other people. The minority thought that some of Trump’s previous posts “contributed to racial tension and exclusion and that this context was key to understanding the impact of Mr. Trump’s content.” Having looked at this case on other grounds, the majority did not have any comment on these posts. Here again, it is regrettable that the entire Board did not see fit to adopt this minority position.

The Board found that

Facebook has become a virtually indispensable medium for political discourse, and especially so in election periods. It has a responsibility both to allow political expression and to avoid serious risks to other human rights. Facebook, like other digital platforms and media companies, has been heavily criticized for distributing misinformation and amplifying controversial and inflammatory material. Facebook’s human rights responsibilities must be understood in the light of those sometimes competing considerations.


In a particularly important passage, the Board stated that it

does not apply the First Amendment of the U.S. Constitution, which does not govern the conduct of private companies. However, the Board notes that in many relevant respects the principles of freedom of expression reflected in the First Amendment are similar or analogous to the principles of freedom of expression in ICCPR Article 19.


(Article 19 of the ICCPR states that “everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.”)

The Board acknowledged that “[p]olitical speech receives high protection under human rights law because of its importance to democratic debate,” and Facebook’s decision to suspend Trump’s Facebook page and Instagram account

has freedom of expression implications not only for Mr. Trump but also for the rights of people to hear from political leaders, whether they support them or not. Although political figures do not have a greater right to freedom of expression than other people, restricting their speech can harm the rights of other people to be informed and participate in political affairs.


However, the Board pointed out that “international human rights standards expect state actors to condemn violence (Rabat Plan of Action), and to provide accurate information to the public on matters of public interest, while also correcting misinformation.”

According to the decision, the

clarity of the Standard against praise and support of Dangerous Individuals and Organizations leaves much to be desired, as the Board noted in a prior decision. […] The UN Special Rapporteur on Freedom of Expression has also raised concerns about the vagueness of the Dangerous Individuals and Organizations Standard.


However, “any vagueness under the terms of the Standard does not render its application to the circumstances of this case doubtful.” The Board decided that

[t]he January 6 riot at the Capitol fell squarely within the types of harmful events set out in Facebook’s policy, and Mr. Trump’s posts praised and supported those involved at the very time the violence was going on, and while Members of Congress were calling on him for help. In relation to these facts, Facebook’s policies gave adequate notice to the user and guidance to those enforcing the rule.


Nevertheless, the Board found Facebook’s imposition of an “indefinite” restriction to be “vague and uncertain.” According to the Board,

“Indefinite” restrictions are not described in the Community Standards and it is unclear what standards would trigger this penalty or what standards will be employed to maintain or remove it. Facebook provided no information of any prior imposition of indefinite suspensions in any other cases. The Board recognizes the necessity of some discretion on Facebook’s part to suspend accounts in urgent situations like that of January [6], but users cannot be left in a state of uncertainty for an indefinite time.


Furthermore, the Board suggested that limits on discretionary powers are needed “to distinguish the legitimate use of discretion from possible scenarios around the world in which Facebook may unduly silence speech not linked to harm or delay action critical to protecting people.”

There is a remarkable section of the decision that is sure to perpetuate criticism of Facebook. The Board pointed out that according to Facebook’s statement, Trump’s “repeated use of Facebook and other platforms to undermine confidence in the integrity of the election (necessitating repeated application by Facebook of authoritative labels correcting the misinformation) represented an extraordinary abuse of the platform.” But when the Board sought clarification from Facebook about “the extent to which the platform’s design decisions, including algorithms, policies, procedures and technical features, amplified Mr. Trump’s posts after the election and whether Facebook had conducted any internal analysis of whether such design decisions may have contributed to the events of January 6,” regrettably Facebook declined to answer these questions. The Board admonished Facebook because its refusal to respond “makes it difficult for the Board to assess whether less severe measures, taken earlier, may have been sufficient to protect the rights of others.” Yet again, Facebook’s dismissive attitude toward the Board’s legitimate inquiries reinforces the criticism that Facebook doesn’t truly embrace transparency and refuses to examine how the very structure of its platforms contributes to events like January 6.

To decide whether Facebook’s decision was necessary and proportionate to protect the rights of others, the Board assessed Trump’s posts and off-platform comments since the November election:

In maintaining an unfounded narrative of electoral fraud and persistent calls to action, Mr. Trump created an environment where a serious risk of violence was possible. On January 6, Mr. Trump’s words of support to those involved in the riot legitimized their violent actions. Although the messages included a seemingly perfunctory call for people to act peacefully, this was insufficient to defuse the tensions and remove the risk of harm that his supporting statements contributed to. It was appropriate for Facebook to interpret Mr. Trump’s posts on January 6 in the context of escalating tensions in the United States and Mr. Trump’s statements in other media and at public events.


Importantly, a minority of the Board found that while a suspension for some time or permanent disablement could be justified on the basis of the January 6 events alone, the proportionality analysis should also have included Trump’s “use of Facebook’s platforms prior to the November 2020 presidential election.” The minority specifically cited the May 28, 2020, post “when the looting starts, the shooting starts,” made in the context of protests for racial justice after George Floyd’s murder, as well as multiple posts referencing the “China Virus.” For the minority, this broader analysis would have been

crucial to inform Facebook’s assessment of a proportionate penalty on January 7, which should [have served] as both a deterrent to other political leaders and, where appropriate, an opportunity of rehabilitation. Further, if Facebook opted to impose a time-limited suspension, the risk-analysis required prior to reinstatement should also take into account these factors. Having dealt with this case on other grounds, the majority [did] not comment on these matters.


Once again, the fact that the entire Board did not adopt these minority findings casts doubt on how the rest of the Board views its responsibilities.

In conclusion, the Board found that Facebook’s decision on January 6 to restrict Trump’s accounts was justified. The Board reasoned that the posts in question violated the Facebook and Instagram rules that prohibit support or praise of violating events, including the riot that was then underway at the US Capitol. However, the Board found that it was not appropriate for Facebook to impose an “indefinite” suspension.

Facebook did not follow a clear published procedure in this case. Facebook’s normal account-level penalties for violations of its rules are to impose either a time-limited suspension or to permanently disable the user’s account. The Board finds that it is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.


According to the Board, it is “Facebook’s role to create and communicate necessary and proportionate penalties that it applies in response to severe violations of its content policies.” In applying an indeterminate and standardless penalty and then referring this case to the Board to resolve, “Facebook seeks to avoid its responsibilities. The Board declines Facebook’s request and insists that Facebook apply and justify a defined penalty.”

Consequently,

Facebook must, within six months of this decision, reexamine the arbitrary penalty it imposed on January 7 and decide the appropriate penalty. This penalty must be based on the gravity of the violation and the prospect of future harm. It must also be consistent with Facebook’s rules for severe violations which must in turn be clear, necessary, and proportionate.


A minority believed that it is important to outline some minimum criteria that reflect the Board’s assessment of Facebook’s human rights responsibilities. Unfortunately, the majority preferred instead to provide this guidance only as policy recommendations, which are non-binding. The minority explicitly noted that Facebook’s responsibilities to respect human rights include facilitating the remediation of adverse human rights impacts to which it has contributed. To fulfill its responsibility to guarantee that the adverse impacts are not repeated, the minority believed that “Facebook must assess whether reinstating Mr. Trump’s accounts would pose a serious risk of inciting imminent discrimination, violence or other lawless action.”

According to the minority, Facebook should, for example, “be satisfied that Mr. Trump has ceased making unfounded claims about election fraud in the manner that justified suspension on January 6.” Facebook’s enforcement procedures aim to be rehabilitative. The minority emphasized that Facebook’s rules should ensure that “users who seek reinstatement after suspension recognize their wrongdoing and commit to observing the rules in the future.” In this case, the minority suggested that, “before Trump’s account can be restored, Facebook must also aim to ensure the withdrawal of praise or support for those involved in the riots.” It is inexplicable why the entire Board was unable to fully embrace these sensible minority views.

The Board Addresses the Issue of Political Leaders

In its referral to the Board, Facebook specifically requested “observations or recommendations from the board about suspensions when the user is a political leader.” The Board responded that it

believes that it is not always useful to draw a firm distinction between political leaders and other influential users. It is important to recognize that other users with large audiences can also contribute to serious risks of harm. The same rules should apply to all users of the platform; but context matters when assessing issues of causality and the probability and imminence of harm. What is important is the degree of influence that a user has over other users.


While all users should be held to the same content policies, the Board pointed out that

[h]eads of state and other high officials of government can have a greater power to cause harm than other people. Facebook should recognize that posts by heads of state and other high officials of government can carry a heightened risk of encouraging, legitimizing, or inciting violence — either because their high position of trust imbues their words with greater force and credibility or because their followers may infer they can act with impunity.


At the same time, while the Board considered it important to protect the rights of people to hear political speech,

if the head of state or high government official has repeatedly posted messages that pose a risk of harm under international human rights norms, Facebook should suspend the account for a determinate period sufficient to protect against imminent harm. Periods of suspension should be long enough to deter misconduct and may, in appropriate cases, include account or page deletion.


When Facebook implements special procedures that apply to influential users, these should be well documented. The Board found it “unclear whether Facebook applied different standards in this case, and the Board heard many concerns about the potential application of the newsworthiness allowance. It is important that Facebook address this lack of transparency and the confusion it has caused.”

In a rather telling observation, after noting that “Facebook’s platform has been abused by influential users in a way that results in serious adverse human rights impacts,” the Board recommended that Facebook “assess what influence it had and assess what changes it could enact to identify, prevent, mitigate, and account for adverse impacts in future.” In particular, “Facebook should undertake a comprehensive review of its potential contribution to the narrative of electoral fraud and the exacerbated tensions that culminated in the violence in the United States on January 6, 2021.” Unfortunately, these strong statements were made in the form of recommendations that are not binding on Facebook.

The Board highlighted further deficiencies in Facebook’s policies.

In particular, the Board finds that Facebook’s penalty system is not sufficiently clear to users and does not provide adequate guidance to regulate Facebook’s exercise of discretion. Facebook should explain in its Community Standards and Guidelines its strikes and penalties process for restricting profiles, pages, groups and accounts on Facebook and Instagram in a clear, comprehensive, and accessible manner.


Finally, the Board urged Facebook to

develop and publish a policy that governs its response to crises or novel situations where its regular processes would not prevent or avoid imminent harm. While these situations cannot always be anticipated, Facebook’s guidance should set appropriate parameters for such actions, including a requirement to review its decision within a fixed time.


Reactions to the Decision

The Oversight Board’s May 5 decision met with a barrage of reactions, mostly negative. In a statement, which he no doubt would have loved to post on Facebook, Trump said: “Free Speech has been taken away from the President of the United States because the Radical Left Lunatics are afraid of the truth.” The “Real Facebook Oversight Board,” a group of leading Facebook critics, issued a statement headlined “Facebook Oversight Board Proves it’s Pointless,” in which it accused the Board of having “no legitimacy to make real decisions” and claimed that Facebook’s “attempt to divert attention from its fundamental failure to take responsibility for what’s on its own platform has itself failed.”

Senator Elizabeth Warren (D-Massachusetts) labeled Facebook “a disinformation-for-profit machine that won’t accept responsibility for its role in the safety of our democracy and people.” Former House Speaker Newt Gingrich called Trump the “big winner from Facebook’s insane decision to ban an American who received 75 million votes,” and predicted that Trump will be seen as “a martyr attacked by social media oligarchies. 75% want these companies regulated, 68% want free speech guaranteed.”

According to the Washington Post’s Editorial Board, the Oversight Board “made the right decision” and now “it is Facebook’s turn to make the right decision too.” The Board’s “job is not to write Facebook’s rules but rather to interpret them.” The Post anticipated that if Facebook adopted the Board’s recommendation to establish “a clear and consistent system for disciplining offenders that corresponds with the hazard they pose — including the final sanction of permanent exile,” given Trump’s perpetuation of the “Big Lie” — that system “would properly bar Mr. Trump’s return.” “Deliberate attempts to subvert democracy” should “carry the steepest costs.”

Facebook Bans Trump for Two Years

The most important reaction to the Oversight Board’s decision came from Facebook itself. Without waiting six months, on June 4 Facebook imposed a specific two-year ban on Trump. In addition, it implemented most of the Board’s recommendations. While newsworthiness can still be considered in deciding whether to remove a post that violates Community Standards, Facebook “will not treat content posted by politicians any differently from content posted by anyone else.” However, it will “prioritize safety over expression when taking action on a threat of harm from influential users,” and it will suspend the accounts of “high government officials” whose posts repeatedly pose a risk of harm. On the other hand, for the sake of free speech, Facebook will “resist pressure from governments to silence their political opposition.”

Responding to serious concerns expressed by the Board, Facebook agreed to “review its potential role in the election fraud narrative that sparked violence in the United States on January 6, 2021 and report on its findings.” To better understand the effect Facebook and Instagram have on elections, Facebook announced it would form a partnership with nearly 20 outside academics to study this issue to “combat misinformation and hate” and report on its findings.

The unprecedented decisions by Facebook to ban a political figure of immense proportions and by the Oversight Board to reject Trump’s appeal from that decision are remarkable. These are decisions of private entities made in uncharted waters to navigate fundamental issues involving freedom of expression and political power. Facebook has demonstrated how a powerful private social media platform can rationally justify and transparently explain its decision to hold a powerful political leader accountable for his vocal and incendiary support for a lawless and violent attack on the nation’s capital by denying him the privilege of using that platform to wreak further havoc on our democracy.

The gravity of these decisions may best be appreciated by imagining the opposite outcome. Had Facebook decided not to suspend Trump on January 7, 2021, or had the Oversight Board decided to overturn that decision at Trump’s request, or had Facebook not decided to formally ban Trump for a full two years until January 7, 2023 — after the 2022 midterm elections — our democracy would be in far worse trouble than it is. Imagine if the 2.8 billion monthly active Facebook users had been exposed to Trump’s inflammatory lies multiple times a day, every single day, since January 7, 2021, and throughout the midterm elections and beyond.

This was possible because Facebook is a private service not subject to government control. Indeed, the decisions it has made — using a variety of subjective standards based on the content of Trump’s speech — would run afoul of First Amendment principles had they been made by a government entity. Critics on the left and the right should keep this in mind when they call for government regulation of social media platforms such as Facebook.

Neither Facebook nor its Oversight Board is without its problems. But in this moment in history, they have found a principled and transparent way to deal with a serious threat to democracy, and democracy is better for it.

But when it comes to free speech, we need to be ever cautious. What if Facebook — or a future social media platform with 2.8 billion or more monthly active users — were owned by a staunch ally of Trump, or by Trump himself, or by the next Trump?

Eventually, social media platforms, like newspapers and other sources of information, will come and go. Ultimately, there is no substitute for a highly informed and deeply skeptical public. Today, that is missing and that is a serious threat to democracy.

¤


Stephen Rohde is a retired constitutional lawyer, lecturer, writer, and political activist.
