Our Censors, Ourselves: Commercial Content Moderation

By David C. Brock | July 25, 2019


Behind the Screen by Sarah T. Roberts

I. Actual Agonies

THERE MAY BE some broad usefulness in giving name to my pet peeve: the wishful-worry. Crafted by writers and talkers of all stripes, wishful-worry scenarios are set in a future at some vague distance from our present and presented as matters for serious concern (“We have to deal with this!”). In truth, they function as pleasant distractions from what one might call our actual agonies. Wishful-worries are problems that it would be nice to have, in contrast to the actual agonies of the present. In my experience, one of the best places to find wishful-worries is in discussions of “tech” aimed at the broadest possible audience. Consider some examples of the genre: When robots and AI do all of the work of people, what will be the right universal basic income? As biotechnology affords dramatically longer human lifespans, how will we fight boredom? With neurotechnology-augmentation rendering some of us essentially superheroes, what ethical dilemmas will we face? How can we protect privacy in an age of tech-enabled telepathy?

That these wishful-worries are a distracting psychological salve — functionally equivalent to “Look! A Unicorn!” — becomes all too clear when they are contrasted with our actual agonies: the rapid collapse of multiple, interconnected ecological systems on which human civilization depends; the unprecedented growth of wealth inequality through the engine of global capitalism, cast as either “with Chinese characteristics” or “with American exceptionalism”; the rise of fascistic movements weaving together nativism, racism, misogyny, homophobia, and climate denialism in service of klepto-plutarchy in Europe, Asia, and the United States. Perhaps it’s no wonder that many of us would rather think about the ethics of sex robots.

Many of these actual agonies are intimately connected to the domain where much of the world’s population spends its waking life. We are connected, more than we are not, to a new layer of electronic reality that we have built atop the old: an interactive multimedia environment that spans a variety of digital devices, networks, and broadcasts. Social media platforms have assembled a remarkable dragnet for capturing nearly half of the $600 billion or so of the annual global advertising budget. This dragnet allows and encourages users to provide the multimedia content for these platforms; exquisitely surveils users, their interactions, and their content; and sells this surveillance to advertisers through a variety of services. With inventive and ever-changing novelties honed to increase time of use and content contribution, these social media platforms have swept in billions of global users along with the billions in advertising dollars. Ubiquitous smartphones have largely eliminated any barrier to use.

In the United States in the spring of 2019, the actual agonies of social media content made the national headlines. On April 25, Vice News quoted a “technical employee” at Twitter who explained that the platform could not remove white supremacist content and users in the way it had for ISIS-related content and users. Why? The “aggressive” algorithmic practices to eliminate ISIS content also eliminated other perfectly benign content, what might be called false-positives. Twitter could tolerate the damage created by these particular false-positives. Eliminating white supremacist content and users presented a different challenge — the algorithmic practices would identify Republican politicians, partisans, and their content as white supremacist. While some of these identifications would be accurate, others would be false-positives. Twitter itself sagely acknowledged that identifying white supremacist content and users requires “using technology and human review in tandem” for issues that are so very “contextual.” Even with humans in the loop, could Twitter tolerate the resulting furor? Apparently not.

Three weeks later, on May 15, as if to prove the point, the White House used Twitter to announce a new online survey it had created using the freemium service Typeform.com. The survey solicited individuals to share content, specifically cases in which they perceived that social media companies had acted out of “political bias” in restricting their content or accounts. In proper social media surveillance form, the White House also asked respondents for their names, zip codes, telephone numbers, and email addresses. This constituted valuable information for the targeted political advertising inevitably to follow. Two days later, on May 17, The New York Times featured a sympathetic profile of Facebook’s chief technology officer, Mike Schroepfer, describing how the talented computer scientist was brought to tears by the “toxic” content his firm is trying to remove, and by his sense of crushing responsibility.

Less than two weeks passed before another Times piece revealed, on May 28, that Google employed significantly more temporary workers, contractors, and consultants in a “shadow work force” than it did full-time employees: 121,000 as compared to 102,000. The Times reported that this temporary workforce extended far beyond food service, child care, janitorial work, and bicycle-fleet maintenance: “Google’s contractors handle a range of jobs, from content moderation to software testing. Their hourly pay varies, from $16 per hour for an entry-level content reviewer to $125 per hour for a top-shelf software developer.”

In the past few years, content moderators, who are paid to make judgments about whether or not user content can remain on social media platforms, have become increasingly visible, due in large part to the work of a professor of Information Studies at UCLA, Sarah T. Roberts. In 2010, she began researching and then writing and speaking about “commercial content moderators,” as she calls them. Moderators have themselves become more vocal as well. For example, on May 8, the Washington Post reported on efforts by content moderators employed by Accenture on behalf of Facebook in Austin, Texas, to push for increased benefits and pay, but also for specific recognition of the psychological harms that this work intrinsically entails. (On June 19, after this review had been written, several moderators broke their NDAs with a Florida Facebook contractor, describing shocking conditions to The Verge.)

II. Behind the Screen

In her crisp and accessible new book, Behind the Screen: Content Moderation in the Shadows of Social Media, Roberts invites us to better understand social media platforms by understanding the often-hidden labor integral to their operation. As she puts it, “any discussion of the nature of the contemporary internet is fundamentally incomplete if it does not address the processes by which certain content created by users is allowed to remain visible and other content is removed, who makes these decisions, how they are made, and whom they benefit.” She reminds us that, first and foremost, these workers are employed by social media firms to serve the firm’s perceived interests. This is the “commercial” in commercial content moderators: “While a better user experience may be an outcome of the work they do, this is always, and ultimately, because that better user experience benefits the company providing the space for participation online.” The recent White House survey was thus misguided — deliberately or genuinely — on exactly this point: “SOCIAL MEDIA PLATFORMS should advance FREEDOM OF SPEECH,” the survey-writers claimed. Instagram is not a New England town meeting.

What the White House got right, however, is the brand message of these social media platforms as online services broadly open to users’ free expression through the sharing of multimedia content. This brand positioning is why, as Roberts details, the realities of content moderation policies, practices, and workers are veiled behind non-disclosure agreements, assertions of proprietary intellectual property, and short-term contractual work arrangements. To reveal that user content is policed in service of the commercial interests of the platform and its advertising customers would be to damage the brand. The secrecy around content moderation is itself a form of brand protection.

This is not to say that service to the brand cannot simultaneously be understood as “care for the user.” As Roberts details, a 2012 leak of documents from oDesk (now known as Upwork, a freelance gig economy platform) reveals the kinds of horrific materials its contract-worker content moderators remove for Facebook: “animal abuse and mutilation, child abuse (physical and sexual), gore, disturbing racist imagery” as well as warzone violence and hate speech in all its variety. I am certainly grateful for this censorship that protects the brands of Google and Gatorade alike; it protects me as well.

While huge volumes of such user content are automatically expunged through the use of software filters and machine learning algorithms, the sheer scale of user-content uploads means that many cases still slip past the gates, as it were. And in the categories for which the algorithms are of much more limited use — things like hate speech, bullying, and harassment — the volume of material with which content moderators must contend is dizzying.

Roberts provides an excellent taxonomy of the actual places where temporary content moderators labor on sorting through the outpourings of our worst selves. Moderators can be found “in-house,” on location with a firm’s full-time employees, including the small number of full-time staff involved with content moderation, usually in a management, policy, or technology development role. Despite being on the campuses of the social media giants, these temporary workers do not enjoy the benefits (like health care) or amenities provided to full-time employees. No free yoga classes. No holiday party. Smaller numbers are employed by “boutique” firms, mainly specializing in promoting and policing content relevant to a particular brand across social media platforms, or working on brand protection for a smaller platform, say a dating app.

The rest may be found in the remaining two locations of Roberts’s taxonomy: “Call center” and “Microlabor platform.” In the former, firms specializing in the outsourcing of all manner of business processes, from answering support phone calls to data entry, have expanded into content moderation. Indeed, as Roberts details, many workers in places like India and the Philippines move between jobs on “voice accounts” and “non-voice accounts,” between talking to users and monitoring their content. While talking jobs generally pay more, some of the workers interviewed by Roberts chose on-screen content moderation to flee the incessant abuse and anger of American callers. In “Microlabor platforms,” such as Amazon’s Mechanical Turk, firms shop out the most atomized tasks of content moderation — “Does this picture show nudity?” “Is this comment hate speech?” “Is this a woman?” — to workers on the platform who perform the acts of judgment for pennies apiece. Workers seldom know for whom they are working. Their ignorance is a strategic feature of the platform’s design.

The richest chapters of Roberts’s book are those centering on her interviews and visits with content moderators in the different settings of her taxonomy: the in-house contract moderators in Silicon Valley at a firm Roberts calls “MegaTech”; the boutique brand-guardians policing comments and content from living and dining rooms across the planet; and the communities of young Filipino moderators employed by business process outsourcers in technology parks near Manila. We learn of the rich diversity of peoples who perform content moderation, how they fit this work into their broader lives, the different experiences they bring to bear on the work, how they contend with the challenges of their labor, and how they cope with what they were paid to see, hear, and read.

Underneath the diversity, however, is a troubling similarity. One of her interviewees, a young college graduate working as an in-house temporary worker in content moderation in Silicon Valley, described the disciplined, timed, monitored, troubling work on screen:

Our tools are very straightforward. I mean we have what’s called a queue, and whenever anything is flagged it goes into this queue. […] And then we basically get it in batches, and then we just basically, there is no rhyme or reason to it, it’s just you might get a violence video followed by a harassment video followed by two more violence. And then we have a series of hot keys that correspond to our policies. So essentially the way it works out is you get a video, and it’s separated into stills, into little thumbs. We get forty little thumbnails, that way we don’t have to watch the video and we can instantly see “oh, well there’s some genitals” or “there’s a man’s head but he’s not connected to it” … something like that. And we can instantly apply policy. It’s extremely streamlined.


A call-center moderator in Manila, given the pseudonym John, described the labor discipline of the automated queues: “They have a standard like, first you have to handle the tickets for thirty-two seconds. But they, because of the increase in volume, because of the production, they have to cut it for less than fifteen to ten seconds.” Melissa, who worked as a boutique brand-defender, described both the emotional toll and the sense of care that came with content moderation: “I actually cried a lot. And I felt dirty all the time when I was doing that job. […] And you do get very invested in, like, I think I probably took on that protector role … I used to call myself a sin-eater.” In short, whatever the location, there exist common conditions: a streamlined, endless, timed queue; the stress of constantly being asked to make quick decisions about distressing material; a hope that the work is standing sentinel — a caring for others; and of course, the precarity.

III. Ghost Workers in the Machine

Roberts surfaces many more important issues, too. Indeed, her book is a worthy addition to a growing literature exposing the politics, economy, and humanity of digital platforms, algorithms, and technologies. By the humanity of digital technologies, I mean the ways in which the activities and decisions of people are constitutive of the technology’s creation, use, and change. As the saying now goes, “When someone says ‘algorithm,’ start searching for the people.”

The commercial content moderators studied by Roberts who conduct their work through microlabor platforms like Amazon Mechanical Turk are part of a larger story of what Microsoft researchers Mary L. Gray and Siddharth Suri call “Ghost Work” in their new book of that title. In it, they argue that the tremendous growth of Amazon Mechanical Turk results from integrating human labor into computer programs themselves while erasing or evading the laborers’ humanity. Amazon Mechanical Turk affords an “API,” an application programming interface: a bit of code through which another computer program can automatically request, secure, and pay for the labor of human workers on the microlabor platform. In other words, the MTurk API flattens human employment into just another computational resource that can be called upon by an app. No muss, no fuss. A human labor API is the apotheosis of a longer-term trend across the past 50 years in the evolution of capitalism, corporations, and work — particularly in the United States — detailed by the historian Louis Hyman in his 2018 book Temp: How American Work, American Business, and the American Dream Became Temporary. As he convincingly argues, since the 1970s the values of security have been replaced with those of risk, and the financialization of the corporation has made work temporary.
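To make “labor by API” concrete, here is a minimal, hypothetical sketch of what the exchange looks like from the requesting program’s side. The client, function names, and parameters below are illustrative inventions rather than Amazon’s actual Mechanical Turk interface; the point is only that a human judgment can be requested, paid for, and consumed exactly like any other function call.

```python
# A hypothetical microlabor client, illustrating "labor by API."
# None of these names belong to a real library; they stand in for the kind of
# interface a platform like Mechanical Turk exposes to requesting programs.
import time
from dataclasses import dataclass


@dataclass
class HumanTask:
    task_id: str
    question: str
    reward_usd: float


class MicrolaborClient:
    """Stand-in for a microlabor platform's API client (illustrative only)."""

    def post_task(self, question: str, reward_usd: float) -> HumanTask:
        # On a real platform this would publish a task visible to thousands of
        # anonymous workers; here we only model the shape of the exchange.
        return HumanTask(task_id="task-0001", question=question, reward_usd=reward_usd)

    def get_answer(self, task: HumanTask) -> str:
        # A real client would poll until some worker, unknown to the requester,
        # submits a judgment for a few cents. We fake the result.
        time.sleep(0.1)
        return "yes"


def image_violates_policy(image_url: str) -> bool:
    """Ask an anonymous human whether an image violates policy, as if calling a function."""
    client = MicrolaborClient()
    task = client.post_task(
        question=f"Does the image at {image_url} show content that violates policy?",
        reward_usd=0.02,  # pennies apiece, as described above
    )
    return client.get_answer(task) == "yes"


if __name__ == "__main__":
    # The calling program never learns who answered, or what answering cost them.
    print(image_violates_policy("https://example.com/flagged-image.jpg"))
```

The design is the argument: once human judgment sits behind a function call, the person making the judgment becomes as interchangeable, and as invisible, as any other computational resource.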

IV. Our Censors, Ourselves

Most of what commercial content moderators sort through and judge has already been seen by others, and actively flagged as concerning. Users are thus at the very front line of content moderation, and their flagging of material constitutes a kind of collective self-censorship, ultimately out of some sense of care. Through such judgments and accusations, we become partners with the paid censors, participants in the machinery of social media that determines what remains seen and what does not. The collective “we” of social media users are also the victimizers of the content moderators, for not only do we feed the endless review queue that demands their labor but we also originate the hateful and damaging content to which we and they are subjected. We are the ones who express the worst of ourselves online. As Roberts puts it, “we are all implicated.”

As one of Roberts’s interviewees, Max, puts it, “The awful part is, it has to be done. It can’t not be moderated. But there’s no good way to do it.” This agony, unlike labor by API and commercial content moderation at scale, is nothing new. There are many emotionally damaging jobs whose intrinsic risks and potential harms we as a society try to address. I think here of soldiers, and of first responders like EMTs, firefighters, and police. There are of course far more roles to which we fail to do justice: personal care assistants, nurses, child-care workers. We can now add to this list commercial content moderators. Our situation is the actual agony of care. There is no way to do without the care we ask commercial content moderators to provide. Surely we should not shirk our care for them.

¤


David C. Brock is an historian of technology, and director of the Software History Center at the Computer History Museum. He is the co-author of Moore’s Law: The Life of Gordon Moore, Silicon Valley’s Quiet Revolutionary (Basic Books, 2015) and of Makers of the Microchip: A Documentary History of Fairchild Semiconductor (MIT Press, 2010).
