Techno-Fundamentalism Can’t Save You, Mark Zuckerberg

Article from The New Yorker
https://www.newyorker.com/tech/elements/techno-fundamentalism-cant-save-you-mark-zuckerberg

By Siva Vaidhyanathan, April 21, 2018

For years, the Facebook C.E.O. has clung to the belief that new technology can solve the problems caused by old technology. But that philosophy is what got us into our current mess. Illustration by Erik Carter.

It was like a verbal tic. Last week, in two days of testimony before Congress, Mark Zuckerberg, the C.E.O. of Facebook, invoked a magical-sounding phrase whenever he was cornered about a difficult issue. The issue was content moderation, and the phrase was “artificial intelligence.” In 2004, Zuckerberg explained, when Facebook got its start, it was just him and a friend in his dorm room at Harvard. “We didn’t have A.I. technology that could look at the content that people were sharing,” he told the Senate Commerce and Judiciary Committees. “So we basically had to enforce our content policies reactively.” In the fourteen years since, the platform has grown to 2.2 billion monthly active users; they speak more than a hundred languages, each with its own subtle variations on hate speech, sexual content, harassment, threats of violence and suicide, and terrorist recruitment. Facebook’s staggering size and influence, Zuckerberg admitted, along with a slew of high-profile scandals, had made clear that “we need to take a more proactive role and a broader view of our responsibility.” He pledged to hire many thousands of human content-reviewers around the world, but he seemed to see A.I. as the ultimate panacea. In all, he uttered the phrase more than thirty times.

Tarleton Gillespie, in his forthcoming book “Custodians of the Internet,” explains what’s at the root of Zuckerberg’s problem:

Moderation is hard because it is resource intensive and relentless; because it requires difficult and often untenable distinctions; because it is wholly unclear what the standards should be; and because one failure can incur enough public outrage to overshadow a million quiet successes.

Should the values of a C.E.O. outweigh those of an engineer or an end user? If, as Zuckerberg stated before Congress, some sort of “community standards” apply, what constitutes a “community”? For Facebook in Iraq, should it be Kurdish standards or Shia standards? And what, exactly, are Sunni standards? In Illinois, should it be rural standards or urban standards? Imagine trying to answer these questions across a platform as vast as Facebook. Imagine trying to hire, train, and retain people to make those value judgments in places such as Myanmar, where the Buddhist majority is waging a brutal campaign of expulsion and oppression against the Rohingya, a Muslim minority group. Imagine finding moderators for all eleven of South Africa’s official languages.

Hiring more humans, if there are even enough of them, won’t solve these problems—nor is it likely to be good for the humans themselves. Sarah Roberts, an information scholar at the University of California, Los Angeles, has interviewed content moderators throughout Silicon Valley and beyond, and she reports that many are traumatized by the experience and work for low wages without benefits. But Zuckerberg’s A.I. solution, which he sees becoming a reality “over a five-to-ten-year period,” is equally untenable. It’s like Mark Twain’s Connecticut Yankee, Hank Morgan, fooling the people of Camelot with his technocratic “magic.” But, more crucially, it’s also an expression of techno-fundamentalism, the unshakable belief that one can and must invent the next technology to fix the problem caused by the last technology. Techno-fundamentalism is what has landed us in this trouble. And it’s the wrong way to get us out.

The main selling point of automated content moderation is that it purports to sidestep the two hurdles that thwart humans: scale and subjectivity. For a machine that learns from historical experience—“This is an example of what we want to flag for review; this is not”—scale is an advantage. The more data it consumes, the more accurate its judgments supposedly become. Even mistakes, when identified as mistakes, can refine the process. Computers also like rules, which is why artificial intelligence has seen its greatest successes in highly organized settings, such as chess matches and Go tournaments. If you combine rules and lots of historical data, a computer can even win at “Jeopardy!”—as one did in 2011. At first, the rules must be developed by human programmers, but there is some hope that the machines will refine, revise, and even rewrite the rules over time, accounting for diversity, localism, and changes in values.
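To make the learning loop concrete, here is a minimal sketch of such a learn-from-labeled-examples filter in Python, using the scikit-learn library. Everything in it is invented for illustration—the handful of labeled posts, the choice of a simple linear classifier—and it stands in for no actual system of Facebook’s, which would train far more complex models on millions of human-labeled posts in many languages.

```python
# A toy version of the supervised approach described above:
# learn from historical examples of what to flag, then score new posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented historical examples: 1 = flag for human review, 0 = leave alone.
posts = [
    "I will hurt you if you show up tomorrow",   # threat
    "join our fight, brothers, take up arms",    # recruitment
    "what a lovely day at the park",             # benign
    "congrats on the new job!",                  # benign
]
labels = [1, 1, 0, 0]

# Turn text into word-frequency features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post: the model outputs a probability that it should be flagged.
prob = model.predict_proba(["show up tomorrow and fight"])[0][1]
print(f"flag probability: {prob:.2f}")
```

Even this toy hints at the weakness described below: the classifier keys on surface words such as “fight,” so a sarcastic or newly coined phrase it has never seen sails through, while an innocuous post that happens to reuse a flagged word gets caught.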

This is where the promise of artificial intelligence breaks down. At its heart is an assumption that historical patterns can reliably predict future norms. But the past—even the very recent past—is full of words and ideas that many of us now find repugnant. No system is deft enough to respond to the rapidly changing varieties of cultural expression in a single language, let alone a hundred. Slang is fleeting yet powerful; irony is hard enough for some people to read. If we rely on A.I. to write our rules of conduct, we risk favoring those rules over our own creativity. What’s more, we hand the policing of our discourse over to the people who set the system in motion in the first place, with all their biases and blind spots embedded in the code. Questions about what sorts of expressions are harmful to ourselves or others are difficult. We should not pretend that they will get easier.

What, then, is the purpose of Zuckerberg’s A.I. incantation? To take the cynical view, it offers a convenient way to defer public scrutiny: Facebook is a work in progress, and waiting for the right tools to be developed will take patience. (Once those tools are in place, of course, the company can blame any flubs on flawed algorithms or bad data.) But Zuckerberg isn’t a cynic; he’s a techno-fundamentalist, and that’s an equally unhealthy habit of mind. It creates the impression that technology exists outside, beyond, even above messy human decisions and relations, when the truth is that no such gap exists. Society is technological. Technology is social. Tools, as Marshall McLuhan told us more than fifty years ago, are extensions of ourselves. They amplify and distort our strengths and our flaws. That’s why we must design them with care from the start.

The problem with Facebook is Facebook. It has moved too fast. It has broken too many things. It has become too big to govern, whether by a band of humans or a suite of computers. To chart the way forward, Zuckerberg has few effective tools at his disposal. He should be honest about their limitations—if not for his company’s sake then for ours.