Techno-Fundamentalism Can’t Save You, Mark Zuckerberg

Article from the New Yorker

By      April 21, 2018

For years, the Facebook C.E.O. has clung to the belief that new technology can solve the problems caused by old technology. But that philosophy is what got us into our current mess. Illustration by Erik Carter.

It was like a verbal tic. Last week, in two days of testimony before Congress, Mark Zuckerberg, the C.E.O. of Facebook, invoked a magical-sounding phrase whenever he was cornered about a difficult issue. The issue was content moderation, and the phrase was “artificial intelligence.” In 2004, Zuckerberg explained, when Facebook got its start, it was just him and a friend in his dorm room at Harvard. “We didn’t have A.I. technology that could look at the content that people were sharing,” he told the Senate Commerce and Judiciary Committees. “So we basically had to enforce our content policies reactively.” In the fourteen years since, the platform has grown to 2.2 billion monthly active users; they speak more than a hundred languages, each with its own subtle variations on hate speech, sexual content, harassment, threats of violence and suicide, and terrorist recruitment. Facebook’s staggering size and influence, Zuckerberg admitted, along with a slew of high-profile scandals, had made clear that “we need to take a more proactive role and a broader view of our responsibility.” He pledged to hire many thousands of human content-reviewers around the world, but he seemed to see A.I. as the ultimate panacea. In all, he uttered the phrase more than thirty times.

Tarleton Gillespie, in his forthcoming book “Custodians of the Internet,” explains what’s at the root of Zuckerberg’s problem:

Moderation is hard because it is resource intensive and relentless; because it requires difficult and often untenable distinctions; because it is wholly unclear what the standards should be; and because one failure can incur enough public outrage to overshadow a million quiet successes.

Should the values of a C.E.O. outweigh those of an engineer or an end user? If, as Zuckerberg stated before Congress, some sort of “community standards” apply, what constitutes a “community”? For Facebook in Iraq, should it be Kurdish standards or Shia standards? And what, exactly, are Sunni standards? In Illinois, should it be rural standards or urban standards? Imagine trying to answer these questions across a platform as vast as Facebook. Imagine trying to hire, train, and retain value judges in places such as Myanmar, where the Buddhist majority is waging a brutal campaign of expulsion and oppression against the Rohingya, a Muslim minority group. Imagine finding moderators for all eleven of South Africa’s official languages.

Hiring more humans, if there are even enough of them, won’t solve these problems—nor is it likely to be good for the humans themselves. Sarah Roberts, an information scholar at the University of California, Los Angeles, has interviewed content moderators throughout Silicon Valley and beyond, and she reports that many are traumatized by the experience and work for low wages without benefits. But Zuckerberg’s A.I. solution, which he sees becoming a reality “over a five-to-ten year period,” is equally untenable. It’s like Mark Twain’s Connecticut Yankee, Hank Morgan, fooling the people of Camelot with his technocratic “magic.” But, more crucially, it’s also an expression of techno-fundamentalism, the unshakable belief that one can and must invent the next technology to fix the problem caused by the last technology. Techno-fundamentalism is what has landed us in this trouble. And it’s the wrong way to get us out.

The main selling point of automated content moderation is that it purports to sidestep the two hurdles that thwart humans: scale and subjectivity. For a machine that learns from historical experience—“This is an example of what we want to flag for review; this is not”—scale is an advantage. The more data it consumes, the more accurate its judgments supposedly become. Even mistakes, when identified as mistakes, can refine the process. Computers also like rules, which is why artificial intelligence has seen its greatest successes in highly organized settings, such as chess matches and Go tournaments. If you combine rules and lots of historical data, a computer can even win at “Jeopardy!”—as one did in 2011. At first, the rules must be developed by human programmers, but there is some hope that the machines will refine, revise, and even rewrite the rules over time, accounting for diversity, localism, and changes in values.

This is where the promise of artificial intelligence breaks down. At its heart is an assumption that historical patterns can reliably predict future norms. But the past—even the very recent past—is full of words and ideas that many of us now find repugnant. No system is deft enough to respond to the rapidly changing varieties of cultural expression in a single language, let alone a hundred. Slang is fleeting yet powerful; irony is hard enough for some people to read. If we rely on A.I. to write our rules of conduct, we risk favoring those rules over our own creativity. What’s more, we hand the policing of our discourse over to the people who set the system in motion in the first place, with all their biases and blind spots embedded in the code. Questions about what sorts of expressions are harmful to ourselves or others are difficult. We should not pretend that they will get easier.

What, then, is the purpose of Zuckerberg’s A.I. incantation? To take the cynical view, it offers a convenient way to defer public scrutiny: Facebook is a work in progress, and waiting for the right tools to be developed will take patience. (Once those tools are in place, of course, the company can blame any flubs on flawed algorithms or bad data.) But Zuckerberg isn’t a cynic; he’s a techno-fundamentalist, and that’s an equally unhealthy habit of mind. It creates the impression that technology exists outside, beyond, even above messy human decisions and relations, when the truth is that no such gap exists. Society is technological. Technology is social. Tools, as Marshall McLuhan told us more than fifty years ago, are extensions of ourselves. They amplify and distort our strengths and our flaws. That’s why we must design them with care from the start.

The problem with Facebook is Facebook. It has moved too fast. It has broken too many things. It has become too big to govern, whether by a band of humans or a suite of computers. To chart the way forward, Zuckerberg has few effective tools at his disposal. He should be honest about their limitations—if not for his company’s sake then for ours.


The World’s First Album Composed and Produced by an AI Has Been Unveiled

A music album called I AM AI, which was released on August 21st, is the first that’s entirely composed by an artificial intelligence.

A New Kind of Composer

“Break Free” is the first song released from a new album by Taryn Southern. The song, and indeed the entire album, features an artist known as Amper, but what looks like a typical collaboration between artists is actually much more than that.

Taryn is no stranger to the music and entertainment industry. She is a singer and digital storyteller who has amassed more than 500 million views on YouTube and over 450,000 subscribers. Amper, on the other hand, is making his debut…except he’s (it’s?) not a person.

Amper is an artificially intelligent music composer, producer, and performer. The AI was developed by a team of professional musicians and technology experts, and it’s the very first AI to compose and produce an entire music album. The album is called I AM AI, and the featured single is set to release on August 21, 2017.

Check out the song “Break Free” in the video below:

As film composer Drew Silverstein, one of Amper’s founders, explained to TechCrunch, Amper isn’t meant to act totally on its own but was designed specifically to work in collaboration with human musicians: “One of our core beliefs as a company is that the future of music is going to be created in the collaboration between humans and AI. We want that collaborative experience to propel the creative process forward.”

That said, the team notes that, unlike other songs that have been released by AI composers, the chord structures and instrumentation of “Break Free” are entirely the work of Amper’s AI.

Not Just Music Production

Ultimately, Amper breaks the model followed by today’s music-making AIs. Usually, the original work done by the AI is largely reinterpreted by humans. This means that humans are really doing most of the legwork. As the team notes in their press release, “the process of releasing AI music has involved humans making significant manual changes—including alteration to chords and melodies—to the AI notation.”

That’s not the case with Amper. As previously noted, the chord structures and instrumentation are purely Amper’s; the AI just works with manual inputs from the human artist when it comes to style and overall rhythm.

And most notably, Amper can make music through machine learning in just seconds. Here’s an example of a song made by Amper and rearranged by Taryn.

Yet, while I AM AI may be the first album that’s entirely composed and produced by an AI, it’s not the first time an AI has displayed creativity in music or other arts.

For example, an AI called Aiva has been taught to compose classical music, much as DeepBach was designed to create music inspired by the Baroque composer Johann Sebastian Bach. With this in mind, the album is likely just the first step into a new era…an era in which humans will share artistry (and perhaps even compete creatively) with AI.

Editor’s Note: This article has been updated to clarify what songs were made by Amper and rearranged by Taryn. 


by Dom Galeon on August 21, 2017 
