This VR Exhibit Lets You Connect with the Human Side of War

A pioneering photojournalist hopes VR can restore war photography’s dramatic power to influence and inform us.

A split screen shows Gilad, at left, a reservist in the Israel Defense Forces, and Abu Khaled, at right, a member of the Popular Front for the Liberation of Palestine.


Sun streams through a grid of skylights, carving the gallery’s wooden floor into a checkerboard. When I look up, I can see wispy clouds passing overhead. Large photos hang on the gallery walls. They’re pictures of a landscape devastated by war and portraits of men fighting in those wars.

I hear footsteps behind me. I turn around and watch two figures enter the room and take up stations in front of the portraits. They’re the men from the pictures.

An unseen narrator explains that the shorter one, Jean de Dieu, was a child soldier recruited by the Democratic Forces for the Liberation of Rwanda (FDLR). It’s a Hutu group waging war against Rwanda from its base in the eastern part of the Democratic Republic of the Congo. The other, Patient, is a sergeant in the Congolese army, which is allied with Rwanda’s ruling Tutsi ethnic group.

I know they’re both virtual characters, re-created through 3-D scanning and computer graphics. But they’re startlingly realistic—far more lifelike than anything I’ve seen in a game or movie.

As I approach Jean de Dieu, who looks sad and tired, a conversation begins. The narrator asks: Who is your enemy? What is violence for you? What makes your enemy inhuman? Jean answers in halting, vulnerable tones. I listen to his story of being forced into a refugee camp at age 11 and seeing Congolese militia kill his parents, their brains splattering onto him. Of course he’d hate the Tutsi, and everyone aligned with them.

Now the narrator quizzes Patient. He says the army pursues the FDLR because its soldiers rob, rape, and murder Congolese citizens. “He has no human values and can no longer change his mind,” Patient says of his despised FDLR enemy. “He wants to stay in the forest as part of the rebellion like a savage. Only beasts live in the forest.”

Jean de Dieu (left) fled Rwanda as a child and watched as militia in the Democratic Republic of the Congo killed his parents. Patient (right) fights for the Congolese Army.

But Patient and Jean de Dieu also tell the narrator something else: they just want to live in peace with their neighbors and families. And as I walk through three more rooms and meet more combatants—gang members in El Salvador, a reservist in Israel and a Palestinian fighter in Gaza—I hear that shared hope flicker through in answer after answer. These men all have different stories, different traumas, and different allegiances. But their dreams are the same. Abu Khaled, in Gaza, says 23 of his family members have died during the Israeli occupation, but he still hopes for “peace and brotherhood” in the region.

After 40 minutes, I’m guided to a spot on the floor that resembles a Star Trek transporter pad. An assistant helps me remove my Oculus Rift VR headset and backpack, and I’m back on the ground floor of the MIT Museum, where this ambitious virtual-reality exhibit, “The Enemy,” made its North American premiere in the fall of 2017.

The exhibit—or maybe “experience” is a better word—is the creation of the Belgian-Tunisian photojournalist Karim Ben Khelifa. He interviewed and filmed the fighters and then worked with Fox Harrell, a professor of digital media and artificial intelligence at MIT, and French partners Camera Lucida, France Télévisions Nouvelles Ecritures, and Emissive to bring them to life inside the virtual gallery.

Part of what’s groundbreaking about “The Enemy” is the sheer size of the simulation: the museum cleared out a 3,000-square-foot space so that up to 15 Oculus-wearing visitors at a time could roam freely in the virtual world. The fidelity of the characters and their movements is also striking. You can see the stubble on their chins and the tattoos on their arms and torsos. Thanks to eye-tracking sensors, each figure’s gaze is locked onto yours, cementing the illusion that the fighters are speaking directly to you. The technology works well enough to disappear, allowing you to form direct, empathetic connections with Jean, Patient, Abu, and their fellow combatants.

This photograph of Jean de Dieu is one of those used to create his avatar.


Which is exactly what Ben Khelifa wanted. “My interest was, can you look at these people in the eyes?” he told me. “Can they look you in the eyes? And what is happening when two people look at one another in the eyes? There is a connection, whether we want it or not.”

Right now, “The Enemy” is accessible only to museum visitors, but Ben Khelifa says he wants those trapped in conflict zones, especially young people, to experience it too. If the installation can help people see that every conflict is grounded, to some extent, in stereotypes and misunderstandings, they might come to understand one another better and stop fighting, he believes. It’s a noble goal—but will all future VR producers have such benevolent aims?

Blown away

The idea that VR might be a medium for a new kind of journalism took hold around 2015, when the New York Times released its first VR documentary, “The Displaced,” about three young war refugees. Technically, the pieces produced by the Times’ VR studio are 360° films. Viewers can look in different directions, but otherwise, they watch passively. Sticklers reserve the term “virtual reality” for simulated 3-D environments in which users can move around at will and control objects, as gamers can on platforms such as HTC Vive, PlayStation VR, and Oculus Rift. That’s the type of virtual reality that Ben Khelifa, a freelancer who has covered conflicts in Iraq, Libya, Syria, Israel, Yemen, Somalia, and many other countries, wanted to employ for “The Enemy.”

A virtual-reality re-creation of a fighter, speaking in his own words, might help viewers feel the impact of war more deeply, Ben Khelifa believed. So he went to Israel and Gaza, where he found soldiers willing to be videotaped. While they talked, he scanned them with a Microsoft Kinect and photographed them from multiple angles. He says his experience as a photojournalist helped him get the subjects to open up. “These fighters understand that I’ve been through a lot of fighting too—without holding a gun, but holding my camera,” Ben Khelifa says. “And I think there is—I wouldn’t call it a brotherhood, but an understanding that we both know what war is.”

In April 2015, at New York’s Tribeca Film Festival, Ben Khelifa showed a prototype of “The Enemy,” featuring only Abu Khaled and an Israeli soldier named Gilad. “People were just blown away by the realism of the fighters,” he says. But these early figures didn’t walk, turn their heads, or react to users. “From there, what I’ve been realizing is, the more the fighters are modified to recognize your presence, the more you recognize the presence of the fighter,” he says. “You spend less time wondering if he’s real or not. And you get to listen.”

Gilad, a reservist in the Israel Defense Forces, is filmed for the creation of his avatar as it will appear in “The Enemy.”


A few years earlier Ben Khelifa had met MIT’s Fox Harrell, whose book Phantasmal Media explores how creators of VR and other computational media can build experiences that mutate depending on the user’s actions. Harrell says he’s fascinated by the narrative techniques of the 1950 Kurosawa film Rashomon, which retells the story of a brutal rape and murder from multiple perspectives. “I’ve been interested in how you can use algorithmic processes in AI to trigger these kinds of effects,” he says.

For “The Enemy,” Harrell helped Ben Khelifa and his team of developers in France build a system that surveys visitors before the experience and then monitors them on camera and via the Oculus headset as they interact with each fighter. Visitors’ responses determine the order in which they experience the three conflicts, the message they receive in the final gallery, and even the weather visible through the skylights.
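One way to picture this kind of survey-driven sequencing is as a simple mapping from a visitor's responses to experience parameters. The sketch below is purely hypothetical (the survey fields, scoring rules, and parameter names are all invented for illustration) and is not the production system's actual logic:

```python
# Hypothetical sketch of survey-driven sequencing, loosely inspired by the
# adaptive system described for "The Enemy". All fields and rules invented.

def plan_experience(survey):
    """Map a visitor's pre-experience survey to a personalized walkthrough.

    survey: dict with self-reported familiarity (0-10) per conflict and an
    'optimism' score (0-10).
    """
    conflicts = ["DR Congo", "El Salvador", "Israel/Palestine"]
    # Visit least-familiar conflicts first, where preconceptions are weakest.
    order = sorted(conflicts, key=lambda c: survey["familiarity"].get(c, 0))
    # Closing message and virtual weather track the visitor's stated outlook.
    if survey.get("optimism", 5) >= 5:
        closing, weather = "shared-hopes", "clear"
    else:
        closing, weather = "shared-losses", "overcast"
    return {"order": order, "closing": closing, "weather": weather}

plan = plan_experience({
    "familiarity": {"DR Congo": 2, "El Salvador": 5, "Israel/Palestine": 8},
    "optimism": 7,
})
print(plan["order"])  # least-familiar conflict comes first
```

The point of the sketch is only that a handful of pre-collected signals can deterministically branch the order of rooms, the final message, and even ambient details like the weather.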

John Durant, the director of the MIT Museum, says “The Enemy” took the museum into untested territory, both technologically and politically. “It was very appealing, because a lot of us talk about the ways in which technology may or may not contribute to addressing certain kinds of social and political issues, and sometimes people talk about it more than actually experiencing it and trying it,” he says.

The poignant stories told by Amilcar and Jorge, members of two rival gangs in San Salvador, give that section of the exhibit a sticking power that a photo essay just wouldn’t have, Durant says. “Most of the people who are likely to visit this museum don’t have the experience of growing up as members of a gang where a kind of tribal loyalty is perhaps the most fundamental thing you know,” he says. “So it takes some effort, honestly, to try and think about what the world might be like from that point of view. I think ‘The Enemy,’ to me, made it much easier.”

Amilcar Vladimir (left) and Jorge Alberto (right) are members of warring gangs in El Salvador.


Visitors to the museum report similar revelations. “I’m from Colombia … I’ve lived close to war,” one visitor wrote in the guest book. “Forgiveness is gonna be always the hardest part. For forgiveness to appear, there’s gotta be compassion, and that is what ‘The Enemy’ brought me. Thank you.”


VR has, in fact, begun to compete with old-fashioned photojournalism and TV news. VR producers have been flocking to Southeast Asia lately to document the plight of the Rohingya, a Muslim-majority ethnic group under assault in Buddhist-majority Myanmar. A refugee featured in a searing Al Jazeera VR film recounted how security forces in Myanmar had killed her husband and raped her. An Emmy-nominated VR film shot inside a Rohingya confinement camp by the anti-atrocity group the Nexus Fund showed prisoners languishing with little food or medical care. “I can’t put everybody on a plane and take them to Myanmar, but I know that if I could and they could see this in person, there’s nothing they wouldn’t do to help,” Nexus Fund executive director Sally Smith told CNN.

Jorge Alberto’s hand bears gang-related tattoos.


But if VR is an empathy machine, where will all that empathy be directed in the future? Here in the United States, meddlers have hijacked Google, YouTube, Facebook, and Twitter to generate outrage and spread falsehoods, with political consequences we are only beginning to understand. VR’s immersiveness and realism pull even more directly on our heartstrings. There’s nothing to stop Buddhist extremists in Myanmar, for instance, from making VR films designed to further inflame passions against the Rohingya. “Am I scared by it? Yeah,” Ben Khelifa says. “If you can create empathy, you can brainwash people too.”

In “The Enemy,” the VR storytelling is even-handed to a fault. In fact, if the piece has a limitation, it’s that it refuses to judge the merits of each fighter’s cause. But that limitation is also a strength. The parallel questions put to each combatant allow the visitor to construct “this kind of model of what’s the same and what’s different” for each fighter, Harrell explains. “And that can be some impetus to thinking beyond the preconceptions you had of the conflict.”

Without this kind of commitment to fairness and factuality, VR could easily devolve into a propaganda tool. But that’s true of all journalism. We’re fortunate that a creator with Ben Khelifa’s vision and conscience is showing the way.

Wade Roush is a technology journalist and the producer and host of Soonish, a podcast about technology and the future.

“The Enemy” was produced by Camera Lucida, France Télévisions, the National Film Board of Canada, Emissive, and Dpt, and was staged at the MIT Museum in late 2017. It will continue its North American tour in Montreal and other Canadian cities. For tour dates visit

The World’s First Album Composed and Produced by an AI Has Been Unveiled

A music album called IAMAI, which was released on August 21st, is the first that’s entirely composed by an artificial intelligence.

A New Kind of Composer

“Break Free” is the first song released from a new album by Taryn Southern. The song, and indeed the entire album, features an artist known as Amper—but what looks like a typical collaboration between artists is actually much more than that.

Taryn is no stranger to the music and entertainment industry. She is a singer and digital storyteller who has amassed more than 500 million views on YouTube and has over 450,000 subscribers. Amper, on the other hand, is making his debut…except he’s (it’s?) not a person.

Amper is an artificially intelligent music composer, producer, and performer. The AI was developed by a team of professional musicians and technology experts, and it’s the very first AI to compose and produce an entire music album. The album is called I AM AI, and the featured single is set to release on August 21, 2017.

Check out the song “Break Free” in the video below:

As film composer Drew Silverstein, one of Amper’s founders, explained to TechCrunch, Amper isn’t meant to act totally on its own but was designed specifically to work in collaboration with human musicians: “One of our core beliefs as a company is that the future of music is going to be created in the collaboration between humans and AI. We want that collaborative experience to propel the creative process forward.”

That said, the team notes that, contrary to the other songs that have been released by AI composers, the chord structures and instrumentation of “Break Free” are entirely the work of Amper’s AI.

Not Just Music Production

Ultimately, Amper breaks the model followed by today’s music-making AIs. Usually, the original work done by the AI is largely reinterpreted by humans. This means that humans are really doing most of the legwork. As the team notes in their press release, “the process of releasing AI music has involved humans making significant manual changes—including alteration to chords and melodies—to the AI notation.”

That’s not the case with Amper. As previously noted, the chord structures and instrumentation are purely Amper’s; the AI simply takes manual inputs from the human artist on style and overall rhythm.

And most notably, Amper can make music through machine learning in just seconds. Here’s an example of a song made by Amper and rearranged by Taryn.

Yet, while IAMAI may be the first album that’s entirely composed and produced by an AI, it’s not the first time an AI has displayed creativity in music or in other arts.

For example, an AI called Aiva has been taught to compose classical music, like how DeepBach was designed to create music inspired by Baroque artist Johann Sebastian Bach. With this in mind, the album is likely just the first step into a new era…an era in which humans will share artistry (and perhaps even compete creatively) with AI.

Editor’s Note: This article has been updated to clarify what songs were made by Amper and rearranged by Taryn. 

Source: The World’s First Album Composed and Produced by an AI Has Been Unveiled

by Dom Galeon on August 21, 2017 

 Amper Music


DNA could store all of the world’s data in one room | Science | AAAS


Humanity has a data storage problem: More data were created in the past 2 years than in all of preceding history. And that torrent of information may soon outstrip the ability of hard drives to capture it. Now, researchers report that they’ve come up with a new way to encode digital data in DNA to create the highest-density large-scale data storage scheme ever invented. Capable of storing 215 petabytes (215 million gigabytes) in a single gram of DNA, the system could, in principle, store every bit of data ever recorded by humans in a container about the size and weight of a couple of pickup trucks. But whether the technology takes off may depend on its cost.

DNA has many advantages for storing digital data. It’s ultracompact, and it can last hundreds of thousands of years if kept in a cool, dry place. And as long as human societies are reading and writing DNA, they will be able to decode it. “DNA won’t degrade over time like cassette tapes and CDs, and it won’t become obsolete,” says Yaniv Erlich, a computer scientist at Columbia University. And unlike other high-density approaches, such as manipulating individual atoms on a surface, new technologies can write and read large amounts of DNA at a time, allowing it to be scaled up.

Scientists have been storing digital data in DNA since 2012. That was when Harvard University geneticists George Church, Sri Kosuri, and colleagues encoded a 52,000-word book in thousands of snippets of DNA, using strands of DNA’s four-letter alphabet of A, G, T, and C to encode the 0s and 1s of the digitized file. Their particular encoding scheme was relatively inefficient, however, and could store only 1.28 petabytes per gram of DNA. Other approaches have done better. But none has been able to store more than half of what researchers think DNA can actually handle, about 1.8 bits of data per nucleotide of DNA. (The number isn’t 2 bits because of rare, but inevitable, DNA writing and reading errors.)
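The basic bit-to-base mapping described above can be sketched in a few lines. This is a toy illustration of the underlying idea only, not the actual Harvard or Columbia schemes, which add redundancy and avoid error-prone sequences such as long runs of a single base:

```python
# Minimal 2-bits-per-nucleotide codec: a toy illustration of how binary data
# maps onto DNA's four-letter alphabet. Real published schemes are more
# conservative, trading density for robustness to synthesis/sequencing errors.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand, two bits per nucleotide."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand produced by encode()."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)  # prints CAGACGGC: two bytes become eight bases
assert decode(strand) == b"Hi"
```

At four bases per byte, this naive mapping hits exactly 2 bits per nucleotide, which is why the practical ceiling quoted above (about 1.8 bits, after accounting for errors) sits just below it.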

Erlich thought he could get closer to that limit. So he and Dina Zielinski, an associate scientist at the New York Genome Center, looked at the algorithms that were being used to encode and decode the data. They started with six files, including a full computer operating system, a computer virus, an 1895 French film called Arrival of a Train at La Ciotat, and a 1948 study by information theorist Claude Shannon. They first converted the files into binary strings of 1s and 0s, compressed them into one master file, and then split the data into short strings of binary code. They devised an algorithm called a DNA fountain, which randomly packaged the strings into so-called droplets, to which they added extra tags to help reassemble them in the proper order later. In all, the researchers generated a digital list of 72,000 DNA strands, each 200 bases long.
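The tagging step can be illustrated with a simplified sketch. The real DNA Fountain packages random XOR combinations of the binary strings into droplets, with each tag seeding the random selection; the toy version below (all names invented for illustration) keeps only the simpler idea that tags let shuffled strands be reassembled in the proper order:

```python
import random

# Toy illustration of tagging: split a message into fixed-size chunks, prefix
# each with an index tag, and show the original order can be recovered even if
# the strands come back shuffled, as sequencing reads do. This is NOT a
# fountain code; DNA Fountain's droplets are XOR combinations of chunks.

def make_strands(data: bytes, chunk_size: int = 4) -> list:
    chunks = [data[i:i+chunk_size] for i in range(0, len(data), chunk_size)]
    # One-byte index tag per strand is enough for this small example.
    return [bytes([i]) + chunk for i, chunk in enumerate(chunks)]

def reassemble(strands: list) -> bytes:
    ordered = sorted(strands, key=lambda s: s[0])  # sort by index tag
    return b"".join(s[1:] for s in ordered)

message = b"files compressed into one master file"
strands = make_strands(message)
random.shuffle(strands)  # sequencing returns strands in arbitrary order
assert reassemble(strands) == message
```

The fountain-code construction improves on plain index tags because any sufficiently large subset of droplets suffices to recover the file, so a few lost or unreadable strands don't matter.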

They sent these as text files to Twist Bioscience, a San Francisco, California–based startup, which then synthesized the DNA strands. Two weeks later, Erlich and Zielinski received in the mail a vial with a speck of DNA encoding their files. To decode them, the pair used modern DNA sequencing technology. The sequences were fed into a computer, which translated the genetic code back into binary and used the tags to reassemble the six original files. The approach worked so well that the new files contained no errors, they report today in Science. They were also able to make a virtually unlimited number of error-free copies of their files through polymerase chain reaction, a standard DNA copying technique. What’s more, Erlich says, they were able to encode 1.6 bits of data per nucleotide, 60% better than any group had done before and 85% of the theoretical limit.

“I love the work,” says Kosuri, who is now a biochemist at the University of California, Los Angeles. “I think this is essentially the definitive study that shows you can [store data in DNA] at scale.”

However, Kosuri and Erlich note the new approach isn’t ready for large-scale use yet. It cost $7000 to synthesize the 2 megabytes of data in the files, and another $2000 to read it. The cost is likely to come down over time, but it still has a long way to go, Erlich says. And compared with other forms of data storage, writing and reading to DNA is relatively slow. So the new approach isn’t likely to fly if data are needed instantly, but it would be better suited for archival applications. Then again, who knows? Perhaps those giant Facebook and Amazon data centers will one day be replaced by a couple of pickup trucks of DNA.



Breaking the Glass Ceiling with Dame Stephanie Shirley aka “Steve”

Digital and Entrepreneurial Pioneer/Kindertransport Survivor

If you have ever wondered if you could make a difference in the lives of others, please take a few moments to listen to the story of Dame Stephanie Shirley. The story of how she was able to build a software company almost entirely with women in the 1960s is compelling.
Read and learn more about Dame Stephanie Shirley at these sites.




Explore the World With GeoGuessr


GeoGuessr is an engaging way to explore the world and test your geographic wizardry in a game designed by Anton Wallén of Sweden. Through the wonder of Google Street View, GeoGuessr plops players down in unknown locations and challenges them to guess where in the world they are from visual clues. You can choose between global, country, or city maps, which is quite nice. It’s fun and free!

Give it a go.  Or better yet, challenge a partner to play with you.




Imagine Discovering That Your Teaching Assistant Really Is a Robot

Featured Image: IBM’s Watson Helped Design Karolina Kurkova’s Light-Up Dress for the Met Gala.  Karolina Kurkova attends the “Manus x Machina: Fashion In An Age Of Technology” Costume Institute Gala at Metropolitan Museum of Art.  Getty Images

IBM’s artificial intelligence (AI) product Watson teamed up with the Georgia Institute of Technology to experiment with using Watson as a TA for an online course. “Jill” Watson was able to deftly handle most questions, stimulate weekly discussions, and fool most students, who never guessed that they weren’t communicating with a real person.

Last year, a team of Georgia Tech researchers began creating Ms. Watson by poring through nearly 40,000 postings on a discussion forum known as “Piazza” and training her to answer related questions based on prior responses. By late March, she began posting responses live.

By Melissa Korn | Wall Street Journal
Read Full Article Here

Imagine Science Films

Imagine Science Films is a 501(c)(3) non-profit organization in existence since 2008 committed to promoting a high-level dialogue between scientists and filmmakers.

Their mission is to bridge the gap between art and science through film, thereby transforming the way science is communicated to the public and encouraging collaboration across disciplines.

Together, scientists, who dedicate their lives to studying the world in which we live, and filmmakers, who interpret and expose this knowledge, can make science accessible and stimulating to the broadest possible audience. Imagine Science Films is committed to drawing attention to the sciences, whether it is through art or our community outreach efforts.

Read more about Imagine Science Films here:

INFLUX Public Art Project in TEMPE, AZ


I’d like to announce that my #influxaz project is up on #HaydenMill in #Tempe. All credits to Casey Farina. #publicart #projectionmapping #raspberrypi #influxcycle6 #generativeart #CityofTempe #downtowntempe

Cascade.Erode.Construct. is a video installation that abstractly explores the history of the iconic Hayden Flour Mill. The Mill’s proximity to water (the Salt River) is an integral part of its identity as a Tempe landmark. The movement and erosive power of water form the fundamental structure of the animation from which new forms are constructed. The visual artifacts that remain on the north wall of the Mill are isolated and reinvigorated by the projected light. The animation was created by using a digital image of the wall as the input for a variety of algorithmic processes. The installation repeats every ten minutes between 8:00 PM and 1:00 AM on the north wall of the Hayden Mill. Casey’s research was facilitated by John Southard and E. Hunter Hansen in the Tempe Historic Preservation Office and Jared Smith at the Tempe History Museum.

This project was funded through the City of Tempe Municipal Arts Fund with the support of the Tempe Municipal Arts Commission.


Chip Thomas: Activist, Artist, Doctor



Not all artists are activists, but count Chip Thomas in. 

Embracing the relationship between art and action, Chip Thomas is busy spreading peace, love + awareness through paste + paint all over the world!

 fight like a woman

Photo is Courtesy of Chip Thomas, All Rights Reserved

Speaking @ GCC on

Friday, March 25th

12 pm to 1:30pm in the Student Union


Involved with multiple organizations highlighting grassroots initiatives, Chip Thomas is making a difference in unique ways.


One of the non-profits Chip Thomas supports is Honor the Treaties. They are an organization dedicated to amplifying the voices of Indigenous communities through art and advocacy, funding collaborations between Native artists and Native advocacy groups so that their messages can reach a wider audience. Chip Thomas is one of several high-profile allies, including John Densmore, Daryl Hannah, and Peter Yarrow. Read more here.


Chip Thomas’ Just Seeds bio includes an impressive list of interests including Anti-War, Culture & Media, Education, Environment & Climate, Global Solidarity, Health, Indigenous Resistance, Inspiration, Police & Prisons, Racial Justice, and Social Movement. With members working across North America, Just Seeds believes in the transformative power of personal expression in concert with collective action.  Read more here.


Begun by Chip Thomas in 2012, the Painted Desert Project connects public artists with communities through mural opportunities on the Navajo Nation. In an effort to boost tourism on the reservation, to supplement the incomes of families with roadside stands, and to nurture the creative talent of local youth, Chip Thomas invited a few world-renowned street artists to come to the Navajo Nation to paint murals in 2012 and has continued doing so as funding allows. Click here to see a map of mural sites across northern Arizona.