Google’s Artificial Brain Is Pumping Out Trippy—And Pricey—Art
On Friday evening, inside an old-movie-house-cum-art-gallery at the heart of San Francisco’s Mission district, Google graphics guru Blaise Agüera y Arcas delivered a speech to an audience of about eight hundred geek hipsters.
He spoke alongside a series of images projected onto the wall that once held a movie screen, and at one point, he showed off a nearly 500-year-old double portrait by German Renaissance painter Hans Holbein. The portrait includes a strangely distorted image of a human skull, and as Agüera y Arcas explained, it’s unlikely that Holbein painted this by hand. He almost certainly used mirrors or lenses to project the image of a skull onto a canvas before tracing its outline. “He was using state-of-the-art technologies,” Agüera y Arcas told his audience.
His point was that we’ve been using technology to create art for centuries—that the present isn’t all that different from the past. It was his way of introducing the gallery’s latest exhibit, in which every work is the product of artificial neural networks—networks of computer hardware and software that approximate the web of neurons in the human brain. Last year, researchers at Google created a new kind of art using neural nets, and this weekend, the tech giant put this machine-generated imagery on display in a two-day exhibit that raised roughly $84,000 for the Gray Area Foundation for the Arts, a San Francisco nonprofit devoted to the confluence of art and tech.
The night was one of those uniquely hip yet wonderfully geeky Silicon Valley scenes. “Look! There’s Clay Bavor, the head of Google’s suddenly enormous virtual reality project.” “There’s TechCrunch’s Josh Constine!” “And there’s MG Siegler, who used to write for TechCrunch but now, um, goes to neural network art shows. Or, at least, I think that’s him.” But it was also a night to reflect on the rapid and unceasing rise of artificial intelligence. Technology has now reached the point where neural networks are not only driving the Google search engine but also spitting out art for which some people will pay serious money.
For Agüera y Arcas, this is just a natural progression—part of a tradition that extends through Hans Holbein and back to, well, the first art ever produced. For others, it’s a rather exciting novelty. “This is the first time I’ve seen art that works more like a science project,” said Alexander Lloyd, a regular patron of the Gray Area Foundation, after he spent a few thousand dollars on one piece of neural network art. But Friday’s show was also a reminder that we’re careening towards a new world where machines are more autonomous than they have ever been, where they do even more of the work, where they can transport us to places beyond even our own analog imaginations.
Deep (Learning) Dreams
Today, inside big online services like Google and Facebook and Twitter, neural networks automatically identify photos, recognize commands spoken into smartphones, and translate conversations from one language to another. If you feed enough photos of your uncle to a neural net, it can learn to recognize your uncle. That’s how Facebook identifies faces in all those photos you upload. Now, with an art “generator” it calls DeepDream, Google has turned these neural nets inside out. They’re not recognizing images. They’re creating them.
Google calls this “Inceptionism,” a nod to the 2010 Leonardo DiCaprio movie, Inception, that imagines a technology capable of inserting us into each other’s dreams. But that may not be the best analogy. What this tech is really doing is showing us the dreams of a machine.
To peer into the brain of DeepDream, you start by feeding it a photo or some other image. The neural net looks for familiar patterns in the image. It enhances those patterns. And then it repeats the process with the same image. “This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird,” Google said in a blog post when it first unveiled this project. “This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.”
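Under the hood, that feedback loop amounts to gradient ascent on a network’s own activations: repeatedly nudge the pixels so that whatever the net already half-sees becomes more pronounced. Below is a minimal sketch of that idea in PyTorch, using a pretrained GoogLeNet from torchvision. The layer choice, step size, step count, and file path are illustrative assumptions, not Google’s actual DeepDream code.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained image-recognition network (GoogLeNet, as an example).
# Newer torchvision versions prefer weights="IMAGENET1K_V1" over pretrained=True.
model = models.googlenet(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture the activations of one intermediate layer with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(feat=out))

# Load and prepare the starting image ("input.jpg" is a placeholder path).
img = T.Compose([T.Resize(512), T.ToTensor()])(
    Image.open("input.jpg").convert("RGB"))
img = img.unsqueeze(0).requires_grad_(True)

# The feedback loop: whatever patterns the layer already sees, enhance them.
for step in range(20):
    model(img)
    loss = activations["feat"].norm()   # how strongly the layer responds
    loss.backward()
    with torch.no_grad():
        # Gradient ascent: nudge the pixels toward patterns the net recognizes.
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0, 1)
        img.grad.zero_()
```

Run for more steps, or on a layer deeper in the network, and the half-seen shapes grow into the elaborate, hallucinatory imagery that hung on the gallery walls.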
The result is both fascinating and a little disturbing. If you feed a photo of yourself into the neural net and it finds something that kinda looks like a dog in the lines of your face, it turns that part of your face into a dog. “It’s almost like the neural net is hallucinating,” says Steven Hansen, who recently worked as an intern at Google’s DeepMind AI lab in London. “It sees dogs everywhere!” Or, if you feed the neural net an image of random noise, it may produce a tree or a tower or a whole city of towers. In that same noise, it might find the faint images of a pig and a snail, creating a rather frightening new creature by combining the two. Think: machines on LSD.
Virtually Art
Created by a Google engineer named Alexander Mordvintsev, this technique began as a way of better understanding the way neural networks behave. Though neural nets are enormously powerful, they’re still a bit of a mystery. We can’t completely grasp what goes on inside this web of hardware and software. Mordvintsev and others are still reaching for this understanding. But in the meantime, another Google engineer, Mike Tyka, seized on the technique as a way of creating art. Tyka works with neural networks at Google, but he’s also a sculptor. He saw the technique as a way of combining his two interests.
Artists like Tyka choose the images that get fed into the neural nets. And they can tune the neural nets to behave in certain ways. They may even re-train them to recognize new patterns, unleashing seemingly limitless possibilities. Some of these artworks look quite similar, with their spirals and dogs and trees. But many pieces venture in their own directions, across bleaker and more mechanical landscapes.
Four of Tyka’s neural net creations were auctioned off on Friday: Castles in the Sky With Diamonds. Ground Still State of God’s Original Brigade. Carboniferous Fantasy. And The Babylon of the Blue Sun (see above). Across the gallery, the names matched the strange visual splendor of the images. And that’s not surprising. Joshua To, who curated the show, says that many of the titles were also chosen by neural networks, feeding off the images themselves. An NYU grad student named Ross Goodwin used this technique to generate the titles for Tyka’s work.
For Hansen, these auto-generated works aren’t a big leap from what we’ve had before. “It feels like an advanced version of Photoshop,” he says. But at the very least, DeepDream serves as a symbol for a much bigger change. Machines are doing so much more on their own. You see this, most notably, in the Google search engine, where the rise of neural networks means that humans play less of a role—or, at least, humans are farther removed from the engine’s final decisions. It’s not just following rules that human engineers tell it to follow.
And that gap will only grow, not just in Google’s search engine but across so many other services and technologies. On Friday, at the edges of the gallery, Google invited visitors to strap on its Cardboard virtual reality headsets to venture even deeper into DeepDream. For now, Cardboard stops a little short of a true alternate universe. But the technology is rapidly improving. It’s no stretch to predict that one day, machines will create these virtual worlds largely on their own. Clay Bavor, Google’s head of VR, wasn’t just a guest at the exhibit. He was a sponsor of this weekend’s show and one of the driving forces behind it. Joshua To also works on VR at Google. Yes, Hans Holbein used technology to make his art. But this is going somewhere else entirely.
Update: This story has been updated with the latest attendance and auction numbers from Google.
Article by: Cade Metz
Image: GCHQ by Memo Akten