Inside Gray Area & Google’s First DeepDream Art Show
An exhibition about neural networks unpacks the esoteric world of artificial intelligence.
In essence, they were machine dreams. “Yes, androids do dream of electric sheep,” wrote The Guardian. The story soon went viral, giving the rest of the world an introduction to the burgeoning community of engineers and artists working on computer vision and the neural networks on which DeepDream is based. It spawned hundreds of follow-ups and dozens of image generators, and made neural networks a household term.
This weekend, the first art exhibit and auction dedicated to neural network art opened at Gray Area, the San Francisco gallery and arts foundation. The show was curated by Joshua To, a design and UX lead for VR at Google, and the idea for it, according to To, evolved directly from the explosion of online interest in the project. “DeepDream had gone viral and everyone’s experience was seeing the work on their phone or laptop screens,” he told me over email. “We thought it would be powerful to curate a collection of pieces so that people can experience the work printed large, high quality, and framed professionally with gallery lighting.”
Inside the gallery, work from 10 different artists and engineers hangs on the walls, chosen to represent the sheer diversity of ways computer vision can be used to make art. Each artist has a unique background, ranging from the VR filmmaker Jessica Brillhart to Josh Nimoy, a computational artist who worked on Tron: Legacy. While it would have been easy to let the bizarre, and quite beautiful, imagery stand alone, To says that explaining the ideas behind neural networks was absolutely crucial. But if you’ve ever tried to explain DeepDream out loud (and you don’t work in AI), you know that’s easier said than done.
What emerged was a classic problem of information design: how to communicate the remarkably complex, cerebral ideas behind neural networks to a general audience, without leaning on the esoteric language and concepts native to AI, all in a digestible format that visitors could take with them. Led by Gray Area, the exhibition team came up with a brochure filled with metaphor-rich language explaining the basics (“DeepDream is almost like cloud-watching,” writes featured artist Mike Tyka).
But the brochure also includes a symbol-based wayfinding system for understanding the tech behind each piece of art. First, it lays out, in dead-simple terms, the four techniques on view in the gallery: DeepDream, Class Visualization, Style Transfer, and Fractal DeepDream. Then it assigns each technique a graphic symbol. Beneath each piece inside the gallery, a placard identifies not only the artist and year, but also the icon that corresponds to the technique the artist used.
It’s a semantic wayfinding system designed to help visitors navigate the esoteric world of artificial intelligence.
For example, take a piece by the self-described code artist Mario Klingemann, a sepia-toned tangle of eyes and jawlines. In the gallery, it is labeled with an overlapping series of concentric rings. Turn to the brochure and you’ll see that this symbol refers to a straightforward use of DeepDream: a trained neural network is fed an image (here, presumably, a woman’s face), which the algorithm then changes incrementally, creating, as the curators explain, a feedback loop between the original image and the neural network’s reading of it. “This process gets repeated many thousands of times to create unique imagery,” they write.
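For the technically curious, the feedback loop the curators describe can be sketched in a few lines of code. The following is a minimal illustration, assuming PyTorch and a pretrained GoogLeNet from torchvision; the layer, step size, iteration count, and file names are stand-ins rather than the settings Klingemann or the other artists actually used, and production DeepDream code adds refinements such as multi-scale “octaves” and jitter.

```python
# A minimal DeepDream-style sketch: repeatedly nudge an image so that
# whatever an intermediate layer of a trained network already "sees"
# in it becomes slightly more pronounced.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture the activations of one intermediate layer (illustrative choice).
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(target=output)
)

# Load the starting image (file name is hypothetical).
to_tensor = transforms.Compose([transforms.Resize(512), transforms.ToTensor()])
img = to_tensor(Image.open("input.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

# The feedback loop: forward pass, measure the layer's response,
# then step the pixels in the direction that amplifies it.
for step in range(200):  # gallery pieces iterate far more than this
    model(img)
    loss = activations["target"].norm()
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0, 1)
        img.grad.zero_()

transforms.ToPILImage()(img.detach().squeeze(0)).save("dream.jpg")
```

Each pass amplifies whatever patterns the network thinks it recognizes, which is how an ordinary photograph can dissolve into a tangle of eyes and half-formed features.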
According to Gray Area’s Josette Melchor, developing and organizing the show, auction, and accompanying symposium with Google took a full six months. “It was incredibly important to us that people who attended the event could leave having a solid understanding of neural networks and DeepDream,” To says. “We put in a lot of work to achieve this goal.” The weekend’s auction of the pieces raised almost $98,000 to benefit the gallery’s mission of supporting, through scholarships, young artists working with technology.
Each of the featured pieces represented a collaboration between a machine and a human. Finding a way to explain that complex, still-nascent working relationship to the public isn’t just important in the context of the exhibit; it’s important to the advancement of artificial intelligence in general.