On Lo-Fi Imagery and Documentation: Audio Visualist, Stephanie Sherriff
Stephanie Sherriff is one of those artists who knows what she wants. Somehow her work continues to evolve while remaining unmistakably her own, a rare trait amongst experimental media-based artists. Before her last set of visual performances at the Bunker, she revamped her performance system, built some new FX, and shot some new footage. Needless to say, those of us who had seen Stephanie perform before were eagerly anticipating her latest material. Naturally, Stephanie pulled out all of the stops. She showcased an extensive array of her repertoire, including her signature dream-like imagery pushed through her unique brand of pixel degradation.
Stephanie took some time to explain her new OpenGL-based performance system and her experience using Max/MSP/Jitter in Part III of the Toolmaking for a Performance Context series. Check out Part I, where Jono Brandel discusses his new performance software, Neuronal Synchrony, and Part II, where I discuss interface design and transitioning from openFrameworks to Cinder with Reza Ali.
Cullen Miller: You’ve been involved with GAFFTA for some time now and have gone through many phases of your own work throughout that time. Can you explain some of your current obsessions and practices that you have been applying?
Stephanie Sherriff: I started doing visuals with Gray Area in 2010. At the time I was interested in developing a system for audio/visual performances using field recordings + moving images and had been fooling around with Max/MSP/Jitter for a year or so. Sound was/is a really important part of my process and inspiration. When I started VJing it kind of felt separate from “my work” because I was so attached to the experience of the sonic element. Performing visuals mainly started as a way to continue developing ideas for a larger audio/visual system.
Since then I’ve really grown to love VJing. I think it’s really neat how every artist has their own unique style and method for generating and controlling imagery in real-time. I’ve grown to really appreciate the temporal nature of the event, the experience, and the dynamic nature of every space, audience, and projection surface.
CM: You use Max/MSP/Jitter for your performances and have recently been building a new environment based on OpenGL. Can you tell us a bit about why you’ve transitioned into using OpenGL versus what you were using previously?
SS: Until recently, with the release of Max 6, OpenGL had not yet been fully incorporated into the Max environment. When I first began building the foundation of my interface, I was working with Max 5, and basic Jitter objects were the only way to incorporate/manipulate video in a patch. Jitter objects can be very taxing on your machine because they operate on the Central Processing Unit [CPU] instead of communicating directly with the graphics card. If you’re working with video, it doesn’t take much to get the computer chugging. Now that OpenGL has been incorporated into Max there is less of a need to depend on the CPU for graphics.
OpenGL is a preexisting graphics API structured to communicate directly with the Graphics Processing Unit [GPU] in a computer, and the GPU is designed specifically to buffer video and build images rapidly. Switching to OpenGL makes a lot of sense for my process because I’m using video. For anyone interested in using OpenGL within the Max environment, it definitely requires learning a whole new process of moving data around [even if you’re familiar with OpenGL] and most likely means restructuring the foundation of any preexisting patches.
CM: How has it benefited your performances?
SS: I haven’t actually switched over for a performance just yet. As I mentioned, it’s a whole new set of rules and using it has basically required me to re-program my whole system and learn how to use a new language [ Thank you, Jeff Lubow! ].
Once the new system is finished, I’ll be able to start experimenting with a fourth channel of video and some shaders.
From what I can tell shaders are a huge reason to switch to OpenGL in Max. They seem to open up a lot of space for experimentation and help the imagery generated in Max feel less ‘Maxy’ [like being able to see the software in the work]. I have to say… Cycling ’74 is doing some pretty amazing stuff. Max 6 is definitely an upgrade…and Max for Live! Are you kidding me?!?!? Mind-blasting!!
CM: The output of your performances is often lo-fi. When you are documenting the work, are you capturing it this way? Is it a processing technique, or is it both?
SS: It’s both.
I like to use super cheap, tiny cameras to record imagery and am interested in how digital lo-fi cameras record light and motion. A lot of times imagery ends up looking kind of animated because the frame rate is so low. There are also these really cool instances of magenta light flares that happen if the air is especially humid and hot. It’s always a little unpredictable. I also prefer to use small cameras because they’re less invasive and I can carry them around without stressing out about breaking or losing them; I can tie them to things. I like their rawness.
I’m also using the software to add texture to the performance and can adjust a number of variables like brightness, contrast, saturation, etc. I like to intentionally exaggerate the already-pixelated quality of the moving imagery.
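The kind of adjustment Stephanie describes can be sketched in just a few lines. Here is a minimal, hypothetical Python illustration of brightness and contrast applied to 8-bit pixel values; the function names are mine, and this is an analogy for the variables tweaked live in a patch, not her actual Max/MSP/Jitter code:

```python
# Hypothetical sketch: brightness/contrast adjustment on 8-bit pixel
# values, the kind of per-channel variable a VJ patch exposes live.
# Not code from Stephanie's Max/MSP/Jitter system.

def clamp(v):
    """Keep a value inside the valid 8-bit range [0, 255]."""
    return max(0, min(255, int(round(v))))

def adjust(pixels, brightness=0, contrast=1.0):
    """Scale contrast around mid-gray [128], then add a brightness offset."""
    return [clamp((p - 128) * contrast + 128 + brightness) for p in pixels]

row = [0, 64, 128, 192, 255]
# Pushing contrast and brightness up crushes the extremes toward 0/255,
# exaggerating an already-pixelated, lo-fi look.
print(adjust(row, brightness=20, contrast=1.5))  # [0, 52, 148, 244, 255]
```

With contrast above 1.0, dark and bright values saturate quickly, which is one simple way software can exaggerate the harshness of cheap-camera footage.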
CM: Can you explain your documentation process and what’s happening to the signal in your performance system?
SS: There are a lot of elements. First, there’s recording the footage, then curating + editing the footage. Then all of the video goes into Max/MSP/Jitter… where I am able to mix and manipulate different channels of video into one main video out. I also use MIDI to control a set of variables in each video channel, and recently bought one of those cheap, little KORG MIDI controllers. Overall the functionality of the system is really very simple. Most of the distortion is done in the camera. During a performance I’m mainly experimenting with layering images and manipulating color and texture. It’s a lot of fun.
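The MIDI side of a setup like this is simple to sketch: a controller knob sends a control-change value from 0 to 127, which the patch scales into whatever range a given video parameter needs. A small, hypothetical Python illustration (function names are mine, not from Stephanie's system):

```python
# Hypothetical sketch: scaling a 7-bit MIDI control-change value (0-127)
# into a video parameter's range, e.g. a KORG controller knob driving a
# crossfade between two video channels. Illustrative only; not code from
# Stephanie's Max/MSP/Jitter system.

def cc_to_param(cc_value, lo, hi):
    """Linearly scale a MIDI CC value [0, 127] into the range [lo, hi]."""
    return lo + (cc_value / 127.0) * (hi - lo)

def mix(pixel_a, pixel_b, cc_value):
    """Crossfade two pixel values by knob position (0 = all A, 127 = all B)."""
    t = cc_to_param(cc_value, 0.0, 1.0)
    return round(pixel_a * (1 - t) + pixel_b * t)

print(cc_to_param(127, 0.0, 1.0))  # knob fully up -> 1.0
print(mix(0, 255, 64))             # mid-knob: roughly halfway between
```

One knob per variable per channel is all a system like this needs, which is part of why the overall functionality can stay so simple.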
CM: You are also a sound artist. How does your experience with sound relate to your visual work? And vice versa?
SS: Sound was what originally pointed me in the direction of visual performance. The two go hand-in-hand, as far as I’m concerned. I’m looking forward to some epic audio/visual collaborations in the near future!
Stephanie Sherriff is an interdisciplinary sculptor, performer, and media artist currently residing in Oakland, California. Her most recent work explores both artistic and scientific concepts through experiments with electricity, water, found objects, lo-fi digital video camera footage, sound, and software.