
Exploring the Future of Music with Spencer Salazar

Dorothy Santos, GAFFTA Blogger

Advances in technology are transforming music into an increasingly interactive experience for the listener. It's no longer only about listening, but about bringing a level of tangibility to sound and audio. The growing use of smartphones, tablets, and other mobile devices is providing new ways to learn about and engage with the world, and artists, musicians, and creative technologists are now developing methods that involve users in creating their own unique experiences.

For instance, the way we read has changed drastically. American publishing house McSweeney's incorporated icons for interactive artworks into its mobile applications, giving readers a way to engage with the material while exposing them to new media artists. Within the music industry, we see the same innovation, but with much more interactivity in mind. As touch-screen technology becomes increasingly prevalent in how we obtain information, creatives must grapple with how it affects our individual and collective experiences. From the internet to mobile devices, music has been among the first commodities to reach users on each new platform. It is no longer an auditory escape but a multi-sensory experience, which forces artists and musicians to look at music production in radically different ways.

The upcoming GAFFTA course Music and Mobile Computing for iOS, taught by Spencer Salazar and Mark Cerqueira, will not only help developers learn new skills but also help shape nascent ideas into potentially sustainable projects. Despite a busy schedule, Salazar answered questions about the field of mobile computing, how the course came to fruition, and where he believes the field is headed. He also shared some projects currently in the works at Smule.

Q & A with Music and Mobile Computing for iOS Instructor Spencer Salazar

Dorothy Santos (DS): Can you provide some background on processing and design in iOS as a creative tool? In layman’s terms, how would you describe music and mobile computing in iOS? How is it used and by whom, typically?

Spencer Salazar (SS): The explosive popularity of smartphones in the past 5 years has led to a proliferation of small computers in apparently everyone’s pocket or purse, each persistently connected to the internet, aware of its geographic location and spatial orientation, always-on, and capable of extensive audio/visual processing. There are many similarities between traditional desktop computing and the new mobile model; our course explores how the distinguishing qualities of mobile computing can be leveraged for new/interesting musical experiences, using iOS as the specific programming environment for this exploration.

Mark and I both come from Smule, where these technologies power apps like Ocarina, a combination of instrument, music education tool, and social-music experience. I'm a PhD student at CCRMA (Stanford's Computer Music department), where groups like the Mobile Phone Orchestra ("MoPhO") use iOS to realize musical compositions in a performance context. Beyond that, forward-thinking musicians such as Björk and Brian Eno have embraced mobile technologies, the former releasing her latest album in the form of an iPad application. So, in our experience, there's a combination of software developers and musicians who see a lot of value in these tools.
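
As a rough illustration of what "leveraging the distinguishing qualities of mobile computing" can mean in practice, here is a minimal sketch of a tilt-controlled instrument. It is not course material and uses today's Swift, CoreMotion, and AVFoundation APIs (the course itself predates Swift); every name in it is illustrative.

```swift
import AVFoundation
import CoreMotion

// A sketch: the phone's orientation sensors become a musical controller.
// Tilting the device bends the pitch of a looping sine tone.
// (AVAudioSession setup and error handling are omitted for brevity.)
final class TiltTheremin {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let pitchShift = AVAudioUnitTimePitch()
    private let motion = CMMotionManager()

    func start() throws {
        let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)!
        engine.attach(player)
        engine.attach(pitchShift)
        engine.connect(player, to: pitchShift, format: format)
        engine.connect(pitchShift, to: engine.mainMixerNode, format: format)
        try engine.start()

        // One second of a 440 Hz sine wave, looped as the raw tone.
        let frames = AVAudioFrameCount(format.sampleRate)
        let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames)!
        buffer.frameLength = frames
        let samples = buffer.floatChannelData![0]
        for i in 0..<Int(frames) {
            samples[i] = 0.25 * sin(2 * Float.pi * 440 * Float(i) / Float(format.sampleRate))
        }
        player.scheduleBuffer(buffer, at: nil, options: .loops)
        player.play()

        // Map device roll (-π...π) to ±1200 cents: a full octave each way.
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.startDeviceMotionUpdates(to: .main) { [weak self] motionData, _ in
            guard let roll = motionData?.attitude.roll else { return }
            self?.pitchShift.pitch = Float(roll / .pi) * 1200
        }
    }
}
```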

DS: What do you hope students will get out of the course?

SS: The curriculum is about 50/50 audio and physical interaction. The focus is partly on what kinds of musical experiences make sense given the hardware interface, and partly on how to implement them in a way that will actually work reasonably well with limited computing resources. We hope students will produce some sort of interesting app/experience/instrument for their iPhone or iPad. After two weeks it's more likely this will be in proof-of-concept form rather than something ready to ship to the App Store, but from there, we also hope they will have the tools to develop that app further and create new ones.
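
The "limited computing resources" point has a well-known discipline behind it: audio is rendered on a real-time thread, so the render code must not allocate memory, take locks, or do I/O. The sketch below shows that pattern using AVAudioSourceNode (iOS 13+) as a modern stand-in for the Core Audio render callbacks of the course's era; it is an assumption-laden illustration, not course code.

```swift
import AVFoundation

// A sketch of real-time-safe synthesis: all per-sample work happens in a
// render block that avoids allocation, locking, and Objective-C dispatch.
final class EfficientTone {
    private let engine = AVAudioEngine()
    private var phase: Float = 0
    var frequency: Float = 220          // set from the UI thread

    func start() throws {
        let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)!
        let source = AVAudioSourceNode(format: format) { [unowned self] _, _, frameCount, bufferList -> OSStatus in
            let buffers = UnsafeMutableAudioBufferListPointer(bufferList)
            let out = buffers[0].mData!.assumingMemoryBound(to: Float.self)
            let step = 2 * Float.pi * self.frequency / 44100
            for frame in 0..<Int(frameCount) {
                out[frame] = 0.2 * sin(self.phase)   // cheap per-sample work only
                self.phase += step
                if self.phase > 2 * Float.pi { self.phase -= 2 * Float.pi }
            }
            return noErr
        }
        engine.attach(source)
        engine.connect(source, to: engine.mainMixerNode, format: format)
        try engine.start()
    }
}
```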

DS: Where do you see this subject matter or field going?

SS: Hmm, it's a tough question, because we've really just scratched the surface of what is possible with the current way of thinking about it. But we think there will be a lot more software that really takes advantage of location-awareness, the physicality of the device itself, and the degree to which one's phone is part of one's identity.

From a software engineering perspective, there is a lot of room for growth. At the moment, to put together a solid network-enabled app you need to have a handle on at least three different programming frameworks. In an ideal world you would only need one, so that’s a pretty glaring deficiency in the toolset.
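
To make the "three frameworks" point concrete without speaking for Salazar's exact stack, here is a hypothetical modern equivalent: even a toy record-and-upload app touches UIKit for interaction, AVFoundation for audio, and Foundation's URLSession for networking, each with its own idioms. The endpoint URL and all names below are made up for the sketch.

```swift
import UIKit
import AVFoundation

// A toy network-enabled audio app spanning three frameworks.
// (AVAudioSession setup and microphone-permission handling are omitted.)
final class RecorderViewController: UIViewController {
    private var recorder: AVAudioRecorder?

    override func viewDidLoad() {
        super.viewDidLoad()
        let button = UIButton(type: .system)              // framework 1: UIKit
        button.setTitle("Record & Upload", for: .normal)
        button.addTarget(self, action: #selector(record), for: .touchUpInside)
        button.frame = view.bounds
        view.addSubview(button)
    }

    @objc private func record() {
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("take.m4a")
        let settings: [String: Any] = [AVFormatIDKey: kAudioFormatMPEG4AAC,
                                       AVSampleRateKey: 44100,
                                       AVNumberOfChannelsKey: 1]
        recorder = try? AVAudioRecorder(url: url, settings: settings)  // framework 2: AVFoundation
        recorder?.record()
        DispatchQueue.main.asyncAfter(deadline: .now() + 5) { [weak self] in
            self?.recorder?.stop()
            self?.upload(url)
        }
    }

    private func upload(_ fileURL: URL) {
        // framework 3: URLSession; the endpoint is fictional.
        var request = URLRequest(url: URL(string: "https://example.com/traces")!)
        request.httpMethod = "POST"
        URLSession.shared.uploadTask(with: request, fromFile: fileURL).resume()
    }
}
```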

DS: What’s the most exciting thing you’ve seen done with these tools?

SS: I’m not sure what the *most* exciting thing is, but it’s pretty cool to see mainstream artists like Björk embracing this type of technology (e.g. Biophilia). It’s also cool to see things like this:

I don’t watch Glee or even have a TV, but in this video, 3,000 complete strangers from around the world are singing together (in support of those hit by the Japanese tsunami in 2011). This is something that just wasn’t possible until very recently, thanks to mobile audio technology.

DS: What new projects are you working on?

SS: I’m working on a mobile application where users can leave audio “traces” throughout their world, and other people can tune in to the traces that have been left around them. I’m also trying to develop that “one framework” I mentioned above.
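
As a thought experiment, the "traces" idea might be modeled as geotagged audio plus a proximity query; this is purely a guess at the concept, not Salazar's actual design, and every name here is hypothetical.

```swift
import CoreLocation

// A trace pairs a recording with the place it was left.
struct AudioTrace {
    let audioURL: URL
    let location: CLLocation
    let createdAt: Date
}

// Return the traces within earshot of the listener, nearest first.
// CLLocation.distance(from:) returns meters along the Earth's surface.
func traces(near listener: CLLocation,
            within radius: CLLocationDistance,
            from all: [AudioTrace]) -> [AudioTrace] {
    all.filter { $0.location.distance(from: listener) <= radius }
       .sorted { $0.location.distance(from: listener) < $1.location.distance(from: listener) }
}
```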

Music and Mobile Computing for iOS will be taught at GAFFTA on Tuesdays and Thursdays, July 3–12. Learn more about the class and the instructors, and register now on the GAFFTA class page.

Check out a few of Spencer’s latest projects below.

Here’s the latest from Smule:

See Borderlands, an app developed by Chris Carlson during a one-term course at Stanford’s CCRMA, which inspired this workshop: