How to use AI for your art responsibly

Overview

Tools that use machine learning to generate synthetic media are becoming increasingly accessible. How are coordinated groups using synthetically generated media and other media manipulation tactics to mislead and cause harm in society? How are researchers and digital platforms attempting to detect these media? What are responsible, ethical ways for artists and activists to use this technology?

The Partnership on AI and Gray Area are teaming up to provide insights on state-of-the-art technology for both generating and detecting computationally generated media, in the context of larger trends in industry and civil society.

Details

Date: Tuesday, July 28, 2020

Time: 1pm – 2:30pm PT

Cost: This event will be live-streamed online for free. Donations help offset the costs of producing events and allow us to offer them for free to those who cannot give at this time — please consider contributing what you can.

Outline

• Introduction to manipulating media, from art to disinformation
• Basics of synthetic media: the buzz, the threat, what is out there
• Overview of “cheap fake” media manipulation tactics
• Industry attempts and challenges in detecting manipulated media
• Artist examples of media manipulation
• Recommendations and open questions: What is ethical and responsible?

This 1.5-hour online event will be held via video webinar. We will have an online community chat where you can share your questions during the event. If you have any additional questions, please contact us at [email protected]

Speakers

Emily Saltz

Emily Saltz is a Research Fellow at the Partnership on AI exploring how people understand labels for manipulated media. Prior to joining PAI, Emily was UX Lead for The News Provenance Project at The New York Times R&D Lab, researching how contextual information can help audiences understand the authenticity of photojournalism. As a winner of Mozilla’s Reality Redrawn challenge on misinformation, she presented a documentary VR piece about filter bubbles at the Tech Museum of Innovation in San Jose. She has a Master’s in Human-Computer Interaction from Carnegie Mellon University and a B.A. in Linguistics from the University of California, Santa Cruz. She is interested in using research as art and art as research.

Lia Coleman

Lia Coleman is an artist, AI researcher, and educator based in Seattle. Her interests lie in AI for artwork, creative coding for public art, and online education. Lia is an alumna of the School for Poetic Computation (Fall 2019) and the Massachusetts Institute of Technology (BSc Computer Science, 2017). She has exhibited in Vancouver, Seattle, and Boston, and presented her research on AI at the NeurIPS Workshop on Creativity in 2019.

Claire Leibowicz

Claire Leibowicz leads AI and Media Integrity efforts at The Partnership on AI (PAI). Her current work seeks to mitigate the impact of mis/disinformation and effectively communicate media manipulation to the public. Claire holds a BA in Psychology and Computer Science, Phi Beta Kappa and Magna Cum Laude, from Harvard College, and a master’s degree in the Social Science of the Internet from Balliol College, University of Oxford, where she studied as a Clarendon Scholar.
