From its very inception, the MIT Computational Law Report has prioritized featuring rich media as one of our three modes of content, along with written works and reproducible data and software. Our two primary sources of media content are Idea Flow, a monthly flash-talk and discussion video series, and The law.MIT.edu Podcast audio series. The Podcast is structured in a broadcast format, featuring interviews, creative short segments connecting computational law with everyday life, and updates on the Computational Law Report from our Editor-in-Chief, Bryan Wilson. Idea Flow is structured as an interactive discussion series with a live audience invited from the burgeoning computational law community at MIT and beyond, anchored around short flash-talks presented by thought leaders on key emerging ideas. Helping to bring this publication's media content forward is truly my fondest contribution as Executive Director.
Know someone who might be interested in joining future Idea Flow sessions? We've got a signup form you can share, right here: https://forms.gle/J1qBAxnRP2whTVce9
— Dazza Greenwood
Most Recent Idea Flow: “Artificial.FM”
Date: May 28, 2021, 11:00 AM–12:00 PM ET
Presenters: Ziv Epstein (MIT), Robert Mahari (Harvard Law, MIT)
Artificial.fm is an experimental platform that explores a new medium: AI radio. Developed by the MIT Media Lab, the platform hosts stations that play songs generated by AI. These AI-generated songs are made in a collaboration between up-and-coming musicians, a deep neural network, and crowdsourced rating labels. In addition to listening to this new kind of music, users can also provide feedback on the generated songs, thus helping the AI learn to generate better music in the future.
In particular, artificial.fm uses OpenAI's Jukebox, a generative deep neural network trained on 1.2 million songs, for music generation [1]. Jukebox requires as input a "prime" of existing music, which it then "improvises" on top of. We solicit such primes from local musicians we contact as part of a collaboration to support artists affected by the pandemic. The outputs of this process will be streamed via the platform, where listeners can provide subjective feedback on the quality of the AI-generated outputs. This crowdsourced feedback will then be used to further adapt the generation process and find the "gems in the rough" (using the algorithm outlined in [2]). This process involves (at least) four distinct actors in the production of the outputs: 1) the creators of the music on which Jukebox is trained, 2) the creators of the "primes," 3) the crowd who collectively help find the hidden gems, and 4) the artificial.fm team who curate the process.
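To make the crowdsourcing step concrete, here is a minimal sketch of how listener ratings might be aggregated to surface the "gems in the rough." This is an illustration only: it assumes a simple mean-rating heuristic and hypothetical clip identifiers, whereas artificial.fm's actual selection uses the interpolation-based algorithm cited above.

```python
from collections import defaultdict
from statistics import mean

def find_gems(ratings, top_k=2):
    """Rank AI-generated clips by average crowd score.

    ratings: list of (clip_id, score) pairs collected from listeners.
    Returns the top_k clip ids, highest average score first.
    NOTE: a simplified stand-in for the platform's real algorithm.
    """
    by_clip = defaultdict(list)
    for clip_id, score in ratings:
        by_clip[clip_id].append(score)
    # Sort clip ids by their mean listener rating, descending.
    ranked = sorted(by_clip, key=lambda c: mean(by_clip[c]), reverse=True)
    return ranked[:top_k]

# Example: three hypothetical clips rated by several listeners.
votes = [("clip_a", 4), ("clip_a", 5), ("clip_b", 2),
         ("clip_b", 3), ("clip_c", 5), ("clip_c", 4)]
print(find_gems(votes, top_k=2))  # → ['clip_a', 'clip_c']
```

The point of the sketch is simply that each listener's subjective feedback becomes a label, and aggregating many such labels lets the platform promote the outputs the crowd judges best.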
The project speculatively interrogates authorship in the entangled, complex, and emerging context of AI-generated music. Building on recent work in legal studies [3] and the behavioral sciences [4], we explore who gets credit for the content of the platform (see [5] for a review). How much of the code/weights must be changed before Jukebox's outputs are no longer covered by OpenAI's license? Does crowdsourcing entail "sweat of the brow" in finding high-quality outputs? Finally, we explore novel "data cooperative" frameworks for the distributed ownership of such assets.
[1] Dhariwal, Prafulla, et al. "Jukebox: A generative model for music." arXiv preprint arXiv:2005.00341 (2020).
[2] Epstein, Ziv, et al. "Interpolating GANs to scaffold autotelic creativity." arXiv preprint arXiv:2007.11119 (2020).
[3] Bridy, Annemarie. "Coding creativity: Copyright and the artificially intelligent author." Stan. Tech. L. Rev. (2012): 5.
[4] Epstein, Ziv, et al. "Who gets credit for AI-generated art?" iScience 23.9 (2020): 101515.
[5] Eshraghian, Jason K. "Human ownership of artificial creativity." Nature Machine Intelligence 2.3 (2020): 157-160.
Sign up to join by Zoom at: https://forms.gle/J1qBAxnRP2whTVce9