Beginning around 2019 with his first introduction to Google Magenta, an open source research project that explores the role of machine learning in the music-making process, Ethan took us through 10 tools that touched on the different ways companies are using AI to do everything from creating soundscapes and swapping voices to writing lyrics and composing melodies. Tools like DDSP use neural synthesis to re-render a sound in the timbre of a different instrument -- so you can play something on the guitar and have the pitch and dynamics preserved as the guitar "becomes" a clarinet (sketched below). There are also use cases in which a neural network trained on mountains of piano music can duet with you, responding to the notes you're playing in real time with notes of its own.

Another major use case was taking voice input and making it sound like something else: a voice becomes a violin; a dog bark becomes a creaking door. We covered ElevenLabs and its text-to-speech technology, including the ability to clone voices, while also looking ahead to tools like Google's MusicLM and Suno's diffusion models and their ability to translate text into music. The session ended with a set of hands-on experiments with the sound design tools and a discussion of the ethical and legal considerations around copyright infringement and artistic practice.
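
For anyone curious what's happening under the hood of that guitar-to-clarinet trick, here is a minimal sketch of the idea rather than the actual Magenta DDSP library: the real models use a trained neural network to predict time-varying harmonic amplitudes and filtered noise, while this stand-in simply extracts pitch and loudness with librosa and drives a toy harmonic synthesizer with them. The input file path and the harmonic weights are placeholders.

```python
# Simplified, non-neural sketch of DDSP-style timbre transfer:
# extract pitch and loudness from a recording, then drive a different
# "instrument" (here, a toy harmonic synthesizer) with those same controls.
import numpy as np
import librosa
import soundfile as sf

SR = 16000
INPUT_PATH = "guitar_phrase.wav"  # placeholder input recording

# 1. Load audio and extract frame-wise pitch (f0) and loudness (RMS).
audio, _ = librosa.load(INPUT_PATH, sr=SR, mono=True)
f0, voiced, _ = librosa.pyin(
    audio,
    fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C6"),
    sr=SR,
)
rms = librosa.feature.rms(y=audio)[0]

# Align the two feature tracks (both use a default hop of 512 samples).
n_frames = min(len(f0), len(rms))
f0, voiced, rms = f0[:n_frames], voiced[:n_frames], rms[:n_frames]
f0 = np.where(voiced, np.nan_to_num(f0), 0.0)  # silence unvoiced frames

# 2. Upsample the frame-wise controls to audio rate.
hop = 512
frame_times = np.arange(n_frames) * hop
sample_times = np.arange(n_frames * hop)
f0_audio = np.interp(sample_times, frame_times, f0)
amp_audio = np.interp(sample_times, frame_times, rms)

# 3. Resynthesize with a different timbre: a fixed harmonic recipe standing
#    in for the "clarinet" (odd harmonics dominate in a clarinet-like tone).
harmonic_weights = {1: 1.0, 3: 0.6, 5: 0.35, 7: 0.2}  # placeholder timbre
phase = 2 * np.pi * np.cumsum(f0_audio) / SR
out = np.zeros_like(f0_audio)
for h, w in harmonic_weights.items():
    out += w * np.sin(h * phase)
out *= amp_audio
out /= max(np.max(np.abs(out)), 1e-9)  # normalize

sf.write("clarinet_like_resynthesis.wav", out, SR)
```

Swapping out the harmonic_weights recipe changes the "instrument"; in the real DDSP tools that role is played by a model trained on recordings of the target instrument, which is what makes the transferred timbre convincing.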