This session is a casual discussion among the Emerging Wave group about OpenAI's rapid product releases. The group starts by examining the new multimodal version of real-time voice mode in the OpenAI app, which lets users point their camera at their surroundings while speaking with the assistant. They check who has received the update and discuss its potential uses, such as identifying objects or people.

The group then moves on to Apple Intelligence, focusing on the Image Playground feature, which lets users create AI-generated images of themselves in different themes (e.g., as a robot). They also touch on other Apple Intelligence features, such as Siri's integration with ChatGPT, and on OpenAI's Canvas, which can now execute Python code directly in the browser.

A significant portion of the discussion revolves around Sora, OpenAI's text-to-video model. Tanya, who has access, demonstrates it by creating videos from prompts suggested by the group, including a melting candle, a brainstorm session with post-it notes, and a psychedelic flying train. They then explore Sora's editing features, such as storyboarding, recutting, remixing, blending, and looping, and discuss downloading and sharing the generated videos.