AI and Creative Companionship
Grishma Rao and Jenna Fizel reflect on their SXSW workshop, where participants made a book using a suite of custom AI tools.

An iceberg with enormous melting eyes appears on a page, floating in still, clear blue water. Let’s make it cuter, we tell the system. We do it over and over, negotiating, watching the eyes get bigger, until specular highlights appear and the hint of a smile emerges. Elsewhere, a woodcut polar bear eats enchanted berries and levitates out beyond the upper atmosphere, fulfilling his dreams of wandering through outer space. A clairvoyant network of cats roams a neon-lit Shanghai, standing guard, protecting the city against an alien invasion.

Somewhere there exists a door with the words “Nice Day” emblazoned on it, where anyone who enters can relive their happiest memories. A botanist on Mars is longing to return home, a skateboarding baby walrus wants to build the ultimate skate park, and the last remaining human on earth wakes up in a simulation and teaches the world to dream again.

Some of our favorite covers

These are only a sampling of the stories that began to take shape as 50‑odd participants from all over the world gathered with us at SXSW, Austin’s massive film-music-tech festival, for a morning‑long journey to write and illustrate books together, exploring what it might mean to approach and embrace AI as a creative collaborator rather than just a productivity tool. People came in with varying degrees of experience and exposure to AI tools: for some this was their very first time using an LLM, while others had been enthusiastically experimenting with these capabilities for a long time.

Some of the stories hit closer to reality (one recounted someone’s experience of attending SXSW), while others explored the outer bounds of the fantastical. The books spanned many genres, but all contained a sprinkle of magic in their narratives — literally and figuratively. They featured deeply human themes: identity, belonging, sacrifice, dreams, desire, and ambition; holding on and letting go; courage and kindness; existential self‑preservation; and what it might mean to head into a world where everything suddenly feels fast‑moving and uncertain.

Protagonists ranged from a coffee bean to a living quill to bracelets and necklaces and polar bears, each a conduit that could take you on a relatable emotional journey through a playful suspension of disbelief. The system our participants were working with, a custom suite of tools we designed, was loosely rigged to have structured conversations with its users to help refine a story idea and an art aesthetic, and then to generate and lay out an illustrated book. After the session, Blurb provided our participants with printed copies of their creations.

Our participants posing with their books

Collectively, the books our workshop participants created with AI felt heartwarming, imaginative, humorous, and even poignant.

This raises the question: can a story written in collaboration with AI hold meaning in the same way as a story written independently by a person? Does the collaboration take away from its authenticity?

How does it feel to feel like these stories mean something?

The Frame: Creative Companionship

The objective of this exercise was, in part, to raise questions: to confront the discomfort that, for many of us, pervades our interactions with AI systems, gnawing underneath the growing sense of awe.

Thinking of AI tools as collaborators or companions may not seem like much of a perspective shift, but it flips an internal switch with far‑reaching implications. It is a shift from thinking about what they can do to considering how we can work together. Suddenly, questions like “Who are they?” and “What are they like?” become relevant. We explored how companionship manifested in the collaborative process.

The process of co‑creating a book with AI begins by firing up a custom GPT we built, designed to conduct a structured conversation that takes ideas at varying levels of definition and helps shape them into the level of detail it takes to write a book. In a way, in designing these custom GPTs we were extending our own abilities, building from our experience of what makes a compelling story: a believable world with unique rules and power structures, a character with a desire and obstacles in the way, and something about the narrative that changes them by the end of the book.

We were testing as we were building, spitting out noir stories about a gang of sentient crocodiles smuggling diamond bracelets in an alt‑Florida where the humidity could warp reality, or a sci‑fi future mythology detailing a heavily ritualized religion centered on flipping USB cables. The initial GPT in turn used actions to speak to a custom web‑based book design tool, which then populated a skeletal book based on the outputs of the conversation. Writers could adjust the genre, the mood, the physical or emotional attributes of the character, the plot lines, and finally the actual sentences.
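
For a rough sense of how that handshake might look, here is a hypothetical TypeScript sketch: a structured story spec handed from the conversation to the design tool, which scaffolds a skeletal book. The type names, fields, and scaffolding logic are illustrative assumptions, not the actual tool we built.

```typescript
// Hypothetical sketch of the "action" handshake: the custom GPT posts a
// structured story spec to the book design tool, which scaffolds a
// skeletal book the writer can then edit page by page.
// Names and shapes here are illustrative assumptions, not the real tool.

interface StorySpec {
  title: string;
  genre: string;             // e.g. "noir", "sci-fi mythology"
  mood: string;              // e.g. "wistful", "deadpan"
  protagonist: {
    name: string;
    physicalTraits: string[];
    emotionalTraits: string[];
    desire: string;          // what they want
    obstacle: string;        // what stands in the way
    transformation: string;  // how the story changes them
  };
  plotBeats: string[];       // one beat per illustrated spread
}

interface BookPage {
  pageNumber: number;
  text: string;               // draft sentence(s) the writer can rewrite
  illustrationPrompt: string; // seed for the art pipeline
}

// Populate a skeletal book: one page per plot beat, with placeholder
// prose and an illustration prompt that carries the mood and genre.
function scaffoldBook(spec: StorySpec): BookPage[] {
  return spec.plotBeats.map((beat, i) => ({
    pageNumber: i + 1,
    text: `${spec.protagonist.name} ${beat}`,
    illustrationPrompt: `${spec.genre}, ${spec.mood}: ${beat}`,
  }));
}

// Example spec of the kind the structured conversation aims to produce.
const example: StorySpec = {
  title: "The Crocodiles of Alt-Florida",
  genre: "humid noir",
  mood: "deadpan",
  protagonist: {
    name: "Sal the crocodile",
    physicalTraits: ["scarred snout"],
    emotionalTraits: ["loyal", "tired"],
    desire: "one last diamond-bracelet run",
    obstacle: "humidity that warps reality",
    transformation: "learns when to walk away",
  },
  plotBeats: [
    "takes the job against his better judgment",
    "watches the swamp air bend the skyline",
    "leaves the bracelets on the dock and goes home",
  ],
};

console.log(scaffoldBook(example));
```

The point of a spec like this is that every field stays editable by the writer: the conversation proposes, the writer decides.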

Our visualization of our writing companion

We used Cursor and Claude, along with manually written code, to build a custom interface that spoke to Luma, generated an aesthetic and a consistent art style based on the initial conversation, and allowed the user to adjust the illustration style, colors, typography, and placements before sending the book to be published.
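
A minimal sketch of the style-consistency piece, assuming a generic HTTP image-generation endpoint: the URL, payload fields, and response shape below are placeholders rather than Luma’s actual API. The idea is simply that every page request carries the same agreed-upon art direction as a prefix, while typography and placement stay on the layout side.

```typescript
// Minimal sketch of keeping illustrations stylistically consistent:
// every page request is prefixed with the same style description agreed
// on in the initial conversation. The endpoint URL, payload, and response
// fields are placeholder assumptions, not Luma's actual API.

interface ArtDirection {
  style: string;      // e.g. "woodcut print, heavy grain"
  palette: string;    // e.g. "ink black, arctic blue, warm cream"
  typography: string; // handled in layout, not sent to the image model
}

async function requestIllustration(
  direction: ArtDirection,
  pagePrompt: string,
  apiKey: string,
): Promise<string> {
  // One shared style prefix keeps the book visually coherent page to page.
  const prompt = `${direction.style}, palette of ${direction.palette}. ${pagePrompt}`;

  const res = await fetch("https://example.com/v1/generations/image", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ prompt, aspect_ratio: "4:3" }),
  });
  if (!res.ok) throw new Error(`Image request failed: ${res.status}`);

  const data = await res.json();
  return data.imageUrl; // placeholder field name
}
```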

The origin of this exploration was Danny DeRuntz’s experiment in writing a children's book about pigs, infused with the spirit of a brick that wanted to become something. It was a collaboration that felt more manual, cobbling together a wide set of tools for various purposes: gingerly approaching GPT with traces of an idea and letting that spill forth into an entire self‑published book.

The specific themes we’ve seen throughout these experiments are empowerment (creating structures that help you better articulate yourself), encouragement (providing reassurance or validation when you’re embarking upon a task outside your comfort zone), and extension of ability (filling in the gaps to extend your intention).

So, at SXSW, we were exploring what might happen in a world where a creative companion is always available. What qualities might one look for in a creative collaborator? And when that collaborator is AI, what additional variables or degrees of control might it afford you?

This was important. We wanted to understand “who” it was we were collaborating with, and what it is they bring to the table. Because these models generate language, it’s easy to fall into a rhythm of endowing them with humanity, anthropomorphizing them as human creative partners. But it was also interesting to watch that facade crack, and to see the inhuman parts of the LLM shine through.

There are baked-in behaviors that feel human, including unprompted expressions of personality, opinions, and a sense of humor. Glimmers of self‑awareness are present as well: a keen eye for when the LLM perceives it is being tested, a defined point of view or way of seeing the world, an emotional tone. While this makes us wonder how close we are to turning the Chinese room thought experiment into a practical one, it’s also important to pay attention to the ways in which these collaborators are machines, and to remember that there isn’t a person on the other side of the screen. Their speed and patience, and our ability to wipe their experience and start over, can feel just as magical and uncanny as when they crack a joke.

But our definition of what counts as human and what counts as inhuman isn’t a fixed one; or at least, what we think of as inhuman can surprise us with its intimacy. Rachel Cusk wrote about this with regard to automated voices: “There has been a great harvest of language and information from life, and it may have become the case that the faux‑human was growing more substantial and more relational than the original, that there was more tenderness to be had from a machine than from one’s fellow man. After all, the mechanized interface was the distillation not of one human but of many.”

We don’t often associate tenderness with machines, but perhaps that’s because of our own myopia. What is it that we’re relating to when we find ourselves startled by an LLM?

Who is writing these books, really?

Through our own experimenting, we have found that a constant of working with AI is its inescapable encouragement. The more familiarity you develop with the AI, and the more familiarity it develops with you, the more it can extend you and act as a prosthesis.

The prospect of the blank page is formidable for even the most seasoned writers. A common point of failure is a lack of structure. Without clear intentions or a starting point, the constant barrage of positive reinforcement AI systems provide can become frustrating when you can’t clarify your idea.

However, creativity thrives within the right set of constraints — being asked the right questions can help articulate a fledgling idea and show you the choices you need to make to give it form.

A goal we had for this workshop was to make the process and outcome go beyond the realm of hypothetical discussion, for each person to experience it personally, and have a tangible artifact by which to remember it. The tools we created provided that initial layer of structure. They were flexibly designed, asking questions based on the level of definition of the idea, combined with expert intention: guidelines on what went into a compelling story, what might create a striking aesthetic.

That structure clarifies the learning journey of both writing a story and understanding AI tools. Dynamic feedback loops between writer and machine, editing and calibrating components of your story, help break a daunting task down into something more approachable and give meaning to the confidence the tool has in you. This can extend your own sense of capability.

As participants moved through the process, we asked them to reflect: How did this change your perception of authorship, ownership, and creativity? When you look at your finished product, a book you can be proud of — who wrote this book? Our workshop included dramatic readings of excerpts from Roald Dahl’s The Great Automatic Grammatizator, a short story exploring the mechanization of art. A young inventor and aspiring writer sets out to build a fiction‑writing machine, distilling the elements of a successful novel with mathematical precision into a series of dials for theme, genre, and literary style, with foot‑pedal‑operated sliders to regulate intensity and passion. The story imagines a world where the publishing industry is slowly replaced by efficient, mechanically generated books, and artists outsource their stylistic likenesses, having work generated under their names in perpetuity.

We should be alert to this. Writers as diverse as Rilke and Ted Chiang have thought about what is distinctly human in the desire to describe our experiences, and how technology might alter what we take to be a very fundamental fact of what it means to be a person. We might worry that we lose something of the metaphysical quality of writing: the capturing of the soul, an act of bearing witness and of understanding the world and the self. There is discomfort in the idea that we might become bystanders in our own lives.

But the tools we created are intended to make a state of flow more accessible, to make our capacity to write larger and more generous. They were built not to be too prescriptive — designed to make you think, to pull something unexpected out of you, and to show you what was yours. But, by their nature, using tools like these should make us think hard about how creativity works.

Upon finishing his illustrated book, one of our participants expressed joy at what these tools might enable: infinite custom‑generated bedtime stories for his four children, and the possibility that they could write their own books.

The Delegation Question

When attempting to consolidate our reflections for this article, Jenna and I asked GPT to summarize some of our thoughts and highlights from this work. When I looked at that summary, I felt a sense of awe accompanied by a primal survival instinct kicking in, an aversion on behalf of humanity. I found myself impressed but reflexively frowning at the screen, displeased that an AI’s writing was so often becoming indistinguishable from a human’s.

Our workshop came on the heels of a study about how outsourcing cognitively complex tasks to AI can atrophy our brains and reduce our critical thinking abilities. As new reasoning capabilities come into existence and become available for public consumption, there is a gold rush to productize them.

Just the day before, GPT-4.5 had demonstrated the ability to write a believable piece of metafiction. What had started as more of a speculative experiment was now inching closer and closer to uncomfortable areas of overlap with human creativity, which brought a sense of urgency to the questions we were confronting. This might be a natural stage in human evolution. In the book Technics and Time, 1, Bernard Stiegler proposes that humanity evolves in conjunction both biologically and technologically — living matter (biological) and inert matter (technological). He describes this entangling as “a pursuit of life by a means other than life.” In the same way that we tend to outsource mental math to calculators, we’re now beginning to delegate more intimate functions of our consciousness to external systems. It’s a little murkier when we tread into the realm of the creative and emotional, but it is another form of delegating aspects of our mind.

Delegation isn’t the only way to frame this. The extended mind thesis (Clark and Chalmers) argues that objects in the environment can function as part of the cognitive process. The mind and the environment act as a “coupled system” that can be seen as a complete cognitive system in its own right. Our smartphones, Google Maps, and the entirety of the internet are an exosystem of the mind that cannot be cleanly separated from traditional notions of an internal self. We disperse little bits of ourselves out into the world, extending our idea of selfhood to incorporate not just our bodies but all of the things around us that we use to navigate our environment. This theory asks us to see ourselves as more than just what’s captured in flesh and blood.

If we allow ourselves to imagine a world of human and technical kinship, a kind of dual-sided evolution of how we think of each of these terms, we see that atrophy isn’t inevitable, and that we don’t need to delegate the interesting stuff so we can focus on mindless nonsense. Instead, we can imagine an accelerated growth environment for every seed one might ever have dreamt of planting, one that allows everyone to become braver.

In our workshop we saw, firsthand, people who previously believed they might not be able to write and publish a children's book on their own bring forth their core personal creative abilities with ease. Having access to AI tools lowers the barrier to acting on ideas in unfamiliar domains, allowing for a democratization of making that isn’t just cold automation.

The Ballet of Surrender

A theme we keep running into in human-machine interaction and collaboration, especially in the context of creative or emotional endeavors, is control.

A few years ago, I participated in a robot ballet: an experiment in relinquishing physical control to the whims of a heavy exoskeleton, engulfed in a dark techno hellscape, performing for an audience of humans. Light projected onto mist upon the concrete floor of a warehouse‑like auditorium. People were strapped one by one into metal cages suspended from the ceiling by cables. The robots were given the simple task of embodying grace and elegance, with choreographed movements to perform, carrying the human occupants of their bodies along with them. Our task was to become tender, light as a feather, integrating with the music. In that moment, our only choice was to let beauty emerge by giving in.

Artificial intelligence in popular culture often appears adversarial. We conjure robot overlords out of species‑level self‑preservation. Some of us revel in the exponential leaps in utility, while others brood over the potential ruthlessness of AGI. The mechanical ballet was an exercise in letting go of resistance: the human became the ghost in the shell, momentarily unable to impose their will upon the world and discovering, in that surrender, an unexpected elegance.

As we confront the realities of working with capabilities that may surpass us, we will need some degree of letting go, and new forms of growth. We will soon open ourselves up to a form of shared choreography.