Last fall, we hosted a research session with Gen-Z students in our Cambridge studio. When we broached the topic of technology, the students were unanimously against anything with the word “AI” in it. With AI foisted upon them, and with so little say in its development, the students were looking toward forms of interaction that brought them closer to each other and to the world around them. So it was a bit of a surprise when the one piece of technology they didn’t revolt against was a pair of interactive glasses – an XR headset that quietly leveraged AI for real-time insights.
Why was this surprising? Our students’ mental model of this kind of technology was that it induces isolation. The glasses, ironically, were perceived to demand less of their direct attention, instead providing more time, space, and capacity for genuine human interaction. What we saw in the students’ interest in the glasses was a desire for a greater feeling of immersion in the world around them and in the places where they live. To make the invisible connections visible.
So What Did We Do?
The AR glasses, the Snap Spectacles, are hardware meant to be worn outside – you are mostly seeing the real world around you. So I thought about how I might create a kind of contextual overlay for the places around Cambridge, letting a user walk through the city and see pieces of information they might otherwise have missed.
I built a Snap Spectacles Lens using knowledge pulled from Atlas Obscura. While walking around the city, the Lens displays different geographical waypoints that you can follow to their origins, or learn more about the history of certain places that show up on the overlay. You might walk by an oddly-shaped home and wonder why it looks the way it does – the Lens overlay tells you it’s the “O’Reilly Spite House,” a house so narrow it looks like the product of an architectural feud. Or maybe you walk past the Harvard Museum of Natural History and see what looks like flowers lying on a table. The Lens overlay tells you that these are, in fact, made of glass, impossibly delicate and detailed, and made in the 1800s.
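The core of an overlay like this is simple proximity logic: given the wearer’s GPS position, surface only the waypoints close enough to matter. The sketch below shows one way that filtering could work. It is illustrative, not the actual Lens code – the `Waypoint` shape, the sample coordinates, and the function names are assumptions, and a real Lens would get position updates from Lens Studio’s location APIs rather than hard-coded values.

```typescript
// Hypothetical sketch of the Lens's proximity filter.
// Data shape and coordinates are illustrative, not from the actual project.
interface Waypoint {
  name: string;
  lat: number;
  lon: number;
  blurb: string;
}

// Haversine great-circle distance in meters between two lat/lon points.
function distanceMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Return waypoints within radiusMeters, nearest first, so the overlay
// can show the closest few without cluttering the wearer's view.
function nearbyWaypoints(
  userLat: number,
  userLon: number,
  waypoints: Waypoint[],
  radiusMeters: number
): Waypoint[] {
  return waypoints
    .map((w) => ({ w, d: distanceMeters(userLat, userLon, w.lat, w.lon) }))
    .filter(({ d }) => d <= radiusMeters)
    .sort((a, b) => a.d - b.d)
    .map(({ w }) => w);
}

// Illustrative data; coordinates are approximate.
const waypoints: Waypoint[] = [
  { name: "O'Reilly Spite House", lat: 42.3744, lon: -71.119, blurb: "A famously narrow house." },
  { name: "Glass Flowers", lat: 42.3784, lon: -71.1156, blurb: "19th-century glass botanical models." },
];

// Standing nearby, ask for waypoints within 500 m.
const hits = nearbyWaypoints(42.3736, -71.119, waypoints, 500);
console.log(hits.map((w) => w.name));
```

In practice the radius and sort order become design decisions: too wide a radius and the overlay crowds your field of view, too narrow and the serendipity that makes the experience work disappears.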
These are not moments meant to take you away from the world around you, but to help you experience a more vibrant reality within it by calling your attention to the hidden things around you. This, of course, raises a host of further questions. What happens when we share the same physical spaces but different virtual ones? How are we using technology to grant people the independence to experience the world on their own terms? For example, the Be My Eyes and OpenAI collaboration is an early use of multimodal LLMs: the project uses OpenAI’s GPT-4 to assist blind and low-vision individuals by interpreting and describing visual data in real time. And how does making these connections change your relationship to your community, to your history, and to the people around you?
These are questions that can only be answered by experiencing their weight through real experiments. What the enthusiasm for glasses over chatbots revealed is that there is still room for the real promise of connectivity embedded in new technology. Ideas often couched in language that feels overwhelming and anxiety-inducing can be engaged with in a way that is practical, even delightful.
So I built a prototype. And, after living here for decades, I learned things about my neighborhood I had overlooked every single day. We are enmeshed in a cross-generational web of stories, histories, and lives of which we only barely scratch the surface. Accessing that archive, I felt my world expand: it was clear that my neighbors and city are far more interesting than I usually think. Far from stealing me away from my life, it let me delve deeper into the context and connections that enrich the sensation of living.