Augmented Lookism
What a growing tech trend can teach us about emerging selfhood

There’s a tension pulled so tight it threatens to snap: between the way we understand ourselves in digital spaces and the way we understand ourselves in physical ones. The fluidity of our personal identities is amplified online: we are many things to many people. The same is true of our lives in the physical world, but there that fluidity is constrained by the laws of physics in a way it is not when we are online.

The “Generative” part of “Generative AI” only deepens this division. One sees it in avatars, “profile pictures in the style of… [insert artist],” identity crises driven by long conversations with LLMs. We can be anyone or anything we want, but does that mean we lose all solidity of a sense of who we are? Or do established norms for certain online identities begin to backfire and cause us to be even more prescriptive about what is and is not an acceptable identity? Our perfectly curated avatars versus our decidedly imperfect lives?

I want to explore these questions and their ramifications for the human side of technological existentialism through the lens of VTubers: a vast subculture of online cosplay, currently most popular in Japan and China but slowly spreading worldwide, threatening to become a multi-billion-dollar market and a major cultural force in its own right.

The popularity of VTubers indicates that who we are and how we express ourselves are evolving, an evolution only intensified by recent technological advances.

The Digital Mask

VTubers began in Japan as a playful subculture: people putting on animated “skins” to sing, chat, and game without revealing their faces. It was cosplay plus livestream, equal parts anonymity and performance. Today, that experiment has scaled into an industry. YouTube reports that just 300 virtual creators draw more than 15 billion views a year, with major agencies like Hololive and Nijisanji generating blockbuster revenues. Analysts project the market to reach $4.5 billion by 2030. The mask has become a platform.

A virtual mask is not just a graphic overlay. It’s a tool that lets people both conceal and amplify themselves. Behind the mask, creators inhabit a “third presence,” neither their offline self nor pure fiction, but something in between.

Yet the ways people use these masks diverge.

Some treat avatars as a prism, projecting hidden aspects of themselves, exploring gender, experimenting with voice, performing the “self they always felt inside.”

Others treat avatars as costumes, inventing entirely new characters with quirks, backstories, and aesthetics far from their everyday lives.

Still others keep avatars close to home, recreating their own physical appearance almost one-to-one, as if carrying their offline body into the digital stage.

The mask, then, is both mirror and stage. It can be a doorway to self, a portal into fiction, or simply a continuation of one’s offline image.

Augmented Lookism

But not all masks are neutral. Bias doesn’t disappear when we step into digital space. It mutates.

Avatars on the major platforms are often flawless: kawaii, eternally youthful, immune to age, imperfection, or fatigue. They don’t just express identity. They prescribe it. The synthetic face becomes the baseline.

This is augmented lookism: when virtual masks amplify beauty standards beyond human limits. Instead of democratizing identity, they risk algorithmically standardizing it, creating replicas that come to be treated as replacements for human faces.

The risk is cultural as much as aesthetic. If perfect digital masks dominate, how do human creators — with their wrinkles, slips, and stumbles — measure up? And how do audiences, constantly comparing themselves to flawless avatars, begin to see their own faces? What crises of identity will emerge when you can “become” anyone, but only in certain contexts? To which self do you ascribe your “true self”?

Beyond Entertainment

When affection flows mainly to flawless, idol-like personas, we risk encoding those same beauty standards into both virtual masks and human culture. Augmented lookism becomes self-reinforcing. Our cheers, critiques, and comments teach us to accept authenticity as polished performance — and beauty as endlessly standardized through virtual masks.

These dynamics won’t stay confined to streaming. They are likely to spill into classrooms, clinics, workplaces, and eventually into many other parts of everyday life. A teaching avatar built on stereotypical “teacher” visuals could alienate students. A healthcare avatar modeled on idealized standards might deepen patient insecurity rather than provide comfort. Even in professional or civic spaces, digital stand-ins risk becoming so polished that they signal authority but lose relatability.

So the challenge is not only how these masks are received, but the assumptions built into their design. 

What roles do we imagine when we give a face to “teacher,” “caregiver,” or “leader”? And do those imagined masks liberate, or do they reinforce old biases? As virtual avatars grow in popularity, we must be cautious about standardizing a narrow set of assumptions about how someone should look. If the goal is for identities to proliferate, to allow for more generosity in our acceptance of others, we cannot indulge the dark underside of our all-too-human tendency to project perfectionism.

When AI Wears the Mask

But what does this actually look like in practice? As identities become either fluid or calcified depending on how we treat them, we need to ask: what happens to our notion of selfhood when there is no human behind the mask at all?

Neuro-sama, an AI streamer with no human operator, plays Minecraft and Osu! in front of thousands of people. Bloo, another AI-driven VTuber, mimics human banter and expression. 

Set against human-operated stars like Gawr Gura or Houshou Marine, these AI-first characters pose a provocative question: How do audiences respond to AI-driven characters?

Neuro-sama playing a game during a live stream

To explore this, I analyzed over 11,000 YouTube comments collected with the YouTube Data API v3 (the full procedure is described in Data and Method below). For each channel, the ten most recent videos were sampled, and up to 500 top-level comments per video were retrieved (including selected replies when available). Comments were divided across three identity types:

  1. Fully AI (Neuro-sama) – an autonomous machine persona.
  2. AI simulating human performance (Bloo) – an AI designed to feel “human-like.”
  3. Human-operated avatars (Gura, Marine, Kuzuha) – performers behind digital skins.

What the Comments Reveal

Patterns diverge clearly:

Fully AI (Neuro-sama)

Only ~4% of comments contained affectionate language (“love,” “cute”). In contrast, 17.5% explicitly referenced her AI nature — “AI,” “bot,” “uncanny.” Viewers framed her existence itself as the spectacle: fascinating, odd, sometimes unsettling.

AI as Human Mimic (Bloo)

34% of comments expressed affection (hearts, “love it!!”), while only 5.7% mentioned AI at all. Bloo is treated less as “technology” and more as an entertaining character. The mask successfully eclipses the machine.

Human Avatars (Gura, Marine, Kuzuha)

Roughly 22% of comments carried affection. Marine’s stream in particular saw “かわいい (kawaii)” appear in 22% of all comments. Here, audiences invested deeply in parasocial intimacy — thanking, cheering, and even confessing love.

AI performers are judged for what they are: oddly affective empty shells. Human performers are loved for who they are: people to whom someone can relate.

Imperfection as Sincerity

Even so, affection does emerge for AI channels over time. One fan wrote of Neuro-sama: “She’s so weird, but that’s why she’s lovable.” Here imperfection, the slip, the glitch, the mis-timed joke, reads as sincerity. A technical fault becomes a point of charm.

This flips a design instinct: instead of eliminating the uncanny, perhaps we should design for optimal strangeness, that sweet spot where AI quirks invite delight rather than distrust. 

Perhaps it is time we understood the unsettling side of AI, its misfires and hallucinations, as something that, counterintuitively, endears it to us. There is a relational capacity in the interplay of chance and misinterpretation: fertile ground for creativity.

The True Self Paradox

Considering the myriad ways identity is explored through VTubers, from concerns about bias to the unsettling, strange relationships we might form with an AI-driven shell, we arrive at a paradox.

For some, masks feel authentic: a safer way to be seen, even to feel more real than real life.

For others, masks are playful fictions: a chance to perform someone completely different.

Both experiences are valid. But both collide when avatars trend toward hyper-idealized forms. Whether you are becoming more yourself, or becoming someone else, the same gravitational pull applies. The template of perfection.

Liberation and conformity march in lockstep.

That is the paradox of synthetic authenticity. The feeling of being “truly ourselves” through masks that are never fully ours, masks that quietly script what authenticity should look like.

Curtain Call

Virtual masks are not about whether machines have selves. They are about the selves we allow to emerge when we perform through them.

Digital masks can be liberating, offering safety, creativity, and new forms of play. But they can also be prescriptive, embedding standards that no human can live up to.

Perhaps, then, we’re asking questions at the wrong altitude. Instead of trying to understand whether our avatars can make us more authentic, we should try to understand what we are doing with the identities we create for ourselves.

Think instead about the communities that form around these performers. The shared sense of identity of fandom, and the encouragement to explore the outer edges of one’s self that such communities provide. 

Then it would make sense to think of the future of identity as, yes, a contest between fluidity and rigidity, but also as a reminder that identities are never formed in isolation. That new collectives create new types of identities. That, though we may all contain multitudes, they are tempered by the multitudes we surround ourselves with.

Because if the mask allows for a self to perform, then we need to ask: perform for whom?

Data and Method

This study draws on publicly available comments from official YouTube channels representing three different forms of virtual identity construction:

  1. Fully AI-generated identity – Neuro-sama (an autonomous AI streamer).
  2. AI simulating human persona – Bloo (an AI character designed to mimic human expression).
  3. Human-operated avatar identity – Gawr Gura (Hololive EN), Houshou Marine (Hololive JP), and Kuzuha (Nijisanji JP).

Data was collected in August 2025 using the YouTube Data API v3. For each channel, the ten most recent videos were sampled, and up to 500 top-level comments per video were retrieved (including selected replies when available). The resulting dataset comprises approximately 11,000 comments in total. Each record includes the comment text, publication date, and metadata such as like counts. Only textual content was analyzed; usernames or personal identifiers were excluded to ensure anonymity.
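The collection step can be sketched as follows. This is a minimal illustration, not the study’s actual code: it assumes the response shape of the YouTube Data API v3 `commentThreads.list` endpoint, and the page-fetching function is injected (hypothetical `fetch_page`) so the pagination and per-video cap logic stand on their own without network access.

```python
def collect_comments(video_id, fetch_page, cap=500):
    """Collect up to `cap` top-level comments for one video, following
    the API's `nextPageToken` pagination until the cap or the last page.

    `fetch_page(video_id, token)` should return one commentThreads.list
    response as a dict; it is injected here for illustration/testing."""
    comments, token = [], None
    while len(comments) < cap:
        page = fetch_page(video_id, token)
        for item in page.get("items", []):
            # Top-level comment fields per the commentThreads.list schema.
            snippet = item["snippet"]["topLevelComment"]["snippet"]
            comments.append({
                "text": snippet["textDisplay"],
                "published": snippet["publishedAt"],
                "likes": snippet["likeCount"],
            })
            if len(comments) >= cap:
                break
        token = page.get("nextPageToken")
        if token is None:  # no further pages
            break
    return comments
```

In a real run, `fetch_page` would wrap an authenticated `GET` to the `commentThreads` endpoint with `part=snippet`, the video ID, and the page token.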

Comments were pre-processed by lowercasing and simple normalization. Two sets of keyword lexicons were then applied:

  • Affective expressions: tokens such as “love,” “cute,” “adorable,” “大好き,” “かわいい,” and heart emojis.
  • AI-related references: tokens such as “AI,” “bot,” “algorithm,” “uncanny,” “weird,” and related descriptors.
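The lexicon step above might look like this sketch: lowercase each comment, then test it against the two keyword sets. The lexicons shown are only the example tokens named in the text, not the study’s full lists; English tokens are matched on word boundaries so that, say, “said” does not count as a hit for “AI”.

```python
import re

# Example tokens from the text; the study's actual lexicons were larger.
AFFECTIVE = {"love", "cute", "adorable", "大好き", "かわいい", "❤", "💕"}
AI_RELATED = {"ai", "bot", "algorithm", "uncanny", "weird"}

def tag_comment(text):
    """Flag one comment for affective and AI-related lexicon hits."""
    lowered = text.lower()

    def hit(tok):
        if tok.isascii() and tok.isalpha():
            # Word-boundary match for English tokens.
            return re.search(rf"\b{re.escape(tok)}\b", lowered) is not None
        # Substring match for Japanese tokens and emoji.
        return tok in lowered

    return {
        "affective": any(hit(t) for t in AFFECTIVE),
        "ai_related": any(hit(t) for t in AI_RELATED),
    }
```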

For each identity type, we calculated:

  • Affection rate: the proportion of comments containing affective expressions.
  • AI reference rate: the proportion of comments explicitly referring to the AI nature of the performer.
  • Sentiment distribution (for English comments): positive, negative, and neutral shares using the VADER sentiment analysis tool.
  • Language distribution: proportion of comments in English versus Japanese, identified via automated language detection.
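The first two rates reduce to simple proportions over the tagged comments, as in this sketch (the sentiment and language-detection steps, which rely on VADER and an automated language detector, would run separately and are omitted here):

```python
def rates(tagged):
    """Compute affection and AI-reference rates for one identity type.

    `tagged` is a list of dicts like {"affective": bool, "ai_related": bool},
    one per comment (the shape is assumed for this sketch)."""
    n = len(tagged)
    if n == 0:
        return {"affection_rate": 0.0, "ai_reference_rate": 0.0}
    return {
        "affection_rate": sum(c["affective"] for c in tagged) / n,
        "ai_reference_rate": sum(c["ai_related"] for c in tagged) / n,
    }
```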

This mixed-methods approach enables systematic comparison across the three identity categories, grounding theoretical debates on “AI as identity” in empirical discourse analysis of how audiences articulate affection, critique, and recognition of “AI-ness” in public comment culture.

Analyzed accounts 

https://www.youtube.com/@Neurosama/videos

https://www.youtube.com/@BlooGaming

https://www.youtube.com/@GawrGura

https://www.youtube.com/channel/UCCzUftO8KOVkV4wQG1vkUvg

https://www.youtube.com/@Kuzuha
