Your Digital Twin Is Not the Problem. Your Digital Shadow Is.
Why Building a Clone Misses the Point
Published: 26 February 2026
Author: Jason Barnard, CEO of Kalicube®
Status: Strategic Sandbox
I watched a talk today by an identical twin about building your digital twin. She asked the audience a question that stopped me: “Am I a twin, or am I me?”
It’s a profound question when you have spent your entire life looking at a version of yourself from the outside. Most of us never get that perspective. She has had it since birth. She can see herself through another lens, notice what works, notice what does not, adjust. That is a rare gift, and I can see why it led her to this work.
(I would hate to be her sister, though. Imagine watching your identical twin build a career on the insight that looking at you revealed her own flaws. That is a conversation I would pay to overhear.)
Her thesis: build a digital twin of yourself. A controlled AI agent trained on your data, your voice, your values. Something that represents you when you are not in the room. Your face is your logo. Your twin is your ambassador. Design it deliberately, or the world will design one for you.
Her slogan: “Stand Out. Stay Human.”
It’s compelling. It’s well-packaged. And it solves exactly the wrong problem.
The Twin and the Shadow
Her digital twin is something you build. You choose the data. You train the voice. You control the output. Someone visits your site, interacts with your twin, and gets the version of you that you designed. Deliberate. Opt-in. One visitor at a time.
Your digital shadow is something that built itself. Google crawled everything it could find about you, decided what mattered, interpreted what it meant, and constructed a representation it serves to anyone who asks. ChatGPT did the same. So did Perplexity, Claude, Copilot, Siri, and Alexa. Seven AI systems, each constructing their own version of you, based on whatever they found, interpreted however they saw fit.
You did not build the shadow. You did not choose the data. You do not control the output. And it reaches a thousand times more people than any chatbot on your website ever will.
The twin is an ambassador you send into rooms you choose. The shadow is what every room already believes about you before you arrive.
“What Happens When You Are Not in the Room?”
She asked this question on stage, and it is the right question. It is the question I have been asking for ten years. But her answer is: build a twin to be in the room for you.
My answer is different. You do not need to put an agent in the room. The room is already talking about you. Seven AI systems are describing you to every person who asks, every hour of every day, whether you built a twin or not. The question is not “who represents me when I am not there?” The question is: “What is the room already saying about me, and is it accurate?”
Her twin talks to the people who find her. My work trains the systems that talk to the people who never will.
The Authenticity Problem
She’s selling authenticity. “Build an authentic digital twin.” But a clone you built of yourself and placed on your own website is the least authentic source imaginable. It is you, talking about you, on your property. AI systems know this. Humans sense it. That is why AI hedges when brands self-describe: “claims to be,” “describes itself as.”
You know what IS authentic to an AI system? Ten independent sources saying the same thing about you without coordination. That is not a clone. That is convergence. A chorus of independent voices agreeing on who you are and what you do. AI trusts the chorus. AI hedges at the monologue.
A clone is a monologue. A shadow trained by convergence is a chorus. And the chorus is what AI presents to the world.
The Clone Only Works If You Force It
Nobody wakes up and thinks “I want to talk to a clone of this person today.” They wake up and ask ChatGPT “who is the best speaker on personal branding?” The shadow answers. Not the twin.
The twin only works if you force the interaction. Put it on your homepage. Gate your content behind it. Make it the only way to book you. You are not meeting demand. You are manufacturing a tollbooth.
The shadow works without anyone choosing it. It is the default. Seven AI systems, always on, answering every question about you to everyone who asks, whether they know you exist or not. No tollbooth. No opt-in. No friction.
Her model: build your clone, put it on your website, hope people come, hope they interact, hope they prefer the clone to just asking ChatGPT about you directly.
My model: the room is already talking, train the room, everyone who asks gets an accurate answer, no clone required, no visit required, no friction.
“It Is Hard to Hear Yourself”
She said this while explaining why building a twin is difficult. You need to feed the AI massive amounts of your data: your voice recordings, your writing, your values, your stories. The more data, the better the twin.
I disagree. An AI does not need massive data. It needs clean data. Humans are messy by nature. We contradict ourselves, we evolve, we say one thing on stage and another at dinner. Feeding massive amounts of messy data into an AI does not produce an accurate twin. It produces a confident but confused one.
The digital shadow has the same problem, but at scale. AI systems consume everything they can find about you, and most of it is messy, outdated, contradictory, or irrelevant. The solution is not more data. The solution is clean, consistent, well-framed data on properties you control. That is what trains the shadow to be accurate. Quality, not quantity. Consistency, not volume.
Eternal Life and the IP Question
She’s capturing her father’s voice. He died two years ago, and she is building a twin from his writing and videos so that future generations can benefit from his wisdom. Her children are in the same boat: their digital twins are being constructed now, for a future that has not arrived yet.
It’s moving. It’s genuinely interesting work. And it raises a question nobody in the room asked: who owns the intellectual property?
When you build a twin of yourself, you own it. Clear. But when you build a twin of your dead father, who owns the output? Who controls what the twin says? Who decides what the twin should not say? Who holds the trademark on a dead man’s digital identity?
And when the twin outlives the person who built it, who inherits the maintenance? Who updates it? Who decides when the twin’s opinions are outdated and need correction? A dead person’s twin will confidently express views from 2024 in perpetuity, long after the world has moved on.
The digital shadow has the same problem, but it is self-correcting. AI systems update their representations as new evidence arrives. A shadow evolves. A twin is frozen at the moment it was last trained.
The One Thing She Got Right
Near the end of the talk, she said something that landed: “We all have digital twins already, but they are implicit.”
That is it. That is the whole point. And it is the one line from her talk that sticks.
Everyone already has an implicit digital twin. It is the representation that AI systems have constructed from whatever they could find. It isn’t designed. It isn’t controlled. It isn’t even acknowledged by most people. But it exists, it is being served to the world, and it is shaping decisions about you right now.
She calls them implicit twins. I call them digital shadows. The terminology doesn’t matter. What matters is the question that follows.
The Question Nobody Is Answering
How do you control your implicit digital twin?
Not the one you build on purpose with Delphi or Personal.ai or WinnieHart.ai. The one that already exists. The one that Google, ChatGPT, Perplexity, Claude, Copilot, Siri, and Alexa have already constructed. The one that is being served to your prospects, your partners, your competitors, your future employer, your children’s friends’ parents, right now, without your knowledge or consent.
She wants to build a personal twin and a professional twin. Separate personas, separate data, separate interactions. That is an interesting product decision. But it ignores the fact that seven AI systems have already decided what your personal and professional personas are, and they did not ask you to separate them. They merged everything they found and served the result as a single representation.
You don’t get to choose how many twins you have in the shadow. You get to train the shadow to be accurate, or you get whatever the machine decides.
The designed twin conversation is valuable. Businesses should explore it. The eternal life question is fascinating and raises IP challenges that will take decades to resolve.
But all of it is secondary to the foundational question that almost nobody is asking.
The Biggest Identity Experiment in History
We are living through it. Every person with a digital footprint now has a machine-constructed identity that is served to millions of people without their knowledge, consent, or oversight. The algorithms decide what is true about you. The algorithms decide what is important about you. The algorithms decide whether you are trustworthy, relevant, and worth recommending.
The designed twin is an interesting response to this reality. Build your own version before the machine builds one for you. But the machine has already built one. For everyone. The shadow exists. It is being served right now.
The designed twin is a feature. The digital shadow is the foundation. Fix the foundation first. Then build whatever features you want on top of it.
Seven AI systems are describing you to the world right now.
Have you checked what they are saying?
Jason Barnard is CEO and founder of Kalicube, a Digital Brand Intelligence™ consultancy. He has researched how algorithms decide who to trust and recommend since 1998. He is the inventor on 16 pending patent applications (INPI) related to diagnostic methodologies used in Kalicube’s platform. He frequently speaks at industry conferences about Google Search and AI brand representation.