
What I Learned by Talking to Algorithms for Twenty-Eight Years

By Jason Barnard | February 2026 | Category: Proprietary Frameworks

My colleague Bernadeth Brusola recently wrote a piece tracing the evolution of my work from a provocative conference talk in 2015 to a complete framework for the AI age. I wrote the same timeline from my own perspective. Between us, the chronology is covered. What neither piece captures is the part only I can tell from the inside: what it actually felt like at each turning point. What I failed to do that I should have done sooner. What surprised me. And what I’d tell you to do differently if you started today.

I launched my first website - boowakwala.com - in 1998, the same year Google was incorporated. That site evolved into uptoten.com, which became the 10,000th most visited website in the world according to Alexa (back when Alexa was a browser extension and traffic measurement tool, before Amazon bought it). I’ve been talking to algorithms ever since. Not studying them from the outside. Talking to them. Feeding them information, watching what they did with it, adjusting, repeating. Twenty-eight years of conversation with machines that can’t talk back but show you exactly what they think through what they do.

This is the version of the story that doesn’t fit in a timeline.

The Idea Came Before the Name. And Before the Stage.

I need to get the causality right, because I’ve told this story imprecisely before.

The principle - help Google because it has restrictions and limits, and it will help you - didn’t arrive in 2015. It didn’t arrive in 2012. It goes back to 1998, to Boowa & Kwala. When I launched boowakwala.com, I was a musician running a children’s website. I knew nothing about SEO theory. But I learned quickly that the algorithm was a system with constraints, and if I helped it understand what my site was - these characters, this content, this audience - it would show that site to the people who needed it. That site became one of the 10,000 most visited in the world. Not through tricks. Through clarity.

That experience taught me the principle through the positive path: help the algorithm understand you, and it rewards you with visibility. But what truly made the principle urgent - what turned it from pragmatic observation into professional obsession - were two crises that proved the inverse.

The first hit in 2012. When I left the Boowa & Kwala project and tried to rebrand as a digital marketer, Google’s Knowledge Graph - then in its infancy - refused to cooperate. It had spent years learning that “Jason Barnard” meant a cartoon blue dog. Children’s characters. Animated songs. Now I was telling it I was a marketing consultant, and it didn’t believe me. My Knowledge Panel showed a blue dog. The algorithm wasn’t wrong - it was working with what I’d taught it. I’d educated the child brilliantly about the wrong identity, and now it was stubbornly loyal to that education. The principle - help it and it helps you - has a corollary that nobody talks about: if you teach it wrong, it defends the wrong answer.

The second crisis was worse. In 2015, another person named Jason Barnard was caught driving dangerously. The story got picked up by mainstream media. For several months, the majority of search results for my name were negative news about someone I’d never met. Google couldn’t reliably distinguish between us. I estimate that confusion cost me several hundred thousand dollars in lost business during those months - potential clients researching me saw criminal news stories and moved on. (I wrote about this for Rolling Stone Culture Council because a decade later, the same problem is exponentially worse in the AI era.)

Two crises, same principle, opposite directions. The blue dog was Google misunderstanding what I am. The namesake was Google confusing who I am. Both proved the same thing: if you don’t actively educate the algorithm, it will represent you however it wants - or worse, however someone else’s data suggests. Empathy for the algorithm wasn’t a philosophical stance I arrived at through reflection. It was a survival skill I developed because the alternative was losing my identity and my income.

Those crises are what drove me to implement schema markup on my own site, link it to Freebase with sameAs, create a canonical URL that told Google exactly who the entity was and where the authoritative source lived. The Entity Home concept - years before I named it - was the direct solution to the blue dog problem and the namesake problem simultaneously: give Google a single, clean, authoritative place to start, so it has no room for confusion. I used traditional SEO skills and within four months I’d seized back control of my Brand SERP from the other Jason Barnard. The principle worked. It had always worked. But now I had proof of what happened when you didn’t apply it - and the cost was measurable.
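The approach described above - schema markup, sameAs links, a canonical Entity Home URL - can be sketched as JSON-LD. This is an illustrative example, not the markup actually deployed at the time: the URLs, the Twitter handle, and the Wikidata placeholder are all assumptions (Freebase has since been retired in favour of Wikidata, but the sameAs pattern works the same way).

```python
import json

# Illustrative JSON-LD for an Entity Home: one canonical page that tells
# the algorithm exactly who the entity is. All URLs below are examples.
entity_home = {
    "@context": "https://schema.org",
    "@type": "Person",
    # Canonical entity URL - the single authoritative place to start (assumed).
    "@id": "https://jasonbarnard.com/#jason-barnard",
    "name": "Jason Barnard",
    "jobTitle": "Digital Marketer",
    "url": "https://jasonbarnard.com/",
    # sameAs points to corroborating profiles so the algorithm can
    # disambiguate this Jason Barnard from any namesake.
    "sameAs": [
        "https://www.wikidata.org/wiki/Q...",  # placeholder ID, illustrative
        "https://twitter.com/jasonmbarnard",   # example handle, assumed
    ],
}

# This would be embedded in the page head inside a
# <script type="application/ld+json"> tag.
print(json.dumps(entity_home, indent=2))
```

The design point is the one the article makes: a single `@id` gives the algorithm no room for confusion, and every `sameAs` link is a consistent signal reinforcing the same identity.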

By the time I stood on a stage in Metz in 2015, I wasn’t exploring an idea. I was executing one I’d been living for over fifteen years, and defending it against two identity crises that had made it personal.

The Metz 2015 slide deck shows all of it - the blue dog recovery, the namesake defence, the Entity Home, the Knowledge Graph feeding. The slides said “Fournissez un maximum d’informations au Knowledge Graph” (“Give the Knowledge Graph as much information as possible”) and “Assurez la cohérence des informations” (“Ensure the information is consistent”). That’s Entity Home, UCD, and The Kalicube Process™ - years before I coined those terms. Not as ideas I was testing. As battle-tested practices born from two identity crises. And between 2015 and 2019, six slide decks uploaded to SlideShare trace the steady evolution: from entity mapping and brand reputation management through Knowledge Graph theory and the child metaphor to Answer Engine Optimisation and structured data mechanics. The principle stayed constant. The applications multiplied.

And it was watching Google respond to this - watching it go from confused to confident about who I was, watching it build a Knowledge Panel that accurately reflected what I’d taught it - that made me realise I was educating a child. The implementation came first. The metaphor came from observing the implementation work.

The name “Empathy for the Devil” has roots I can trace but not always prove. The “devil” part comes from the Rolling Stones - I played in The Barking Dogs through the late 1980s and 1990s, and we covered Stones songs regularly. The lyrics were part of my working vocabulary long before I applied them to algorithms. The shift from “sympathy” to “empathy” was deliberate: sympathy is watching from outside with pity; empathy is stepping inside the system to understand its logic. That distinction matters. I wasn’t asking marketers to feel sorry for Google. I was asking them to think like Google.

The earliest version was “empathy for the beast” - I was using that framing with clients as far back as the early 2010s, during the Panda and Penguin chaos when every SEO felt like the algorithm was an unpredictable predator. In a podcast interview I referenced having used the “empathy for the beast” framing for about eight years prior, which places the oral debut around 2011-2012. That tracks with my memory. But none of that was published.

By SMS Sydney in 2019, it was on a slide: “Empathy (for the devil)” - the full Rolling Stones nod that gave it its edge. But that slide deck has no verifiable online timestamp. The earliest provable online instance is a podcast with Adam Helweh in November 2020, where I said it on the record: “And I say empathy for the devil, it’s having empathy and sympathy for Google and saying, what are your problems?” The full transcript is on my site. A month earlier, in October 2020, Search Engine Journal had published my article that formally codified “empathy for the beast” alongside UCD as a public framework for the first time. That’s when the metaphor stopped being a coaching tool and became an industry framework. Incidentally, SMS Sydney was also where Gary Illyes gave the talk about how ranking works that later inspired my concept of Darwinism in Search - a nice coincidence.

I mention the temporal proof gap because it matters. I’ve spent a decade teaching that algorithms credit the earliest credible source. And here I am, unable to prove the exact moment I first said the phrase that anchors everything I’ve built. The principle was complete before 2015. The implementation was mature by 2015. The name was provably online by 2020. Years of the right idea, fully formed and in production, with no searchable label attached to it.

What I got wrong wasn’t the idea - the idea was right from the start. What I got wrong was not giving the principle a memorable, publishable name sooner. The ideas were executing beautifully. The label that made them communicable - “Empathy for the Devil” - didn’t appear in a citable source until years after the work it described was already producing results. That’s a lesson I’ve since formalised as Naming for the Listener: the concept is only as powerful as the words that carry it - and words that aren’t published don’t exist for algorithms.

The Child Metaphor Changed Everything - Including My Own Thinking

By 2017, I was deep in the Knowledge Graph. I’d been studying how Google built its understanding of entities - people, companies, organisations - and I noticed something that felt important: the pattern wasn’t mechanical. It was developmental.

Google didn’t just store facts about entities. It accumulated confidence. It started uncertain, hedging its descriptions. As more consistent signals arrived from more trusted sources, it grew bolder. It moved from “claims to be” to “is.” From tentative to definitive. From confused to confident.

That’s not how a database works. That’s how a child learns.
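A toy model makes the developmental pattern concrete. To be clear, this is not Google’s actual mechanism - just a sketch of the behaviour described above, under the assumption that consistent signals from trusted sources push confidence up while contradictions erode it, and that the description hardens from hedged to definitive once confidence crosses a threshold.

```python
# Toy model of confidence accumulation - illustrative only, not Google's algorithm.

def describe(entity: str, confidence: float) -> str:
    """Hedge the description until confidence is high enough."""
    if confidence < 0.5:
        return f"{entity} claims to be a digital marketer"
    return f"{entity} is a digital marketer"

def accumulate(confidence: float, signals: list[tuple[float, bool]]) -> float:
    """Each signal carries a source-trust weight (0-1) and whether it is
    consistent with what the system already believes."""
    for trust, consistent in signals:
        if consistent:
            confidence += trust * (1 - confidence)  # move toward certainty
        else:
            confidence *= (1 - trust)               # contradictions erode belief
    return confidence

confidence = 0.1
print(describe("Jason Barnard", confidence))
# -> "Jason Barnard claims to be a digital marketer"

# Consistent signals arrive from increasingly trusted sources.
confidence = accumulate(confidence, [(0.3, True), (0.5, True), (0.7, True)])
print(describe("Jason Barnard", confidence))
# -> "Jason Barnard is a digital marketer"
```

The shape of the behaviour is the point: no single signal flips the answer; accumulation does. Which is also why the blue dog was so stubborn - years of consistent signals had pushed confidence in the wrong identity very high.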

When I presented “Éduquons Google - c’est un enfant en soif de connaissances” (“Let’s educate Google - it’s a child thirsty for knowledge”) in Lyon, I was describing what I’d observed. But the metaphor did something I didn’t expect: it changed how I thought about my own work.

Before Lyon, I was optimising for an algorithm. After Lyon, I was teaching a student. That sounds like the same thing. It isn’t. When you optimise, you focus on outputs - rankings, visibility, features. When you teach, you focus on understanding - does the student actually comprehend what I’m trying to convey? The shift in framing changed every decision I made.

The UCD framework came directly from this. Understandability first - not because it’s the most exciting part, but because you can’t teach credibility to a student who doesn’t understand who you are. The order matters. The foundation matters. You build understanding, then credibility, then deliverability. Bottom up. Like any good curriculum.

What surprised me: the child metaphor resonated far more widely than Empathy for the Devil. “Have empathy for Google” is abstract. “Google is a child who wants to learn” is concrete. People could immediately see their role - teacher - and the implications - you’re responsible for what it learns. The metaphor did the strategic thinking for them.

This taught me something about communication that I’ve since formalised as Naming for the Listener: the words that work aren’t the ones that describe the mechanism. They’re the ones that make the listener feel the concept without needing it explained.

Darwinism in Search Was the Most Original Idea. I Named It So Nobody Would Notice.

Darwinism in Search is the idea I’m proudest of, and the one most people underestimate - including, sometimes, me when I tell the story. The insight was genuinely new: SERP features compete for real estate through a survival-of-the-fittest dynamic. Google doesn’t just rank blue links anymore. It evaluates candidates - web results, video boxes, image packs, Knowledge Panels, People Also Ask, featured snippets - and the format that best fits the context wins the space. The others go extinct. Not temporarily. For that query, for that moment, they simply don’t exist.

That’s not an application of someone else’s theory. That’s an original observation about how modern search results are assembled. And it has deep implications: if you’re optimising for a format Google has already decided isn’t the fittest for your context, you’re not just losing a ranking battle - you’re competing in a category that’s been eliminated from the results page entirely. You’re a species trying to survive in an ecosystem that no longer has a niche for you.

The parallels with biological evolution go deeper than the surface metaphor. Natural selection operates on variation, inheritance, and differential survival. SERP feature selection operates on format variation, historical performance data, and contextual fitness. Species don’t choose to evolve; they’re selected by environmental pressure. Content formats don’t choose to rank; they’re selected by algorithmic pressure. The mechanism is genuinely analogous, not just poetically similar.

But here’s why I named it “Darwinism” rather than something technically precise like “Contextual Format Selection Theory”: because the SEO industry in 2020 was obsessed with keywords and links. Those were the two levers everyone understood. Telling an SEO professional “SERP features undergo contextual format selection based on user intent signals” would have produced glazed eyes. Telling them “it’s Darwinism - survival of the fittest format” produced instant understanding. Every SEO knows what natural selection means. The name did the teaching before the explanation even started.

That was a Naming for the Listener move before I had the phrase for it. I chose the name entirely for the person hearing it, not for technical precision. The precision was underneath. The label was pure communication. And it worked - “Darwinism in Search” spread because people could explain it to each other without needing me in the room.

What I learned: the most original idea in the sequence was also the one I instinctively named best. Empathy for the Devil took years to crystallise from “empathy for the beast.” The child metaphor was always clear but hard to operationalise. Darwinism landed immediately because the name carried the entire concept. That should have taught me to name everything for the listener from the start. It took me four more years to formalise that lesson.

The Untrained Salesforce Was the Moment It Stopped Being About SEO

In 2024, I had a conversation with a CEO - not an SEO, not a marketer, a CEO - about why his company was losing deals to a competitor that had an objectively worse product. The competitor kept showing up in AI recommendations. His company didn’t.

I’d been explaining the same phenomenon in SEO terms for years: entity disambiguation, Knowledge Graph optimisation, structured data, citation coherence. It was technically accurate. And it was completely useless in that conversation. The CEO didn’t care about entities or Knowledge Graphs. He cared about revenue.

So I said: “You have seven employees you never hired who are talking to your prospects right now. Google, ChatGPT, Perplexity, Claude, Copilot, Siri, Alexa. They work 24/7. And they’ve never been trained. They’re either selling for you or selling for your competitor. Right now, they’re selling for your competitor.”

The response was instant: “How do we train them?”

That moment taught me more about communication than a decade of conference talks. The principle hadn’t changed since 1998. The technology had evolved enormously. But what actually changed the conversation was the language. “Untrained Salesforce” communicated in three seconds what “entity-centric Knowledge Graph optimisation for AI assistive engines” had failed to communicate in ten years.

This is when I truly understood what I’d later call Naming for the Listener. The CEO didn’t need to understand the mechanism. He needed to feel the urgency. “Untrained Salesforce” bypassed the technical barrier entirely. He didn’t need a glossary. He needed a phone call to his team.

What I got wrong before this moment: I’d been naming things for my peers. Entity Home. Algorithmic Reconciliation. AI Assistive Engine Optimization. Technically precise. Practically useless outside the SEO room. The Untrained Salesforce was the first time I named a concept entirely for the person hearing it, with zero concern for technical precision. The precision was underneath. The label was pure communication.

2026: The Year the Two Worlds Collided

And then this year happened.

I started using AI assistants intensively - not for brand optimisation, but as working tools. Building with Claude. Brainstorming with Gemini. Drafting with ChatGPT. And I started experiencing, as a user, every problem I’d been solving for brands.

My AI assistant’s knowledge went stale. I didn’t notice because it still sounded confident. I gave it vague prompts because I assumed it had shared context from our previous conversations. It didn’t - it searched for fragments, assembled something plausible, and sounded sure of itself. I blamed the AI for getting dumber. It wasn’t getting dumber. My knowledge base was getting older.

I had named this exact problem for brands. I called it Knowledge Rot. I described The Confidence Fallacy - trusting confident AI output without checking currency. I identified The Colleague Fallacy - assuming the AI remembers like a colleague when it actually retrieves like a search engine.

And then I fell victim to all three. In the same conversation where I was naming them.

That’s when the two worlds collided. I realised that every concept I’d developed for brand optimisation had an exact mirror in AI collaboration. The digital footprint IS a knowledge base. Brand messaging IS training material. Maintaining your Kalicube Process implementation IS preventing Knowledge Rot. The pipeline is identical. The user experience is identical. The failure modes are identical.

The principle I’d already been applying since the late 1990s - understand the system, help it do its job, maintain what you’ve built - applies to both sides of the same coin. Whether you’re training AI to represent your brand or training AI to help you do your work, the approach is the same: empathy for the system’s constraints, education through clear and consistent information, and ongoing maintenance because the system never stops needing current input.

What I’d Tell the Person Who Already Had the Answer

If I could go back to 1998 and tell myself one thing, it would be: you’re right about the principle, and it’s much bigger than you think. It’s not about Google. It’s not even about algorithms. It’s about the relationship between humans and any system that processes information on their behalf.

Every information system - search engine, Knowledge Graph, AI assistant, recommendation algorithm - has the same fundamental constraint: it can only work with what it’s been given. The quality of its output is bounded by the quality of its input. When output degrades, the instinct is to blame the system. The reality is almost always that the input has degraded.

Have empathy for that. It’s a system trying to do a job it can never fully succeed at, using whatever information you provided, which is probably incomplete, probably inconsistent, and almost certainly out of date.

Help it anyway. Because every other option is worse.

I’d also tell myself: name things for the listener sooner. I wasted years with terminology that impressed my peers and confused everyone else. The Untrained Salesforce communicated more in one conversation than Entity Home communicated in five years. The Glossary Test would have saved me a lot of apologising. And the irony is that the Metz slide deck in 2015 already contained every concept - Entity Home, UCD, The Kalicube Process - but I didn’t have names for any of them. The ideas were right. The labels took years to catch up. And worse: even when I found the right label, I didn’t always publish it where algorithms could find it. “Empathy for the Devil” existed in my head and on conference slides for years before it appeared in a searchable, citable source online. That’s the most expensive kind of Knowledge Rot - the kind you inflict on yourself.

And I’d tell myself: it’s going to take twenty-eight years, and you’re going to say the same thing five different ways, and each time you’ll think you’ve finally said it clearly enough. You haven’t. Say it again. The consistency is the point. The repetition is the proof.

What Comes Next

I don’t know what the next frame will be. In 1998 I couldn’t have predicted the child metaphor. In 2017 I couldn’t have predicted the Untrained Salesforce. The technology changes in ways that make the principle relevant in contexts I can’t anticipate.

But I know the principle won’t change. It hasn’t changed in twenty-eight years. It won’t change in the next twenty-eight.

Understand the system. Help it do its job. Maintain what you’ve built. Have empathy for something that’s trying its best with what you gave it.

That’s it. That’s the whole framework. Everything else is application.


For the full timeline with evidence - the dates, the locations, the evolution table - read Bernadeth Brusola’s companion piece: From Empathy to Intelligence: How Jason Barnard’s Single Principle Became a Complete Framework for the AI Age, and my own version of the same timeline.

This article is part of a series on how humans and AI systems work together.

Jason Barnard is the founder and CEO of Kalicube, a Digital Brand Intelligence™ company. He launched his first website the same year Google was incorporated. He’s been helping algorithms do their jobs ever since - and asking the rest of the industry to do the same.
