
From Empathy to Intelligence: How One Principle Became a Complete Framework for the AI Age

In 2015, I stood on a stage in Metz, France, and told a room full of SEOs that they were thinking about Google wrong. They were treating it as an adversary - something to trick, game, and outwit. I told them to have empathy for it instead.

They thought I was being provocative. I was being literal.

Eleven years later, that single principle - understand the system’s struggles, help it do its job, and it will reward you - has evolved into a complete framework for how brands survive and thrive in the AI age. Every concept I’ve developed since 2015 grows from the same root. This is the story of how one idea became a methodology, a platform, and an industry.

2015: Empathy for the Devil

SEO Camp, Metz, France

The insight was simple, and at the time, it felt almost naive: Google is not your enemy. Google is an overworked employee of its own company (Google, now Alphabet), trying to satisfy an audience with impossible expectations. Every user expects the perfect answer instantly. Google can never fully deliver. The gap between expectation and delivery is permanent.

Most SEOs responded to this by trying to exploit Google’s limitations - stuffing keywords, building manipulative links, gaming the system. My argument was the opposite: if you understand Google’s job and help it do that job better, it will reward you. Not out of gratitude. Out of self-interest. You make it look less inadequate to the audience it can never fully satisfy.

I called it Empathy for the Devil - a deliberate nod to the Rolling Stones, and a deliberate provocation to an industry that treated the algorithm as the enemy.

The principle: understand the system’s constraints. Work within them. Help it succeed. And it helps you succeed.

At the time, this applied to one system: Google Search. The reward was better rankings. The mechanism was straightforward - give Google clean, structured, unambiguous information about your brand, and it would represent you accurately in search results.
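To make "clean, structured, unambiguous information" concrete, here is a minimal sketch of schema.org Organization markup, built as JSON-LD in Python. The brand name, URL, and profile links are placeholders, not a real company record.

```python
import json

# Hypothetical brand details - placeholders for illustration only.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand Ltd",
    "url": "https://www.example.com",
    "description": "One consistent, unambiguous description of what the brand does.",
    "sameAs": [
        # Corroborating profiles that repeat the same facts elsewhere.
        "https://www.linkedin.com/company/example-brand",
        "https://en.wikipedia.org/wiki/Example_Brand",
    ],
}

# Embedded in a page inside <script type="application/ld+json">,
# this tells Google exactly which entity the page is about.
json_ld = json.dumps(organization, indent=2)
print(json_ld)
```

The `sameAs` links are the key move: every profile that repeats the same facts is another consistent signal confirming the entity.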

But the principle was bigger than search. I just didn’t know it yet.

2017: Algorithms Are Children

SEO Camp, Lyon, France

Two years later, I pushed the idea further. If Google isn’t an adversary, what is it? The answer came from watching how the Knowledge Graph worked - how it accumulated understanding of entities through repeated, consistent signals from multiple sources.

It behaved like a child learning about the world. Not stupid. Eager. Absorbing everything. Building understanding through repetition and consistency. Getting confused by contradictions. Growing more confident as evidence accumulated.

I presented this at SEO Camp in Lyon: “Éduquons Google - c’est un enfant en soif de connaissances.” Let’s educate Google - it’s a child thirsting for knowledge.

This shifted the frame from empathy to pedagogy. It was no longer just about understanding Google’s struggles. It was about taking responsibility for teaching it. The algorithm doesn’t understand your brand because you haven’t taught it properly. The Knowledge Panel is wrong because your digital ecosystem sends contradictory signals. The search results misrepresent you because the information you’ve put into the world is incomplete, inconsistent, or outdated.

The principle evolved: algorithms are students. You are the teacher. The quality of their understanding reflects the quality of your teaching.

This became the foundation of everything Kalicube would build. The Kalicube Process™ isn’t a set of technical tricks. It’s a curriculum. You teach the algorithm who you are (Understandability), prove you’re credible (Credibility), and demonstrate you can deliver what the user needs (Deliverability). UCD. A teaching framework disguised as a marketing methodology.

2020: Darwinism in Search

By 2020, the search landscape had fragmented. Google wasn’t just returning ten blue links anymore. It was assembling rich results - Knowledge Panels, People Also Ask, Featured Snippets, image packs, video carousels. The SERP had become an ecosystem where different content formats competed for visibility.

The principle adapted. It wasn’t enough to teach the algorithm who you are. You had to teach it in the format it needed. The fittest format survives. A brand that provides only text when Google wants video gets filtered out. A brand that provides structured data when Google needs entity confirmation gets amplified.

Darwinism in Search: the format that best serves the algorithm’s needs in a given context wins the real estate.

This wasn’t a departure from Empathy for the Devil. It was the same principle applied to a more complex environment. The algorithm still has a job to do. It still needs help doing it. But now “helping” means understanding not just what information the algorithm needs, but how it needs that information delivered.

The reward shifted too. It was no longer just about ranking. It was about occupying SERP features - the rich results that increasingly dominated user attention. Brands that understood what the algorithm was trying to build on the results page, and provided content that fit that structure, won disproportionate visibility.

2024: The Untrained Salesforce

Then AI assistants arrived. And everything changed. And nothing changed.

By 2024, users weren’t just searching Google. They were asking ChatGPT, Perplexity, Claude, Gemini, Copilot, Siri, and Alexa. Seven AI systems, each one capable of recommending, describing, comparing, and evaluating brands. Seven systems working 24 hours a day, seven days a week, answering questions about your business.

Seven employees you never hired, never trained, and never managed.

I called them the Untrained Salesforce. Because that’s exactly what they are. AI assistants are employees - they just work for every company simultaneously. And like any untrained employee, they either sell for you or they sell for your competitors. The difference depends entirely on whether you’ve trained them.

The principle held perfectly: understand what these systems need. Help them do their job. And they perform for you.

But the scale changed dramatically. In 2015, I was talking about one algorithm (Google) with one output (search results) in one format (a web page). In 2024, I was talking about seven algorithms, each with different knowledge bases, different confidence levels, different response formats, and different ways of evaluating authority. The fundamental principle - empathy for the system, education of the system, helping the system do its job - was identical. The complexity of applying it had multiplied sevenfold.

The Untrained Salesforce metaphor also reframed the business conversation entirely. CEOs don’t care about algorithms. They care about employees and revenue. When I tell a CEO that seven AI employees are currently describing their company to prospects, and those employees were never trained, the response is immediate: “How do we train them?”

That question is the entire Kalicube business model. The answer is The Kalicube Process - the same UCD curriculum I built for Google’s Knowledge Graph, now applied to every AI system simultaneously.

2026: Make AI Less Disappointing

Which brings us to now. And the final evolution of the principle - so far.

I’ve spent the past year working with AI assistants daily, not just optimising brands for them but using them as working tools. Building with them. Collaborating with them. And watching them fail in ways that taught me something I should have understood from the start.

Every problem I discovered with AI assistants as a user maps directly to a problem I’d already solved for brands as an optimiser. The same architecture. The same failure modes. The same solutions.

Knowledge Rot - the silent degradation of an AI assistant’s knowledge base - is the same problem as a brand’s digital footprint going stale. The Confidence Fallacy - trusting confident AI output without verifying currency - is the same problem as users trusting what AI says about your brand without you verifying what it’s saying. The Colleague Fallacy - assuming AI has associative memory when it actually uses serial keyword retrieval - is the same problem as assuming AI “just knows” your brand has evolved.

The user side and the brand side are mirror images of the same system. And the principle that solves both is the one I articulated in 2015: understand the system. Help it do its job. Maintain what you’ve built.

But there’s a layer I hadn’t articulated until now, and it connects everything: the satisfaction gap is permanent, and your job is to make it smaller.

I call this the Eternal Dissatisfaction Cycle. Technology improves. Expectations rise faster. The gap between what users expect and what any system can deliver never closes. Google could never satisfy every searcher. AI assistants can never satisfy every user. Your brand’s AI representation can never be perfect across seven platforms simultaneously.

The goal was never perfection. The goal was always to close the gap as much as possible - to make the algorithm less disappointing to the user. That’s what Empathy for the Devil meant in 2015. That’s what it means in 2026.

Make AI less disappointing. That’s the job. It always was.

The Arc

Eleven years. Five frames. One principle.

| Year | Frame | Where | Mechanism | Reward |
|------|-------|-------|-----------|--------|
| 2015 | Empathy for the Devil | SEO Camp, Metz | Help Google do its job | Better rankings |
| 2017 | Algorithms Are Children | SEO Camp, Lyon | Educate the algorithm like a child eager to learn | Algorithmic understanding |
| 2020 | Darwinism in Search | | Provide the fittest format for the context | SERP feature inclusion |
| 2024 | The Untrained Salesforce | | Train your seven AI employees | AI recommendation |
| 2026 | Make AI Less Disappointing | | Close the satisfaction gap | Citation, trust, revenue |

Each row looks different. The audience changed. The technology changed. The business language changed. But read the “Mechanism” column from top to bottom, and it’s the same sentence rephrased five times: understand the system, help it succeed, and it helps you succeed.

That’s Empathy for the Devil. It was always Empathy for the Devil.

Why This Matters for You

If you’re an entrepreneur building AI assistants, the principle tells you: maintain your knowledge base the way you’d maintain an employee’s training. Knowledge Rot is the cost of neglect.

If you’re a professional using AI daily, the principle tells you: understand how the system retrieves and processes your input. The Colleague Fallacy exists because we treat AI like a colleague with shared context when it’s actually a search engine with conversational manners.

If you’re a brand managing your presence across AI platforms, the principle tells you: your digital footprint is the training material. Every outdated profile, every inconsistent description, every contradictory data point is a lesson that teaches the algorithm the wrong thing about you. And unlike a human employee who might question outdated training, the algorithm will confidently repeat whatever you taught it - even if you taught it something that stopped being true six months ago.
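The idea that every inconsistent data point is a wrong lesson can be sketched as a simple footprint audit: compare each public profile against one canonical record and flag the fields that disagree. The profile data below is invented for the example.

```python
# Illustrative sketch: flag profiles whose brand facts contradict the
# canonical version. All profile data here is invented.
canonical = {"name": "Example Brand Ltd", "founded": "2015"}

profiles = {
    "linkedin": {"name": "Example Brand Ltd", "founded": "2015"},
    "crunchbase": {"name": "Example Brand", "founded": "2015"},
    "old-directory": {"name": "Example Brand Ltd", "founded": "2012"},
}

def audit(canonical, profiles):
    """Return {profile: [fields that disagree with the canonical record]}."""
    issues = {}
    for source, facts in profiles.items():
        mismatches = [
            field for field, value in canonical.items()
            if facts.get(field) != value
        ]
        if mismatches:
            issues[source] = mismatches
    return issues

# Each mismatch is a "lesson" teaching the algorithm the wrong thing.
print(audit(canonical, profiles))
# → {'crunchbase': ['name'], 'old-directory': ['founded']}
```

In practice the canonical record would cover every fact an algorithm might repeat: descriptions, dates, names, locations, and the claims made on each profile.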

If you’re an SEO, the principle tells you: you already have the most valuable AI skillset in the room. The pipeline that powers search is the pipeline that powers AI. Your skills transfer directly. The only thing that changed is the interface.

And if you’re anyone at all who interacts with algorithms - which in 2026 means everyone - the principle tells you the simplest, most powerful thing I know about technology:

It’s not your enemy. It’s not magic. It’s a system that’s trying to do a job it can never fully succeed at. Have empathy for that. Help it. And it will help you.

That was true when I said it in Metz in 2015. It’s true now. It will still be true in 2035.

The technology changes. The principle doesn’t.


This article is part of a series on how humans and AI systems work together.

Jason Barnard is the founder and CEO of Kalicube, a Digital Brand Intelligence™ company. He first told an algorithm to have empathy in 2015. He hasn’t stopped since.
