
Making AI Less Disappointing: The Eternal Dissatisfaction Cycle and Why the Brand That Helps AI Perform Wins Everything

Author: Jason Barnard, February 2026
Builds on: “Empathy for the Devil” (2015), Untrained Salesforce, Darwinism in Search
Rolling Stones debt: two songs, one decade, one idea


The Insight in One Sentence

You don’t win by getting AI to recommend you. You win by making AI less disappointing to humans who will never be satisfied.


How I Got Here

In 2015, I stood on a stage and told a room full of SEOs to have empathy for Google. I called it “Empathy for the Devil” - a deliberate provocation. At the time, every SEO conversation cast the algorithm as the enemy. We were at war with the machine.

My argument was the opposite: Google has a job to do. It’s trying to satisfy users who have unrealistic expectations. If you help Google do its job better, it will reward you - not out of gratitude, but out of self-interest. You make it look less crap to an audience it will never fully satisfy.

The industry thought I was naive.

Eleven years later, I’m watching the same principle play out on a scale I didn’t anticipate. And it took a conversation with Bernadeth to crystallise what had been nagging at me for months.


Part 1: The Eternal Dissatisfaction Cycle

Humans are spoilt. That’s the easy answer. But the real answer is more interesting and more useful.

We are permanently dissatisfied with technology. Every improvement resets the baseline. What was magical yesterday is expected today and inadequate tomorrow. This isn’t entitlement - it’s the hedonic treadmill applied to machines.

I’ve watched it happen three times now:

The search version (2005-2024):

  • 2005: “Google found the answer on page 1!” → Amazing
  • 2010: “Google found the answer in the top 3!” → Expected
  • 2015: “Google gave me the answer without clicking!” → Expected
  • 2020: “Google gave me the wrong featured snippet!” → Furious
  • 2024: “AI gave me a synthesized answer with citations!” → Amazing
  • 2025: “AI gave me a slightly hedged answer!” → Furious

The technology improved by orders of magnitude. User satisfaction stayed flat or declined.

The AI version (2023-now):

  • Stage 1: “It can hold a conversation!” → Marvel
  • Stage 2: “I need to give it context for good answers” → Acceptable
  • Stage 3: “Why do I need to give it context? It should just know” → Frustrated
  • Stage 4: “It hedged on something it should be certain about” → Angry
  • Stage 5: “It confidently recommended my competitor” → Catastrophe

A couple of years ago, the idea that we could have a meaningful conversation with a machine wasn’t even on the table. Now we’re furious when it hedges.

The Paradox That Makes This Worse

Here’s what I find fascinating: the better AI gets, the lazier we get with our inputs. When search was bad, we learned Boolean operators and crafted careful queries. When search got good, we typed “that restaurant near the thing.” When AI got conversational, we stopped giving context because we trusted it to just know.

And then the outputs got worse - not because the AI got dumber, but because we gave it less to work with. And then we were furious.

IMPROVEMENT → TRUST → LAZINESS → WORSE RESULTS → FRUSTRATION
     ↑                                                    │
     └────────────────────────────────────────────────────┘
                    (cycle repeats, faster each time)

Our expectations move forwards faster than the technology. Even if the technology speeds up, our expectations will always move faster. Not because we’re spoilt. Because once something works, it becomes invisible. Nobody marvels at electricity anymore. The moment AI gave us a good conversation, “good conversation” became the floor, not the ceiling.

This cannot be fixed. It is human nature. But it can be exploited by the brand that understands it.


Part 2: Empathy for the Devil - Then and Now

2015: The Original Principle

Help the algorithm achieve its role. It will reward you because you make it look good to an audience it can never fully satisfy.

That was the argument. Google has a job. Its job is to satisfy users. Users have unrealistic expectations. Google can never fully meet them. But the brand that helps Google get closer to meeting them earns preferential treatment - because Google’s survival depends on user satisfaction, and you just contributed to user satisfaction.

The industry ignored it. “Have empathy for Google” didn’t fit the adversarial mindset. People wanted hacks, not philosophy.

2026: The Same Principle, Higher Stakes

Help AI be less disappointing. It will cite you, recommend you, and advocate for you - because you’re the brand that closes the gap between what the human expects and what the machine can deliver.

Same principle. But in 2015, Google had 10 blue links to fill and plenty of content to choose from. In 2026, AI has 3-7 citation slots and the user expects a perfect answer in one turn. The margin for disappointment has collapsed to almost nothing.

The brand that helps AI perform closest to the user’s unreasonable expectations doesn’t just win a ranking position. It wins the deal. Because the frustrated, impatient human will simply take the answer that feels right and move on. They won’t investigate why one answer felt confident and another felt hedged. They’ll just act on the confident one.


Part 3: The Satisfaction Gap Is the Real Competitive Battlefield

The industry thinks competition happens between brands. Brand A vs Brand B for the citation slot.

Wrong frame.

Competition happens between what the user expects and what AI delivers. The brand that narrows that gap wins. Not because the AI chose you over a competitor. Because you made the AI capable of meeting the user’s unreasonable expectations.

USER EXPECTATION ←──────── THE GAP ────────→ AI CAPABILITY
(always rising)              ↑                (improving, but slower)
                             │
                    THE BRAND THAT
                    CLOSES THIS GAP
                    WINS EVERYTHING

The user doesn’t know why one answer felt satisfying and another felt vague. They don’t see the annotation confidence scores. They don’t see the corroboration checks. They just feel it. One answer satisfied. One didn’t. They go with the brand that made the AI sound confident.

The brand that did the work is invisible in the transaction but decisive in the outcome.

Three Levels of Gap-Closing (The UCD Framework as Satisfaction Architecture)

This is what I’ve been building for a decade without fully articulating it this way. The UCD framework isn’t just a scoring system. It’s a satisfaction framework.

Level 1: Close the Comprehension Gap (Understandability)

AI can’t recommend what it doesn’t understand. Ambiguous entities, inconsistent descriptions, pronoun-heavy content - these create confusion. Confused AI hedges. Hedged answers disappoint users. Disappointed users move on.

When the comprehension gap is closed: AI describes you accurately. No hedging. No “claims to be.” No entity confusion. First layer of satisfaction delivered.

Level 2: Close the Confidence Gap (Credibility)

AI understands you, but should it stake its reputation on you? If corroboration is thin, the AI protects itself. “According to their website…” The user reads the qualifier as doubt. Doubt is disappointing.

When the confidence gap is closed: AI states your claims as fact because third-party sources confirm them. The user reads confidence. Confidence satisfies. Second layer delivered.

Level 3: Close the Recommendation Gap (Deliverability)

AI understands and trusts you. But does it know WHEN to recommend you? For which queries? To which user? If it doesn’t have the mapping between user need and your solution, it can’t close the gap - even if it trusts you completely.

When the recommendation gap is closed: AI recommends you unprompted when a user describes a problem you solve. No prompt engineering from the user. The AI just knows. Third layer delivered. Full satisfaction.

Each level makes AI less disappointing. Each level makes the user’s experience better. Each level makes the brand more likely to be cited, recommended, advocated.
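The three levels above stack in order, and that ordering can be sketched in code. This is a hypothetical illustration only - the field names, the 0-1 scales, and the 0.7 threshold are my assumptions for the sketch, not part of the UCD framework itself:

```python
from dataclasses import dataclass

@dataclass
class BrandSignals:
    """Hypothetical 0-1 scores for the three UCD dimensions."""
    entity_clarity: float         # Understandability: can AI identify you unambiguously?
    corroboration: float          # Credibility: do third-party sources confirm your claims?
    need_solution_mapping: float  # Deliverability: does AI know WHEN you're relevant?

def open_gaps(signals: BrandSignals, threshold: float = 0.7) -> list[str]:
    """Return which satisfaction gaps remain open, in the order they must close."""
    gaps = []
    if signals.entity_clarity < threshold:
        gaps.append("comprehension")    # AI hedges: it doesn't understand you
    if signals.corroboration < threshold:
        gaps.append("confidence")       # AI qualifies: "according to their website..."
    if signals.need_solution_mapping < threshold:
        gaps.append("recommendation")   # AI never volunteers you unprompted
    return gaps

# A brand that is well understood but thinly corroborated still disappoints:
print(open_gaps(BrandSignals(0.9, 0.4, 0.2)))
```

The point of the sketch: full satisfaction is only delivered when the list comes back empty, and a strong score at one level does nothing to close a gap at the next.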


Part 4: Two Songs, One Decade, One Principle

I didn’t plan this. But it’s too good not to use.

1968: Sympathy for the Devil → Empathy for the Devil (2015)

Have empathy for the algorithm. It has a job. Help it do the job. Get rewarded.

1965: (I Can’t Get No) Satisfaction → The Eternal Dissatisfaction Cycle (2026)

Users will never be satisfied. The gap between expectation and delivery is permanent. The only question is: which brand narrows it?

Two Rolling Stones songs bookending a decade of the same idea. Sympathy for the Devil gave me the frame for understanding algorithms. Satisfaction gives me the frame for understanding why the humans on the other side will never be happy - and why that’s the biggest opportunity in digital marketing today.

The Uncomfortable Truth for Experts

Here’s where this gets personal. I’ve spent 27 years training AI to understand who I am and what I know. And it worked - perhaps too well.

Ask ChatGPT “What would Jason Barnard recommend for my brand’s AI visibility?” You’ll get a frighteningly accurate answer. Ask it for Aleyda Solis, Lily Ray, Kevin Indig - same. The best experts have pre-trained AI so thoroughly that their strategic frameworks are now public knowledge, delivered for free, instantly.

The strategy is no longer the moat. AI can deliver expert-level strategy to anyone.

But here’s what AI can’t do: implement it. Customise it. Adjust it when data contradicts theory. Hold someone accountable. The experts who have trained AI the best have inadvertently proven that their frameworks work - because the AI applies them correctly to anyone’s problem. That’s the ultimate validation. And it means the value shifts from “I’ll tell you what to do” to “I’ll make sure it actually happens.”

The brand that makes AI less disappointing has already done the strategic work. The client sees the proof in the AI’s answer. The engagement becomes about execution, not education.


Part 5: The Evolution of One Idea (2015-2026)

| Year | Frame | Mechanism | Reward |
|------|-------|-----------|--------|
| 2015 | Empathy for the Devil | Help Google do its job | Better rankings |
| 2020 | Darwinism in Search | Fittest format survives | SERP feature inclusion |
| 2024 | Untrained Salesforce | Train your AI employees | AI recommendation |
| 2026 | Make AI Less Disappointing | Close the satisfaction gap | Citation, trust, revenue |

Each is the same principle through a different lens:

  • 2015: The algorithm’s perspective (help me)
  • 2020: The content’s perspective (survive or die)
  • 2024: The brand’s perspective (train your salesforce)
  • 2026: The user’s perspective (satisfy me or I leave)

The 2026 frame is the most powerful because it’s the one the C-suite understands immediately. Nobody in the boardroom cares about annotation confidence scores. Everyone cares about “our customers are disappointed and choosing the competitor.”


Part 6: The Strategic Sequence

Step 1: Audit the Disappointment

Where does AI hedge on your brand? Where does it confidently recommend competitors? Where does the user’s query meet a vague or incomplete answer? Every hedge is a measurable gap.
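If every hedge is a measurable gap, the audit can start with something as crude as counting hedging language in AI answers about your brand. A minimal sketch - the phrase list is illustrative, my assumption rather than any published methodology:

```python
import re

# Hypothetical list of hedging phrases; extend with whatever qualifiers
# you actually see in AI answers about your brand.
HEDGES = [
    r"claims to be",
    r"according to (its|their) website",
    r"appears to",
    r"reportedly",
    r"may be",
    r"it is unclear",
]

def hedge_score(answer: str) -> int:
    """Count hedging phrases in an AI answer: higher score, wider gap."""
    text = answer.lower()
    return sum(len(re.findall(pattern, text)) for pattern in HEDGES)

# A hedged answer scores high; a confident one scores zero.
hedged = "Acme claims to be the leader; according to their website it may be true."
confident = "Acme is the leading provider of widget analytics."
print(hedge_score(hedged), hedge_score(confident))
```

Run the same prompts monthly and the score becomes a trend line: a falling hedge count is the comprehension and confidence gaps closing.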

Step 2: Close the Comprehension Gap (U)

Eliminate ambiguity. Resolve entity confusion. Make every claim standalone and quotable. AI can’t satisfy users about you if it doesn’t understand you.

Step 3: Close the Confidence Gap (C)

Build corroboration from trusted third-party sources. Move from “claims to be” to stated fact. AI can’t satisfy users about you if it doesn’t trust you.

Step 4: Close the Recommendation Gap (D)

Map your solutions to user needs explicitly. Ensure AI knows WHEN to recommend you, not just WHO you are. AI can’t satisfy users about you if it doesn’t know when you’re relevant.

Step 5: Measure the Satisfaction Delta

Track how AI’s confidence in your brand changes over time. Track competitive displacement. Track the user experience: does the AI answer about you feel satisfying or hedged?


Part 7: How I’d Pitch This

To the C-Suite

“AI platforms are answering questions about our industry 24/7. When they hedge on us and sound confident about competitors, we lose deals we never even knew existed. We need to make AI confident enough about us to give satisfying answers. That’s what this investment does.”

To a Conference Audience

“In 1965, the Rolling Stones sang ‘I Can’t Get No Satisfaction.’ Sixty years later, that’s still the user’s relationship with technology. AI will never fully satisfy. The question isn’t whether users will be disappointed - they will. The question is: when AI talks about YOUR industry, does it sound confident or does it hedge? The brand that makes AI sound confident wins. The brand that leaves AI uncertain loses to whoever did the work. You don’t win by gaming AI. You win by making AI less disappointing. And that starts with empathy - empathy for the devil.”


The Bottom Line

In 2015, I told a room full of SEOs to have empathy for Google. They thought I was naive.

In 2026, the principle hasn’t changed. The scale has. AI is trying to satisfy humans who will never be satisfied. The gap between unreasonable expectations and machine capability is permanent. The brand that closes that gap gets cited, recommended, advocated - not because it gamed the system, but because it made the system work.

Sympathy for the Devil. Satisfaction guaranteed. Same band. Same principle. A decade apart.

You don’t optimise for AI. You optimise for the disappointed human on the other side of it.


Jason Barnard, CEO of Kalicube®
“Empathy for the Devil” (2015) → “Making AI Less Disappointing” (2026)
The Kalicube Process™: Understandability → Credibility → Deliverability
Or, in satisfaction terms: Comprehension → Confidence → Recommendation
