The Colleague Fallacy: Why You’re Talking to Your AI Assistant Wrong
Your AI assistant sounds like a colleague. It remembers your name. It references things you discussed last week. It asks follow-up questions that feel perceptive, even thoughtful. And because it sounds like a colleague, you talk to it the way you’d talk to a colleague.
AI sounds like it remembers, but it retrieves
When you tell a human colleague “let’s pick up where we left off on the Henderson project,” something remarkable happens inside their brain. The name Henderson activates a network of associated memories simultaneously: the client’s face, the budget discussion, the awkward moment in the last meeting, the deadline that got moved, the email chain with legal. All of it arrives at once, without effort, without being asked. Cognitive scientists call this associative memory. It is parallel, contextual, and effortless.
Your AI assistant does something that looks identical from the outside and works completely differently on the inside. When you say “let’s pick up where we left off,” the system runs a keyword search across stored conversation fragments. It finds text that matches. It assembles a plausible response from those fragments. The result sounds coherent (often impressively so), but the retrieval mechanism is serial, not parallel. It searches for pieces. It does not remember a whole.
The distinction matters because it changes what you should say.
Associative memory fills gaps; retrieval memory needs them filled
A colleague who knows the Henderson project can handle ambiguity. You can say “the thing Sarah mentioned about the pricing” and your colleague connects it to the right conversation, the right Sarah, the right pricing discussion, without you specifying any of that. The shared context does the work.
An AI assistant presented with the same sentence has to search for every piece of that context individually. Which Sarah? Which pricing discussion? Which conversation? If your stored history contains multiple Sarahs, multiple pricing discussions, and dozens of conversations, the retrieval system has to guess. Sometimes it guesses well. Often it guesses wrong and presents the wrong answer with perfect confidence (that is a separate problem, which I’ve written about as The Confidence Fallacy).
The practical consequence is simple. With a human colleague, vague input works because shared context resolves the ambiguity. With an AI assistant, vague input fails because there is no shared context to resolve anything. The system has fragments. It needs you to tell it which fragments matter.
The lazy input cycle makes everything worse over time
This is where the fallacy becomes actively destructive rather than merely inconvenient.
When you first start using an AI assistant, you’re careful. You provide context. You explain what you need. You specify constraints. The results are good, sometimes surprisingly good, and you start to trust the system. As trust increases, your inputs get lazier. You type less. You assume more. You stop providing the context that made the early results good in the first place.
The AI hasn’t changed. Your inputs have. And because the system responds to whatever you give it (with full confidence, regardless of whether it understood correctly), you don’t connect the drop in quality to anything you’re doing. You just notice the results getting “worse” and blame the tool. I call this the Compounding Error Cycle: each lazy input produces a slightly wrong output, which you then build on with another lazy input, and the errors compound on both sides of the exchange.
I’ve watched this pattern in hundreds of conversations about AI assistant frustration. The complaint is always the same: “It was brilliant at first and now it’s useless.” The AI didn’t degrade. The user’s inputs did.
How a colleague actually thinks versus how your AI actually retrieves
I want to be precise about the mechanism because precision is what fixes the problem.
When your human colleague hears “Henderson project,” their brain activates a semantic network. Every node connected to Henderson lights up simultaneously (the people involved, the timeline, the deliverables, the politics, the unspoken tensions). This is parallel activation. No search query is required. The connections exist as permanent neural pathways built through shared experience.
When your AI assistant processes “Henderson project,” it runs what is essentially a keyword search. It looks for stored text containing “Henderson” and “project.” It retrieves fragments that match. It ranks them by relevance (using statistical likelihood, not understanding). Then it generates a response that synthesises those fragments into something that reads like informed recall.
The output can be indistinguishable from what a colleague would say. The process that produced it is fundamentally different. And the difference determines what inputs work.
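If you want to see that difference in miniature, here is a toy sketch of the retrieval step. The stored fragments, the word-overlap scoring, and the function names are all invented for illustration; a real assistant ranks with statistical relevance models rather than raw keyword counts, but the shape of the process is the same: score fragments against the query, keep the best few, and generate from those pieces.

```python
# Toy illustration of retrieval-style "memory": the system does not recall
# the Henderson project as a whole, it scores stored text fragments against
# the words in your query and returns the best matches.
# (Fragments and scoring are invented for illustration; real assistants use
# statistical relevance models, not raw word overlap.)

stored_fragments = [
    "Henderson project kickoff: budget approved at 120k, Sarah Chen leading pricing",
    "Henderson Corp renewal: legal flagged the indemnity clause in the email chain",
    "Team offsite notes: Sarah Patel suggested revisiting the pricing tiers",
    "Mercer project timeline moved to Q3 after the client meeting",
]

def score(query: str, fragment: str) -> int:
    """Count how many query words appear in the fragment (crude keyword overlap)."""
    return len(set(query.lower().split()) & set(fragment.lower().split()))

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top-scoring fragments; the model only ever sees these pieces."""
    ranked = sorted(stored_fragments, key=lambda f: score(query, f), reverse=True)
    return ranked[:top_k]

print(retrieve("let's pick up where we left off on the Henderson project"))
# Only fragments containing matching words surface; nothing "lights up" by association.
```

Nothing in that sketch knows the Henderson project exists as a thing. It only knows which stored sentences share words with the sentence you just typed.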
Three input habits that fix the problem immediately
I’ve been working with algorithms since 1998 (first as a musician, then as a children’s content creator, then as a digital marketer). In 2015, at SEO Camp in Metz, France, I coined the phrase “Empathy for the Devil” to describe the principle that has guided everything I’ve built since: understand how the system actually works, help it do its job, and it will help you do yours. The Colleague Fallacy is what happens when you skip the understanding part.
Here are three habits that close the gap between what your AI can do and what your inputs allow it to do.
Disambiguate names and references. Every time you mention a person, a project, or a concept that could refer to more than one thing, add one identifying detail. Not “Sarah’s email” but “Sarah Chen’s email about the Q3 pricing revision.” The extra five words eliminate the retrieval ambiguity that produces wrong answers.
Provide the context you assume is shared. Before asking a question that builds on previous work, state the relevant context in one sentence. Not “can you continue the analysis” but “we analysed Henderson Corp’s renewal pricing last Tuesday and identified three risk factors; can you continue from there.” The AI now knows which analysis, which company, and which findings to retrieve.
State your intent, not just your question. AI retrieval works better when it knows what you’re trying to accomplish, not just what words to search for. Not “what did we say about the timeline” but “I need to update the client on our revised delivery dates for Henderson; what timeline did we agree on.” The intent shapes the retrieval. The right fragments surface because you told the system what “right” means in this context.
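To make the disambiguation point concrete, here is a small invented example in the same spirit as the earlier sketch: two stored “Sarah” fragments and a crude overlap score standing in for real relevance ranking (not how any particular assistant is implemented).

```python
# Toy illustration of why disambiguation matters: a vague query ties between
# two unrelated "Sarah" fragments, while a specific query separates them.
# (Fragments and scoring are invented for illustration.)

fragments = [
    "Sarah Chen's email about the Q3 pricing revision for Henderson Corp",
    "Sarah Patel's email about the pricing survey for the Mercer launch",
]

def overlap(query: str, fragment: str) -> int:
    """Crude relevance score: shared lowercase words between query and fragment."""
    return len(set(query.lower().split()) & set(fragment.lower().split()))

vague = "sarah's email about the pricing"
specific = "sarah chen's email about the q3 pricing revision"

for q in (vague, specific):
    print(q, "->", [overlap(q, f) for f in fragments])
# The vague query scores both fragments identically, so the system has to guess.
# The specific query clearly prefers the fragment you actually meant.
```

The vague query gives the system nothing to break the tie with. The specific one breaks the tie for it, which is exactly what those extra five words are doing in your prompt.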
The same fallacy applies to how AI treats your brand
I’ve spent the last decade helping brands optimise their representation in AI systems. The pattern is identical at scale.
Businesses assume that AI “just knows” their brand has evolved. They update their website, change their positioning, launch new products, and expect AI assistants to reflect the changes automatically. The AI doesn’t work that way. It retrieves whatever fragments it has stored from your digital footprint, and if your old positioning is more prominent than your new positioning (which it almost always is, because older content has had more time to be indexed and corroborated), the AI confidently presents yesterday’s story to today’s prospects.
That is The Colleague Fallacy applied to your entire business. You assume shared context (“of course AI knows we pivoted to enterprise last year”). The AI has fragments. It needs you to tell it which fragments matter, by making your new positioning clearer, more consistent, and more corroborated than the old one.
I’ve written extensively about this as Knowledge Rot, and about the paired output problem as The Confidence Fallacy. Together, these three concepts explain why most people’s experience of AI degrades over time, whether they’re using a personal assistant or watching AI represent their brand to millions of potential customers.
The input you provide determines the output you receive
I started talking about Empathy for the Devil eleven years ago because I’d spent the previous fifteen years learning (the hard way, through a children’s entertainment website that received 5 million visits a month from 182 countries) that algorithms are not adversaries. They are systems with specific constraints, specific retrieval mechanisms, and specific failure modes. When you understand those constraints, you can work within them. When you don’t, you fight the system and lose.
The Colleague Fallacy is the 2026 version of that same principle applied to the tool most professionals now use every day. Your AI assistant is not your colleague. It does not share your context. It does not remember in the way you remember. It retrieves fragments and assembles plausible responses from whatever you give it.
Give it specificity, disambiguation, and stated intent. The quality difference is immediate, dramatic, and permanent. Not because the AI got better, but because you stopped asking it to do something it was never designed to do.
This article is part of a series on how AI assistants actually work, and what that means for both personal productivity and brand representation. The companion piece, “Knowledge Rot: The Silent Killer of Every AI Assistant You’ll Ever Build,” examines what happens when the knowledge base itself degrades. Together, they explain why AI results get worse over time from both sides: your inputs (The Colleague Fallacy) and the system’s knowledge (Knowledge Rot).