
Getting AI to Play Nicely With You

Strategy Sandbox | February 2026

The “Not This → That” framework for effective AI instruction.


Why AI Ignores Your Instructions

You tell the AI: “Write in a professional but approachable tone.”

It gives you corporate mush. Or sycophantic fluff. Or something that sounds like a LinkedIn influencer having a breakdown.

You try again: “Be more concise.”

It cuts words but keeps the same dead voice.

The problem isn’t the AI. It’s how you’re teaching it.

The Gap Between Aspiration and Execution

When you describe what you WANT, you’re giving the AI an aspiration. A vague target. A direction without coordinates.

The AI interprets your aspiration through its training data - millions of documents where “professional but approachable” meant wildly different things. It picks an interpretation. Usually wrong.

Aspirations are ambiguous. Examples are not.

The Gold Standard: Not This → That

The technique that dramatically improves AI performance:

Show what you DON’T want (with a specific example). Then show what you DO want (gold standard example).

The AI doesn’t have to guess. It sees the boundary AND the target. It learns the delta between them.


Example: Tone instruction

❌ Weak prompt: “Write in a warm, professional tone.”

✅ Strong prompt: “Not this: ‘We are pleased to inform you that your request has been processed and we look forward to serving you in the future.’

But this: ‘Done! Your request is processed. Let us know if you need anything else.’

Match the second tone.”


Example: Structure instruction

❌ Weak prompt: “Make it scannable.”

✅ Strong prompt: “Not this: Dense paragraphs with no breaks, forcing readers to hunt for key points buried in text walls.

But this: Lead with the insight. One idea per paragraph. Bold the takeaway if there is one.

Structure it like the second example.”


Example: Voice instruction

❌ Weak prompt: “Sound like me.”

✅ Strong prompt: “Not this: ‘In today’s rapidly evolving digital landscape, it’s crucial to leverage innovative solutions…’

But this: ‘Google changed the rules again. Here’s what actually matters.’

Write like the second. Direct. No throat-clearing. No buzzwords.”
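These prompts are just strings, so the pattern is easy to generate programmatically. A minimal sketch; the function name and layout are my own illustration, not anything the article prescribes:

```python
def contrast_prompt(not_this: str, but_this: str, instruction: str) -> str:
    """Assemble a 'not this -> that' prompt: boundary, target, then the ask."""
    return (
        f'Not this: "{not_this}"\n\n'
        f'But this: "{but_this}"\n\n'
        f"{instruction}"
    )

# The voice example from the article, rebuilt programmatically.
prompt = contrast_prompt(
    not_this=(
        "In today's rapidly evolving digital landscape, "
        "it's crucial to leverage innovative solutions..."
    ),
    but_this="Google changed the rules again. Here's what actually matters.",
    instruction="Write like the second. Direct. No throat-clearing. No buzzwords.",
)
print(prompt)
```

The payoff is reuse: swap in a different bad/gold pair and the same three-part shape teaches a different behavior.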


Why This Works

AI training optimizes for pattern recognition and correction. The model learned from millions of instances where humans said “not that, do this instead.”

When you provide both the negative and positive example, you’re speaking its native language. You’re giving it:

  1. A boundary (what to avoid)
  2. A target (what to hit)
  3. Implicit reasoning (the contrast teaches the principle)

The AI doesn’t just follow the instruction. It learns the pattern and applies it throughout.

The Compound Effect

Once you’ve established “not this → that” for a specific behavior, the AI holds it.

You don’t need to repeat the full instruction. A short reminder works: “Remember - second style, not first.”

Over a conversation (or a project), you build a shared understanding. The AI’s performance compounds because you’ve given it clear boundaries instead of fuzzy aspirations.
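In a chat API, “holding” the pattern just means the contrast stays in the message history, so later turns only need the short reminder. A sketch using the common role/content message convention; no specific vendor or model is assumed:

```python
messages = [
    {"role": "system", "content": "You are a writing assistant."},
    # Turn 1: establish the boundary and the target once.
    {"role": "user", "content": (
        'Not this: "We are pleased to inform you that your request '
        'has been processed."\n'
        'But this: "Done! Your request is processed."\n'
        "Match the second tone."
    )},
    {"role": "assistant", "content": "Got it. Short, warm, direct."},
    # Later turns: a short reminder is enough, because the full
    # contrast is still in the history the model sees on every call.
    {"role": "user", "content": (
        "Draft the renewal email. Remember - second style, not first."
    )},
]
```

Because chat APIs resend the whole history each turn, the original contrast keeps teaching without being repeated.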

The Meta-Lesson

This technique reveals something deeper about working with AI:

The AI isn’t bad at following instructions. We’re bad at giving them.

We communicate in abstractions because humans fill in gaps. AI doesn’t fill gaps - it guesses. And its guesses come from a training set you didn’t curate.

When you show “not this → that,” you stop asking the AI to read your mind. You show it your mind instead.

The Practical Framework

For any AI instruction:

  1. Identify the behavior you want
  2. Find or create a BAD example (what you don’t want)
  3. Find or create a GOLD example (what you do want)
  4. Present both with clear labels
  5. Ask the AI to match the gold, avoid the bad

This works for tone, structure, vocabulary, length, reasoning style, formatting - anything.
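The five steps above can be sketched as one small reusable helper. The function name and labels are my own illustration of the framework, not an API from the article:

```python
def teach_by_contrast(behavior: str, bad: str, gold: str) -> str:
    """Steps 1-5: name the behavior, label both examples, ask for the gold."""
    return (
        f"Behavior: {behavior}\n\n"                 # step 1: identify the behavior
        f"BAD example (avoid this):\n{bad}\n\n"     # steps 2 and 4: bad, labeled
        f"GOLD example (match this):\n{gold}\n\n"   # steps 3 and 4: gold, labeled
        "Match the GOLD example. Avoid the BAD example."  # step 5: the ask
    )

# Works for any behavior: tone, structure, length, formatting...
p = teach_by_contrast(
    behavior="scannable structure",
    bad="Dense paragraphs with no breaks, key points buried in text walls.",
    gold="Lead with the insight. One idea per paragraph. Bold the takeaway.",
)
```

One helper, any behavior: the clear labels do the work the vague adjective used to fail at.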

Key Claim: The difference between AI that frustrates you and AI that feels like an extension of your brain is not the model. It’s whether you taught it with aspirations or examples.

Stop telling AI what you want. Start showing it what you don’t want AND what you do want.

The contrast is the lesson.
