When the Playbook Stops Being Enough
AI is coming for routine expertise. Here's what it can't replace.
Every playbook works — until it doesn’t.
In 2002, researchers studying expertise asked a question that now feels urgent for sales leaders: what if producing competent professionals wasn’t enough?
They had spent decades celebrating efficiency — speed, accuracy, the ability to move through complex problems with increasing automaticity. But a quieter strand of scholarship was arguing for a second dimension. They called it adaptive expertise. Not just the ability to execute when things look familiar — but to innovate when they don’t.
That distinction has lived in academic literature for twenty years. It deserves a seat at the sales leadership table now.
Most sales organizations are built — deliberately or not — to produce routine experts. People who can execute structured discovery, maintain CRM hygiene, map power and pain consistently, and forecast with discipline. These are real capabilities. They create reliability at scale.
But routine expertise is, at its core, pattern recognition.
And pattern recognition is precisely what large language models do extraordinarily well.
AI will draft your follow-up emails. It will suggest objection handling, summarize call notes, generate account research, and produce discovery questions calibrated to the buyer’s industry — faster than your best rep and improving weekly. If your sales advantage rests primarily on faster email writing, cleaner qualification checklists, or tighter talk tracks, you are competing in a space machines are actively absorbing.
This is not a crisis. It’s a clarification. It forces the question that efficiency-obsessed enablement programs tend to avoid: what does a human salesperson do that AI cannot?
The adaptive expertise framework offers an answer: novel problem solving. The capacity to recognize when the pattern has broken — and think your way into new territory rather than applying the old map to unfamiliar terrain.
Consider what the last several years actually looked like. Funding models shifting mid-cycle. Procurement rules changing without notice. New stakeholders entering conversations with entirely different metrics than the ones you built your deck around. The rep who had mastered every CRM stage was suddenly in a room where the script didn’t fit and the objection wasn’t familiar.
Routine expertise asks, What stage is this?
Adaptive expertise asks, What kind of problem is this?
That’s a philosophical difference before it’s a tactical one.
The academic framework maps this as two dimensions in tension — efficiency and innovation. Too much efficiency without innovation produces the rigid routine expert: highly reliable in stable markets, genuinely brittle in disruption. Too much innovation without efficiency produces the perpetual improviser — charismatic, unreliable, a forecasting nightmare. The goal is both. Structure in execution, creativity in framing. Discipline in process, flexibility in strategy.
Most enablement programs are designed to answer one question: how do we reduce errors? Adaptive expertise asks a different one — how do we improve our response to problems we’ve never seen before? These are not the same question, and optimizing for one doesn’t develop the other. Objection scripts and stage criteria produce routine experts. Necessary. Incomplete.
Developing adaptive capacity requires something more uncomfortable: coaching conversations that resist resolution, scenario work where the correct answer is genuinely unclear, post-mortems that ask not just where the process broke down — but where the process stopped being sufficient. That last question is one most sales organizations never ask. It implies the framework has limits, and admitting that makes leadership nervous.
Here is the most interesting use of AI in sales: not automation, but augmentation. Routine expertise plus AI becomes table stakes. Adaptive expertise plus AI becomes leverage. AI accelerates preparation. Human judgment reframes the problem. The sales leader’s job isn’t to protect reps from AI tools — it’s to train them to think beyond what AI suggests. To treat the output as a starting point for synthesis, not a substitute for judgment.
There’s a diagnostic question worth sitting with: if your top performers left tomorrow, would their success be replicable through your playbooks alone?
If yes, you’ve built strong routine systems.
If no — the follow-up matters. Is it charisma? Or is it adaptive judgment — the ability to recognize when the playbook has stopped being sufficient and build something new in real time? High-performing reps often excel not because they follow the framework better, but because they know when to leave it. That departure is rarely documented. Rarely taught. Rarely measured.
It may be the most valuable capability in your organization.
Efficiency builds reliability. Innovation builds relevance.
For a long time, reliability was enough. Stable markets reward disciplined execution. Predictable buyers reward clean frameworks. In that environment, routine expertise scales.
But markets move. Stakeholders multiply. Incentives shift. AI absorbs more of the predictable work. And when predictability shrinks, rigidity gets exposed.
The rigid routine expert doesn’t fail because they lack skill. They fail because their skill was optimized for yesterday’s terrain.
Adaptive expertise isn’t a rejection of process. It’s process plus discernment — knowing the framework deeply enough to recognize its limits, and having the willingness to stand in ambiguity without reaching for the nearest script.
AI will continue to compress the value of pattern recognition. That compression is not temporary.
What will not compress is judgment. The ability to sense when the room has changed. The instinct to reframe the problem. The courage to depart from the playbook when the playbook no longer fits.
Sales organizations that treat efficiency as the finish line will look impressive — until the environment moves.
The ones that treat efficiency as the foundation and adaptive capacity as the aim will look messier. Harder to quantify. Slower on certain dashboards.
They’ll also be the ones still standing when the patterns break.