The Forecast Is Not a Mood Ring
Why “highly likely” is the most dangerous phrase on the call
It’s the last Friday of the quarter. The forecast call starts at 8 a.m. A rep walks the team through a deal in commit — it has been in commit for three weeks. The champion is the assistant superintendent of curriculum. The demo landed well. The pricing conversation landed well. The district, the rep says, wants this done before summer. Procurement is “working on it.”
The deal doesn’t close that week. It doesn’t close the next week. It doesn’t close that quarter. By the time it finally lands, two quarters have passed, the contract value has changed twice, and a competitor has appeared on the evaluation.
The rep wasn’t wrong about the deal being real. The deal was real. The mistake, in hindsight, wasn’t about belief. It was that nobody on that call asked whether the evidence in hand was the evidence we’d expect to see if a deal in commit for the quarter were actually closing in the quarter.
This is the forecast problem in miniature, and almost no one names what actually went wrong. The rep wasn’t being dishonest. The manager wasn’t being sloppy. The forecast collapsed because the team had confused two ideas that sound like the same thing and aren’t: probability and likelihood.
Probability Points Forward. Likelihood Points Backward.
In everyday sales conversation, we use them as synonyms. “What’s the probability this closes?” “How likely is it?” “What are the odds?” Same question, three phrasings. But the distinction is load-bearing, and most pipeline reviews lose the quarter because they never make it.
Probability points forward. Given what we believe about this deal, what outcome should we expect? Likelihood points backward. Given the outcome we’re claiming, how well does the evidence we actually have support that belief? When a rep says “this is a 90% deal,” they’re making a probability claim. The manager’s job is to test the likelihood — to ask whether the evidence in hand is the evidence we’d expect to see if a 90% deal were really what we had.
That second question is the one almost no one asks. It’s also the one that would catch most of the deals that slip.
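If you want to see why the second question matters, a toy calculation makes it concrete. Every number below is an invented illustration, not data from any deal. The question it asks: how well does the evidence actually in hand distinguish a deal that is closing from one that will slip?

```python
# Toy likelihood test. All probabilities here are illustrative guesses.
# Two hypotheses: the deal closes this quarter, or it slips.
# For each signal: P(we'd see it | closing), P(we'd see it | slipping).
evidence_model = {
    "champion_enthusiastic":    (0.95, 0.80),  # champions are enthusiastic either way
    "economic_buyer_engaged":   (0.90, 0.25),  # rare on deals that slip
    "board_date_on_calendar":   (0.85, 0.15),
    "procurement_packet_filed": (0.80, 0.20),
}

observed = ["champion_enthusiastic"]  # all the rep actually has in hand

like_close = like_slip = 1.0
for signal in observed:
    p_close, p_slip = evidence_model[signal]
    like_close *= p_close
    like_slip *= p_slip

# Start from a neutral 50/50 prior and let the evidence do the work.
prior_odds = 1.0
posterior_odds = prior_odds * (like_close / like_slip)
posterior = posterior_odds / (1 + posterior_odds)
print(f"P(close | observed evidence) = {posterior:.2f}")
```

An enthusiastic champion is nearly as common on deals that slip as on deals that close, so it barely moves a coin flip — to roughly 0.54 here, nowhere near 0.90. The signals that would move it are the ones the rep doesn’t have: the engaged budget owner, the board date, the procurement filing.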
What a 90% Deal Should Look Like
Consider what should be true if a deal is genuinely high-confidence in K12. The economic buyer — not the champion, not the influencer, the budget owner — has confirmed funding. The approval path is documented at every step: cabinet review, superintendent sign-off, board agenda placement, vendor packet through procurement. A specific date sits beside each step, and those dates fit inside the forecast window. The reason to act now is something stronger than “they want to move fast.” It’s a fiscal year boundary, an RFP deadline, a federal funding window, a board cycle that won’t repeat for another month. Risks are named in the deal record, not hidden behind the rep’s tone of voice. A 90% deal should have those conditions in place. If it doesn’t, the deal may still close, but the evidence does not support the probability claim being made.
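That checklist can be run as a simple gap report. The deal record and its field names below are hypothetical — a sketch, not any real CRM schema:

```python
from dataclasses import dataclass

# Hypothetical deal record. Field names are illustrative, not a real CRM schema.
@dataclass
class Deal:
    funding_confirmed_by_budget_owner: bool = False
    approval_steps_dated: bool = False      # cabinet, sign-off, board, procurement
    dates_inside_forecast_window: bool = False
    compelling_event: str = ""              # e.g. "FY boundary", "RFP deadline"
    risks_documented: bool = False

def commit_gaps(deal: Deal) -> list[str]:
    """List the evidence a commit-category claim is still missing."""
    gaps = []
    if not deal.funding_confirmed_by_budget_owner:
        gaps.append("economic buyer has not confirmed funding")
    if not deal.approval_steps_dated:
        gaps.append("approval path has no dates attached")
    if not deal.dates_inside_forecast_window:
        gaps.append("documented dates fall outside the forecast window")
    if not deal.compelling_event:
        gaps.append("no fiscal or board event makes delay costly")
    if not deal.risks_documented:
        gaps.append("risks live in the rep's tone, not the deal record")
    return gaps

deal = Deal(funding_confirmed_by_budget_owner=True, compelling_event="FY boundary")
for gap in commit_gaps(deal):
    print("missing:", gap)
```

The output isn’t a verdict on the deal. It’s the list of things that must become true before the commit label is an evidence-backed claim rather than a feeling.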
Positive Signals Are Cheap. Negative Ones Are Honest.
Sales teams are systematically generous with positive signals and stingy with negative ones, and K12 is where the asymmetry does the most damage. A curriculum director who says “we love this” feels like momentum, but curriculum doesn’t sign contracts. A request for pricing feels like buying intent, but in district selling, pricing requests are as often comparison shopping for an incumbent renewal or evidence-gathering for a budget fight that’s already been lost. Implementation questions feel like urgency, but they can just as easily be the buyer trying to understand the burden so they can build a case against you. A timeline of “before summer” feels concrete, but unless it maps to a specific board meeting or fiscal event, it’s a preference with no operational force behind it. None of these signals are noise. They’re worth noticing. The mistake is treating them as proof of what they only suggest.
Stage Isn’t Evidence
This is where stage-based forecasting falls apart. Most CRMs assign default probabilities by stage — discovery 20%, proposal 50%, negotiation 80%, verbal 90% — and most teams treat those numbers as if they meant something. They don’t. A deal doesn’t become 80% likely because it entered negotiation. It becomes more probable only if buyer behavior matches what you’d actually expect to see at that stage. The CRM moved to “proposal” because the rep sent one. But did the buyer ask for the proposal because they intend to buy, or because procurement requires three quotes? Did negotiation start because legal is finalizing terms, or because the district is using your contract as leverage against the incumbent? The stage can be accurate while the probability is wildly wrong. Stage tells you what activity has happened. Probability requires evidence that the activity meant something.
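The gap between stage and evidence is easy to sketch. The stage defaults mirror common CRM settings; the signals and the weighting rule are invented for illustration:

```python
# Toy sketch: stage defaults vs. evidence-weighted confidence.
# Stage percentages mirror common CRM defaults; everything else is illustrative.
STAGE_DEFAULTS = {"discovery": 0.20, "proposal": 0.50, "negotiation": 0.80, "verbal": 0.90}

def evidence_score(signals: dict[str, bool]) -> float:
    """Fraction of expected buyer behaviors actually observed at this stage."""
    return sum(signals.values()) / len(signals) if signals else 0.0

# What we'd expect to see if "proposal" really reflected buying intent:
proposal_signals = {
    "buyer_requested_proposal_to_buy": False,  # or just a third quote for procurement?
    "economic_buyer_reviewed_it": False,
    "next_step_scheduled_with_date": True,
}

stage = "proposal"
default_p = STAGE_DEFAULTS[stage]
adjusted_p = default_p * evidence_score(proposal_signals)
print(f"stage says {default_p:.0%}; evidence supports {adjusted_p:.0%}")
```

The stage is accurate — a proposal really was sent — while the number attached to it is off by a factor of three, because only one of the behaviors that would make the stage mean something has been observed.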
Every Forecast Category Needs a Standard of Proof
A forecast is a claim about future revenue, and like any claim, it can be tested against the evidence available. Every forecast category should carry a standard of proof. Commit shouldn’t mean the rep feels strongly. It should mean current evidence supports a high-confidence close inside the period: economic buyer engaged, buying path visible, timing real, contract moving, risks accounted for. Best case shouldn’t mean “maybe if things break right.” It should mean a plausible path exists but one or more critical conditions remain unproven. Pipeline shouldn’t mean “we had a nice meeting.” It should mean there’s active interest, but the evidence isn’t yet strong enough to support forecast confidence. The labels matter less than the discipline behind them. What matters is whether each category answers the same two questions: what must be true for this deal to close when we say it will, and what evidence do we have that those things are actually true.
A Good Deal Is Not a Forecastable Deal
This distinction also separates two species of deal that get lumped together too often. A good deal — strong fit, real pain, engaged champion, strategic account — is not the same as a forecastable deal. A forecastable deal has something more: observable progress through a known decision path. In K12 especially, the gap between attractive and forecastable is where quarters go to die. A superintendent’s support is genuinely valuable and does not, by itself, get a contract through cabinet. A CIO’s architectural blessing is real and does not move the procurement clock. A CFO’s nod is meaningful and does not constitute budget protection until the funding source is named and confirmed. Every district layer changes the probability. Likelihood asks whether the evidence supports the stage, the close date, and the forecast category. The deal can be real and the timing claim still unsupported.
The One Question That Costs Nothing
The single most useful question in a forecast call costs nothing and reframes everything. When a rep says “this is highly likely,” the wrong response is to argue the percentage. The right response is: what evidence would we expect to see if that were true? Then compare the expected evidence to what’s actually in hand.
The point of that question isn’t to interrogate the rep. It’s to separate confidence from evidence — which is harder than it sounds, because reps and managers both want the deal to be real, and wanting a thing tends to feel like knowing it. Likelihood thinking puts a small, useful distance between belief and evidence. It also makes the manager’s pushback land differently. “I don’t believe you” is a challenge to the rep’s competence. “I don’t yet see the evidence that would support that forecast” is a challenge to the deal. One starts an argument. The other starts an inspection.
Evidence-Seeking Beats Pessimistic
The best forecast cultures aren’t pessimistic. They’re evidence-seeking. Pessimistic teams distrust every deal, which slows momentum and breeds defensiveness. Evidence-seeking teams want to believe and require support for belief. They train reps to distinguish three things that most forecasts collapse into one: possible, probable, and proven. Possible means the deal could close. Probable means current evidence suggests it’s more likely than not. Proven means the conditions required for close are confirmed and the remaining risk is bounded. Most forecast pain comes from reporting possible deals as probable, and probable deals as proven.
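Possible, probable, and proven can even be stated as an explicit rule rather than a vibe. The thresholds below are illustrative, not a standard:

```python
def classify(conditions_confirmed: int, conditions_total: int, path_visible: bool) -> str:
    """Map close-condition evidence to possible/probable/proven (toy thresholds)."""
    if conditions_total > 0 and conditions_confirmed == conditions_total:
        return "proven"    # everything required for close is confirmed
    if path_visible and conditions_confirmed > conditions_total / 2:
        return "probable"  # more likely than not on current evidence
    return "possible"      # could close; the evidence doesn't yet say it will

# A deal with five close conditions and a visible decision path:
print(classify(1, 5, path_visible=True))  # possible
print(classify(4, 5, path_visible=True))  # probable
print(classify(5, 5, path_visible=True))  # proven
```

The exact cutoffs matter less than the fact that the rule is written down: a deal can’t drift from possible to probable without someone confirming a condition.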
The Manager’s Real Job
The manager’s real job, then, is not to assign a lower number to every deal. It’s to improve the quality of the evidence. Sometimes that means moving the forecast down. More often it means helping the rep take the action that would make the original forecast credible — getting the executive meeting, mapping the procurement path, confirming the funding source, identifying who else must be engaged, finding the business event that makes delay actually costly. Forecast management becomes sales leadership the moment the goal stops being prediction and starts being influence.
The next time a rep tells you a deal is highly likely, don’t argue the number. Ask what evidence you’d expect to see if that were true. Then notice what’s missing.
Because the most dangerous deal in any forecast is not the one with bad news. It’s the one with a good story and weak evidence.