February 12, 2026

Forecasting Mistakes

The Hidden Forecasting Mistakes Training Providers Must Avoid

Forecasting in training can feel stable right up until it doesn’t. Confidence builds quickly, then drops just as fast. This article sets out the seven mistakes that most often sit behind that shift.

In the previous article, I looked at why forecasting is uniquely difficult in training businesses — and found that much of it comes down to how cohort delivery behaves in real life.

Capacity is fixed. Demand arrives unevenly. Costs stay committed while outcomes remain uncertain. Operational load builds long before it shows up in the numbers. Marketing performance moves in bursts rather than trends, and the signals it sends are often late or misleading.

In that environment, forecasting becomes highly sensitive to short-term movement. A quieter period can introduce doubt and trigger defensive decisions. A strong run can just as quickly build confidence that travels further than the data really supports.

Before long, the financial picture starts to feel less stable. Profit can seem fragile. Costs that once looked covered begin to feel exposed, and margins tighten sooner than expected. All the while, operational pressure continues to build in the background — gradually, and often unnoticed — narrowing the options available for what comes next.

Under those pressures, forecasts tend to fall short in familiar ways.

In this article, I examine seven of the most common forecasting and planning mistakes that emerge from the realities of running a training business — and why they quietly disrupt even the smartest models.

Mistake #1: Confusing high AOVs with healthy margins

Forecasts often start from a place of optimism. A high AOV looks reassuring on the spreadsheet — it’s often the first number entered into the model, and it creates an immediate sense of comfort. The instinctive conclusion is simple: “If the AOV is high, the margin must be healthy.”

That assumption often leads to a CPA target that’s set higher than the course can genuinely sustain.

But AOV rarely reflects true margin in a training business. Higher-priced programmes usually come with heavier delivery demands: more tutor time, more assessment and admin, more learner support, more operational friction, and often slower, more expensive marketing cycles.

And yet, in many forecasts, the only delivery cost entered into the model is the tutor fee.

When everything else goes unmodelled, the course looks far healthier on the spreadsheet than it is in reality.

From there, the model starts to drift. The CPA guardrails widen. Acquisition becomes more expensive. The cohort fills — but profitability erodes underneath.

This is the pattern of the optimistic forecaster: assuming the course has more margin than reality provides, and allowing CPAs to rise into a range the business can’t actually sustain.

Sound forecasting might begin with inputting the AOV — but the real work is translating that number into the delivery load, operational strain, and acquisition pressure sitting behind the course. Only then can you set a CPA range that protects margin rather than dissolving it.
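The gap between the two views is easy to see with a simple worked example. The sketch below uses hypothetical figures and cost categories — the function name and every number are illustrative, not a prescribed model — but it shows how a CPA ceiling derived from the tutor fee alone compares with one derived from the full delivery load.

```python
# Illustrative sketch: deriving a sustainable CPA from full delivery
# economics rather than from AOV alone. All figures are hypothetical.

def sustainable_cpa(aov, delivery_costs, target_margin_pct):
    """Return the maximum CPA that still preserves the target margin.

    aov               -- average order value per learner
    delivery_costs    -- dict of per-learner delivery costs
    target_margin_pct -- margin to protect, as a fraction (e.g. 0.25)
    """
    total_delivery = sum(delivery_costs.values())
    contribution = aov - total_delivery  # what's left before acquisition
    return contribution - aov * target_margin_pct

# Tutor fee alone makes the course look healthy...
naive = sustainable_cpa(1500, {"tutor": 300}, 0.25)

# ...but the full delivery load tells a different story.
full = sustainable_cpa(
    1500,
    {"tutor": 300, "assessment_admin": 120, "learner_support": 90,
     "materials_logistics": 60},
    0.25,
)

print(f"CPA ceiling, tutor fee only: £{naive:.0f}")     # £825
print(f"CPA ceiling, full delivery load: £{full:.0f}")  # £555
```

On these numbers, the unmodelled delivery costs cut the sustainable CPA by roughly a third — which is exactly the drift described above when guardrails are set from the optimistic version.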

Mistake #2: Setting CPA targets too conservatively

Just as some providers lean toward optimism, others lean toward caution. They anchor themselves to conservative CPA targets — often inherited from previous years, borrowed from other categories, or set without full clarity on the real economics of the course.

The margin might be healthier than expected. The course might genuinely be able to sustain a higher CPA. But because the CPA target feels safe, it becomes fixed — a rule rather than a reflection of the actual economics.

Marketing teams then work within that strict target and do exactly what they’re asked to do: optimise for efficiency, protect the CPA, and hit the number.

But hitting the CPA doesn’t mean hitting the cohort’s potential.

When the allowable CPA is set too low:

  • acquisition volume is unnecessarily constrained

  • viable prospects are left unconverted

  • spend gets pulled back too early

  • and the cohort ends up just above break-even — below its true potential

The frustrating part is that this outcome doesn’t happen because demand wasn’t there. It happens because the model didn’t allow marketing to reach it.

This is the pattern of the cautious forecaster: protecting CPA too tightly, optimising for efficiency over volume, and unintentionally suppressing enrolment on courses that could have filled more strongly — and profitably — with a wider CPA range.

True forecast confidence comes from aligning CPA targets with real margin, not historical comfort.
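To make the cost of over-caution concrete, here is a minimal sketch with hypothetical numbers: a tight CPA cap that fills fewer seats versus a wider range that reaches the marginal delegates. The seat counts, costs, and CPAs are assumptions for illustration only.

```python
# Hypothetical sketch: a tightly capped CPA protects efficiency but can
# leave cohort profit on the table. All figures are illustrative.

def cohort_profit(delegates, aov, delivery_cost_per_learner, cpa, fixed_costs):
    """Cohort-level profit after per-learner delivery, acquisition, and fixed costs."""
    contribution = (aov - delivery_cost_per_learner - cpa) * delegates
    return contribution - fixed_costs

# Conservative target: CPA capped at £300 fills only 14 of 20 seats.
tight = cohort_profit(14, aov=1500, delivery_cost_per_learner=570,
                      cpa=300, fixed_costs=6000)

# Wider range: allowing £450 CPA reaches incremental delegates and fills 20.
wide = cohort_profit(20, aov=1500, delivery_cost_per_learner=570,
                     cpa=450, fixed_costs=6000)

print(f"Tight CPA cohort profit: £{tight:,.0f}")
print(f"Wider CPA cohort profit: £{wide:,.0f}")
```

On these assumed numbers the "less efficient" cohort is the more profitable one — the per-delegate margin narrows, but the extra volume more than covers it once fixed costs are absorbed.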

Mistake #3: Maximising profit but under-scheduling cohorts

Another common forecasting mistake is trying to make every course highly profitable — and in the process, scheduling too few cohorts across the year.

It usually starts with good intent: “Run fewer courses, make each one leaner, tighten the margins.”

But training businesses don’t scale down neatly. Even when delivery volume drops, fixed and semi-variable costs often stay put — and are already higher than in many other industries.

When the schedule is trimmed too tightly:

  • fixed costs are spread across too few cohorts

  • each course has to carry too much overhead

  • annual revenue falls below what the cost base requires

  • and the business becomes overly dependent on a handful of dates performing perfectly

The irony is that this approach often reduces the stability it’s meant to improve. Individual cohorts may look more profitable, but the year as a whole becomes weaker — and far more fragile.

Good forecasting isn’t about squeezing maximum margin from each individual course. It’s about running enough courses to absorb the cost base and create predictable, sustainable revenue.

You don’t win by extracting more from fewer cohorts — you win by scheduling the right number to support the economics of the entire business.
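The overhead arithmetic behind this is simple but easy to lose sight of. A short sketch, using a hypothetical annual fixed-cost figure, shows how quickly per-cohort burden grows as the schedule is trimmed:

```python
# Illustrative sketch: per-cohort overhead burden as the annual schedule
# shrinks. The fixed-cost figure is a hypothetical example.

annual_fixed_costs = 240_000  # premises, core team, systems, etc.

for cohorts_per_year in (12, 8, 4):
    overhead_per_cohort = annual_fixed_costs / cohorts_per_year
    print(f"{cohorts_per_year} cohorts -> each must absorb "
          f"£{overhead_per_cohort:,.0f} of overhead")
```

Halving the schedule doesn't halve the cost base; it doubles what each remaining cohort has to carry — and doubles the damage if any single date underperforms.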

Mistake #4: Expanding the schedule faster than demand can support

Just as some providers run too few courses in pursuit of higher margins, others make the opposite mistake: adding more dates than the business can realistically fill in the hope of creating more revenue. On paper, it looks like growth. In practice, it stretches demand further than it can realistically go — and exposes the business to far more risk than the model accounts for.

It usually starts with one of two overestimates.

The first is overforecasting total demand — assuming the market for a course is deeper or more responsive than it really is.

The second is overforecasting accessible demand — assuming that even if the market is large, the business can capture far more of it right now than its brand recognition, organic traffic, and referral base can genuinely support.

When the schedule expands too quickly on the back of those assumptions:

  • capacity percentages fall as more dates compete for the same pool of learners

  • CPAs rise because acquisition has to work harder to reach incremental delegates

  • each cohort carries greater viability risk

  • and operational workload grows faster than revenue

Sunk costs amplify that risk — venue deposits, logistics, support hours. If a course underperforms, those costs don’t scale down; they often land in full. The more dates added, the more exposure the business carries if even a handful of cohorts fail to fill.

Good forecasting isn’t about adding more dates and trusting the market to catch up. It’s about being honest about how much demand exists, how much of it the business can realistically access right now, and building a schedule that matches that reality.

You don’t win by endlessly expanding the calendar — you win by running the number of courses that both the market and your brand can genuinely support.

Mistake #5: Modelling course capacity but ignoring team capacity

Another common forecasting mistake is treating course capacity as the only real delivery limit — assuming that if the room, tutor, and schedule exist, the business can absorb the work. But the true constraint in most training businesses isn’t seats. It’s the operational load attached to each cohort.

Every course generates hidden hours: onboarding, admin, tutor prep, assessment, learner support, logistics, certificates, compliance. These hours rarely make it into the model — which means they’re not reflected in margin either. The result is a forecast that looks profitable on paper but ignores the capacity cost sitting behind it.

This is why leadership needs three capacity metrics, not one:

  • Course capacity — seats

  • Tutor capacity — delivery resource

  • Operational capacity — the hours required to support each cohort

When only course capacity is modelled, forecasts overestimate how much the business can deliver — and how profitable each course will be.

And interestingly, tutor utilisation is rarely the bottleneck. Tutors often have room to deliver more, while the operations team is already stretched — slowing progress, delaying strategic work, and turning simple tasks into accumulating backlogs.

This mistake creates a forecasting model that assumes scalability that doesn’t exist — and ignores operational costs that should sit inside the margin. Both distort performance expectations, and both are avoidable when all three capacities are surfaced and modelled properly.
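One way to keep all three limits visible is to make the model check each of them explicitly rather than seats alone. The sketch below is a minimal illustration under assumed names and figures — the class, function, and capacity numbers are hypothetical, not a prescribed schema:

```python
# Hypothetical sketch: modelling all three capacity limits per cohort,
# not just seats. Names and figures are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class CohortLoad:
    seats_needed: int
    tutor_days: float
    ops_hours: float  # onboarding, admin, assessment, support, logistics

def deliverable(cohorts, seat_capacity, tutor_day_capacity, ops_hour_capacity):
    """True only if every capacity limit holds — seats alone aren't enough."""
    return (
        sum(c.seats_needed for c in cohorts) <= seat_capacity
        and sum(c.tutor_days for c in cohorts) <= tutor_day_capacity
        and sum(c.ops_hours for c in cohorts) <= ops_hour_capacity
    )

plan = [CohortLoad(20, 5, 60), CohortLoad(16, 4, 55), CohortLoad(18, 5, 70)]

# Seats (54/60) and tutor days (14/20) fit comfortably; operations is the
# binding constraint at 185 hours against a capacity of 160.
print(deliverable(plan, seat_capacity=60, tutor_day_capacity=20,
                  ops_hour_capacity=160))  # False
```

In this illustration the plan fails on the capacity that never appears in most forecasts — mirroring the pattern above, where tutors have headroom while operations is already stretched.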

Mistake #6: Over-correcting in response to short-term shifts

Some forecasting mistakes don’t come from the numbers at all, but from how quickly the story shifts around them.

High-AOV courses are often treated as investments by learners — almost a luxury — so demand rises and falls with consumer confidence. But when the model isn’t built to absorb that volatility, every movement feels significant. A strong month is taken as a breakthrough. A slower one feels like a trend. Volatility becomes direction long before the underlying pattern is clear.

Seasonal swings make this even harder. January lifts, spring softens, summer slows — but the scale of these shifts changes every year. A quieter January becomes a cause for concern. A strong September gets treated as the new normal. The model reacts to movements that reflect confidence, timing, or sentiment, not a structural change in demand.

Newer categories feel the same pressure. Early cycles are messy: funnels are being refined, creative and copy are still being developed, and learners haven’t yet built familiarity or trust. But those early months often get treated as final verdicts rather than the natural friction of a course finding its footing.

When every fluctuation is taken at face value:

  • budgets move in and out of alignment

  • slow periods prompt unnecessary corrections

  • strong periods inflate expectations

  • categories get labelled before they’ve even settled

Reactivity isn’t the problem. It’s the over-correction that follows when the model can’t tolerate normal volatility — the lurching from one conclusion to the next that pulls the forecast further from reality with every swing.

Mistake #7: Under-reacting to clear performance signals

While some decisions get made too quickly, others don’t get made at all — even when the signals are clear, consistent, and pointing in the right direction. This is the quieter forecasting mistake: the model stays where it is long after the market has moved.

It shows up when sales perform unusually well — faster velocity, lower CPAs, stronger intent — yet budgets remain capped because the original plan still dictates the ceiling. Or when a course repeatedly fills early at full price, but pricing never shifts because the old structure feels safer than adjusting to what the data is showing.

Margin erosion follows the same pattern. Venue costs rise, tutor rates increase, operational hours creep up — but CPA targets and profit expectations remain tied to last year’s economics. The numbers change, the model doesn’t, and pressure builds quietly until margin no longer matches the assumptions.

There are softer versions too. A category starts outperforming its peers but resources stay evenly spread. A channel’s CPA deteriorates month after month but the budget isn’t reallocated. The performance signals are there — they just run in the background, unnoticed or not acted upon.

When clear signals don’t trigger movement:

  • strong months don’t get leveraged

  • pricing drifts out of sync with demand

  • margin erosion compounds

  • high-performing categories grow slower than they should

For certain providers, the challenge isn’t spotting the change — it’s believing it. Strong signals often run in the background until their bottom-line impact becomes too significant to ignore.

Closing Thoughts

Forecasting in a training business will never be perfectly clean. Cohort delivery isn’t linear. Demand moves, costs shift, CPAs rise and fall, and operational load rarely matches what the spreadsheet predicted. But the goal isn’t perfection — it’s resilience.

Most forecasting mistakes don’t come from a lack of data, but from how that data is interpreted: too much confidence, too much caution, too much reactivity, or not enough. The strongest models are the ones built to accommodate uncertainty rather than fight it.

When margin, capacity, demand, pricing, and acquisition economics are surfaced honestly, the forecast becomes a tool for stability instead of stress — a way to make better decisions earlier, not a set of numbers that needs defending.

In the next article, I’ll move away from mistakes and into structure — outlining the core questions a forecasting model needs to answer if it’s going to hold up inside a training business.