Forecasts are how budgets get delivered. This module covers the cadences that matter — 14, 30, and 90-day rolling — how to measure forecast accuracy, and how to use forecasts to drive labor and purchasing decisions in real time.
A budget is built once a year and reviewed monthly. A forecast is built weekly and acted on daily. The role each plays is different. The budget tells the property what success looks like for the year. The forecast tells operations what to do this week.
Every forecast in hospitality has a downstream action attached to it. The 14-day forecast drives labor scheduling. The 30-day forecast drives F&B ordering and revenue management activity. The 90-day forecast drives sales pacing, marketing campaigns, and capex timing. A forecast no one acts on is just a number.
The properties that forecast well are the ones where each forecast has a clear owner, a clear cadence, and a clear set of decisions that depend on it. The properties that forecast poorly tend to have one big forecast that no one quite owns and no one quite acts on.
Most properties need three forecast horizons running simultaneously. Each answers a different question and triggers different decisions.

The 14-day forecast: confirmed bookings plus expected walk-ins, group blocks, and last-minute demand. The labor scheduler runs off this, and F&B prep volumes flow from it. Errors here show up immediately in service levels or in cost.

The 30-day forecast: where revenue management spends most of its time. Rate decisions, restrictions, channel-mix adjustments, and ordering for items with longer lead times. This is the window where pricing actually moves demand.

The 90-day forecast: compares pace against last year, against budget, and against the comp set. It surfaces the gaps that need group sales activity, marketing campaigns, or pricing moves to close.

The full-year reforecast, which rolls these horizons into an annual landing estimate: the current best estimate for full-year landing versus budget, communicated to ownership. It triggers expense actions if revenue is softening, or reinvestment opportunities if revenue is over-pacing.
The most consequential question in any forecasting practice is rarely asked: how accurate were our last forecasts? Without measuring accuracy, the team has no idea whether the current forecast deserves any confidence at all.
The standard measure is Mean Absolute Percentage Error (MAPE): the average of the absolute percentage errors of each forecast versus actuals. A 14-day forecast for room nights should track at 3–5% MAPE; anything above 8% is unreliable. A 30-day forecast at 6–8% MAPE is good; above 12% is poor. A 90-day forecast at 10–15% MAPE is typical.
The discipline isn't just measuring MAPE. It's investigating systematic bias. If your forecasts are consistently 4% high or consistently low on weekends, that's a fixable bias hiding inside an acceptable MAPE. The pattern of errors is more diagnostic than the magnitude.
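As a minimal sketch of the two checks above, with hypothetical forecast and actual figures (none of these numbers come from the module):

```python
# Sketch: measuring forecast accuracy (MAPE) and systematic bias.
# The forecast/actual pairs below are illustrative, not real property data.

def mape(forecasts, actuals):
    """Mean Absolute Percentage Error across forecast/actual pairs."""
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals)]
    return 100 * sum(errors) / len(errors)

def bias(forecasts, actuals):
    """Signed mean percentage error: positive means forecasting high."""
    errors = [(f - a) / a for f, a in zip(forecasts, actuals)]
    return 100 * sum(errors) / len(errors)

# Hypothetical 14-day room-night forecasts vs. actuals for one week
forecasts = [412, 398, 430, 445, 470, 505, 489]
actuals   = [400, 390, 425, 430, 450, 480, 465]

print(f"MAPE: {mape(forecasts, actuals):.1f}%")  # magnitude of error
print(f"Bias: {bias(forecasts, actuals):.1f}%")  # direction of error
```

In this toy week every forecast ran high, so the bias equals the MAPE (about 3.5%): accuracy sits inside the acceptable 3–5% band, but the consistent upward skew is exactly the fixable pattern the paragraph above describes.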
The annual budget is a promise; the reforecast is the property's honest current view. In a stable year, the reforecast and budget converge. In a volatile year — a market disruption, a new competitor, a soft economy — they diverge, and the gap is the operations team's lived reality.
The discipline of reforecasting well: do it on a schedule, not in a panic. Properties that reforecast monthly catch softness early enough to do something about it. Properties that reforecast quarterly catch softness too late to take meaningful action. Properties that reforecast "when it's needed" never actually reforecast until ownership demands it — by which point the budget is so far off it's useless.
The mechanics: take year-to-date actuals, apply current forecasts for the remaining months, sum to a full-year landing. Compare against budget. Communicate the variance with a story, not just a number. The story is what makes reforecasts useful to ownership.
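The mechanics above can be sketched in a few lines; all of the monthly figures and the budget number here are hypothetical placeholders:

```python
# Sketch of the reforecast mechanics: year-to-date actuals plus current
# forecasts for the remaining months, summed to a full-year landing.
# All figures are hypothetical room-revenue numbers for illustration.

ytd_actuals = [810_000, 795_000, 920_000, 980_000, 1_050_000]   # Jan-May
remaining_forecast = [1_100_000, 1_180_000, 1_150_000,          # Jun-Aug
                      1_020_000, 940_000, 870_000, 900_000]     # Sep-Dec
annual_budget = 12_100_000

full_year_landing = sum(ytd_actuals) + sum(remaining_forecast)
variance = full_year_landing - annual_budget
variance_pct = 100 * variance / annual_budget

print(f"Full-year landing:  {full_year_landing:,}")
print(f"Variance to budget: {variance:+,} ({variance_pct:+.1f}%)")
```

The arithmetic is trivial; the value is in the cadence and the narrative. A landing roughly 3% under budget, caught in June, still leaves time for expense action.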
A blended occupancy forecast is useless for revenue management. Forecast by segment, or the forecast can't drive decisions.
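A small worked illustration of why the blended number hides the mix shift; the segment names and room-night counts are hypothetical:

```python
# Illustration: a flat blended forecast can mask large segment shifts.
# Segment names and room-night figures are hypothetical.

last_year = {"transient": 3_200, "group": 1_400, "contract": 600}
forecast  = {"transient": 2_850, "group": 1_800, "contract": 550}

blended_change = 100 * (sum(forecast.values()) / sum(last_year.values()) - 1)
print(f"Blended room nights: {blended_change:+.1f}%")  # looks flat

for segment in last_year:
    change = 100 * (forecast[segment] / last_year[segment] - 1)
    print(f"  {segment}: {change:+.1f}%")
```

Here the blended forecast is exactly flat, while transient is down about 11% and group is up about 29%. Those segments carry different rates, booking windows, and servicing costs, so the flat blended number would trigger none of the pricing or sales decisions the mix shift actually demands.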
If you don't know your MAPE, you don't know whether to trust the current forecast. Track MAPE monthly for each cadence.
By then it's too late to act. Reforecast on schedule, monthly.
Forecast inputs come from sales (group pace), marketing (campaign impact), operations (group block confirmations). It's a team output, owned by revenue management.
A softening revenue forecast that doesn't trigger a labor or expense response is just bad news. The forecast must include the action plan, or it isn't operationally useful.
This exercise is about the practice itself, not the math. Pull the last three months of forecasts and their corresponding actuals.
Forecasting practice varies enormously between properties. Comparing approaches usually reveals practical improvements.
If you don't measure forecast accuracy at all, that itself is a finding worth discussing.
Unowned forecasts produce no actions. Owned ones do.
If the answer is "hope," that's the conversation. Hope is not a plan.