Key Indicators: Monitoring and Assessing Forecasting Performance

Understand the core forecasting KPIs covered here, Forecast Accuracy (FA) and Forecast Bias, to evaluate forecast quality and drive better planning decisions.

Forecasting Performance

Forecasting is one of the most critical — and complex — elements of supply chain management.

To evaluate how well they predict future demand, organizations rely on two essential indicators:

  • 🎯 Forecast Accuracy (FA)
  • ⚖️ Forecast Bias

Together, these metrics allow companies to measure the quality of their forecasts, identify systematic errors, and guide continuous improvement in demand planning.


🎯 What is Forecast Accuracy (FA)?

Forecast Accuracy measures how close your forecasted demand is to actual demand for a given period and product.

It reflects how well your organization understands when, how much, and what mix of products customers will purchase.

FA is often calculated at the SKU level, and can then be aggregated to category, brand, or total business.

While it’s a backward-looking metric, monitoring Forecast Accuracy over time helps detect trends and anticipate risks or opportunities in demand behavior.

⚙️ How Forecast Accuracy Is Calculated

The formula for Forecast Accuracy is:

Forecast Accuracy (%) = (1 - (Absolute Variance / Actual Sales)) × 100

where Absolute Variance = |Forecast - Actual Sales|.

Example:

| SKU | Forecast (Lag X) | Actual Sales | Absolute Variance | Forecast Accuracy |
| --- | --- | --- | --- | --- |
| A | 100 | 75 | 25 | (1 - (25/75)) × 100 = 67% |
| B | 450 | 1,000 | 550 | (1 - (550/1,000)) × 100 = 45% |
| C | 230 | 210 | 20 | (1 - (20/210)) × 100 = 90% |
| D | 1,000 | 800 | 200 | (1 - (200/800)) × 100 = 75% |
| E | 350 | 195 | 155 | (1 - (155/195)) × 100 = 21% |

Insight:

At the SKU level, results vary widely — SKU C shows strong accuracy (90%), while SKU E performs poorly (21%).

This variation highlights product mix and timing issues that can be hidden when aggregating data.
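
If you want to replicate the calculation outside a spreadsheet, a minimal Python sketch like the one below reproduces the SKU-level figures above (the forecast_accuracy helper and the zero-denominator convention are illustrative assumptions, not something defined in this article).

```python
# Minimal sketch: SKU-level Forecast Accuracy, FA% = (1 - |Forecast - Actual| / Actual) * 100.

def forecast_accuracy(forecast: float, actual: float) -> float:
    """Return Forecast Accuracy in percent, floored at 0% for very large misses."""
    if actual == 0:
        return 0.0  # convention choice for zero actuals; adjust to your own policy
    return max((1 - abs(forecast - actual) / actual) * 100, 0.0)

skus = {"A": (100, 75), "B": (450, 1000), "C": (230, 210), "D": (1000, 800), "E": (350, 195)}

for sku, (fcst, actual) in skus.items():
    print(f"SKU {sku}: FA = {forecast_accuracy(fcst, actual):.0f}%")
# Prints 67%, 45%, 90%, 75% and 21%, matching the table above.
```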

📦 Aggregation Effects

When rolling up FA results to higher levels (Category, Brand, or Business Unit), over-forecasts and under-forecasts should not be allowed to cancel each other out: variances are summed as absolute values.

| Level | Forecast (Lag X) | Actual Sales | Absolute Variance | Forecast Accuracy |
| --- | --- | --- | --- | --- |
| Category X (A + B + C) | 780 | 1,285 | 595 | (1 - (595/1,285)) × 100 = 54% |
| Category Y (D + E) | 1,350 | 995 | 355 | (1 - (355/995)) × 100 = 64% |
| Brand Z (All SKUs) | 2,130 | 2,280 | 950 | (1 - (950/2,280)) × 100 = 58% |

Interpretation:

The aggregated FA values (53.7%, 64.3%, 58.3%) are lower, and more realistic, than what you would get by netting forecasts against actuals at each level. For Brand Z, netting 2,130 against 2,280 would imply an accuracy of about 93%, even though individual SKUs missed by far more.

Summing absolute variances preserves information about product mix and directional errors across SKUs.

If aggregation nets over- and under-forecasts instead of summing absolute variances, FA appears artificially high, masking product mix issues or timing mismatches in demand.
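
A minimal sketch of the two aggregation approaches, using the SKU figures above, makes the difference concrete (the variable names are illustrative, and the netted calculation is included only as a counter-example):

```python
# Minimal sketch: aggregate Forecast Accuracy two ways using the SKU data above.
# Summing absolute variances preserves mix errors; netting totals lets them cancel.

skus = {"A": (100, 75), "B": (450, 1000), "C": (230, 210), "D": (1000, 800), "E": (350, 195)}

total_forecast = sum(f for f, _ in skus.values())              # 2,130
total_actual = sum(a for _, a in skus.values())                # 2,280
sum_abs_variance = sum(abs(f - a) for f, a in skus.values())   # 950

fa_summed = (1 - sum_abs_variance / total_actual) * 100                     # ~58%
fa_netted = (1 - abs(total_forecast - total_actual) / total_actual) * 100   # ~93%

print(f"Brand Z FA, summed absolute variances: {fa_summed:.1f}%")   # 58.3%
print(f"Brand Z FA, netted totals (misleading): {fa_netted:.1f}%")  # 93.4%
```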

⚖️ What is Forecast Bias?

While Forecast Accuracy tells you how close your forecast is to reality, Forecast Bias tells you in which direction you tend to be wrong: whether you over-forecast or under-forecast.

  • 📈 Positive Bias → Forecasts are consistently higher than actual sales → Over-forecasting
  • 📉 Negative Bias → Forecasts are consistently lower than actual sales → Under-forecasting

Unlike Forecast Accuracy, Bias is calculated without absolute values, so positive and negative deviations can offset each other at higher aggregation levels.

⚙️ How Forecast Bias Is Calculated

The formula for Forecast Bias is commonly expressed as:

Forecast Bias (%) = ((Forecast - Actual Sales) / Actual Sales) × 100

A positive result indicates over-forecasting, a negative result under-forecasting.

Example:

Using the SKUs from the Forecast Accuracy table above: SKU A has a Bias of ((100 - 75) / 75) × 100 = +33% (over-forecast), while SKU B has a Bias of ((450 - 1,000) / 1,000) × 100 = -55% (under-forecast). At the Brand Z level, the Bias is only ((2,130 - 2,280) / 2,280) × 100 ≈ -7%, because the positive and negative deviations largely offset each other.
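
The same calculation can be scripted; below is a minimal Python sketch (the bias_pct helper is an illustrative name, not from this article) that computes Bias for each SKU and for the brand as a whole.

```python
# Minimal sketch: Forecast Bias per SKU and at brand level, using the sign
# convention above (positive = over-forecast, negative = under-forecast).

def bias_pct(forecast: float, actual: float) -> float:
    """Relative bias in percent; assumes actual demand is non-zero."""
    return (forecast - actual) / actual * 100

skus = {"A": (100, 75), "B": (450, 1000), "C": (230, 210), "D": (1000, 800), "E": (350, 195)}

for sku, (fcst, actual) in skus.items():
    print(f"SKU {sku}: Bias = {bias_pct(fcst, actual):+.0f}%")

total_fcst = sum(f for f, _ in skus.values())    # 2,130
total_actual = sum(a for _, a in skus.values())  # 2,280
print(f"Brand Z: Bias = {bias_pct(total_fcst, total_actual):+.1f}%")  # about -6.6%
```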

💡 Why FA and Bias Matter

Both metrics provide complementary insights:

| Metric | Tells You… | Typical Use |
| --- | --- | --- |
| 🎯 Forecast Accuracy | How well forecasts match actual demand | Measure planning precision |
| ⚖️ Forecast Bias | Whether forecasts systematically over/under estimate demand | Detect planning tendencies and systematic errors |

Monitoring both together helps organizations understand what is happening (FA) and why it’s happening (Bias).


🔍 Diagnosing Forecasting Issues

The table below illustrates common root causes and business impacts based on forecast performance patterns:

| Metric Result | Potential Root Causes | Potential Business Impacts |
| --- | --- | --- |
| Low Forecast Accuracy (Forecast < Actual) | Wrong SKU mix, missed promotions, demand timing errors, unanticipated market changes, negative bias | Stockouts, customer fines, lost sales, expedited production and freight costs |
| Low Forecast Accuracy (Forecast > Actual) | Overestimated promotions, market share loss, positive bias, competitor activity | Excess inventory, obsolescence, holding costs, production inefficiency |
| Highly Variable Forecast Accuracy | Unstable demand, inconsistent inputs, missing market signals | Service level failure, cost volatility, inventory swings |
| Positive Bias (Over-forecasting) | Over-optimism on launches or promos, inflating forecasts to protect service | High inventory carrying cost, wasted resources |
| Negative Bias (Under-forecasting) | Underestimating customer expansion, competitor exits, rapid market growth | Stockouts, service failures, reactive operations |

📊 How to Use These Metrics

A Forecasting Performance Dashboard can visualize these indicators over time and across hierarchy levels.

This allows teams to:

  • Identify trends and anomalies in forecast performance
  • Conduct root cause analysis by product, customer, or region
  • Collaborate with Sales, Marketing, and Operations to adjust plans
  • Implement improvement initiatives and track impact

Continuous monitoring of Forecast Accuracy and Forecast Bias drives a culture of data-driven decision-making and forecast ownership across the organization.
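
As a starting point, the sketch below shows how FA and Bias could be rolled up by period to feed such a dashboard. The record layout and the sample figures are illustrative assumptions, not data from this article.

```python
# Minimal sketch: roll up Forecast Accuracy and Bias by period for trend monitoring.
# Record layout (period, sku, forecast, actual) and sample values are illustrative.

from collections import defaultdict

records = [
    ("2024-01", "A", 100, 75), ("2024-01", "B", 450, 1000),
    ("2024-02", "A", 90, 85),  ("2024-02", "B", 700, 950),
]

totals = defaultdict(lambda: {"abs_var": 0.0, "net_var": 0.0, "actual": 0.0})
for period, _sku, forecast, actual in records:
    totals[period]["abs_var"] += abs(forecast - actual)
    totals[period]["net_var"] += forecast - actual
    totals[period]["actual"] += actual

for period, t in sorted(totals.items()):
    fa = (1 - t["abs_var"] / t["actual"]) * 100
    bias = t["net_var"] / t["actual"] * 100
    print(f"{period}: FA = {fa:.0f}%, Bias = {bias:+.0f}%")
```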

✍️ Key Takeaways

Forecast Accuracy measures precision.

Forecast Bias measures directionality of errors.

✅ Both are needed to truly understand and improve forecast performance.

✅ Tracking and analyzing them over time reveals opportunities to reduce costs, improve service, and align business functions.
