Robust measurement of percentage error metrics

Oren Razon
September 24, 2020

👉🏽 Tip: Make sure to normalize relative (percentage) error metrics, so that small actual values with negligible absolute errors don't get outsized weight!


A common way to measure "regression" or "probability estimation" tasks in ML is the percentage error of the predictions, whether as Mean Percentage Error (MPE), Mean Absolute Percentage Error (MAPE), symmetric MAPE (sMAPE), or others.
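For reference, here's a minimal NumPy sketch of two of these metrics, using their common definitions (sMAPE in particular has several variants in the literature):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent (common definition)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

def smape(y_true, y_pred):
    """Symmetric MAPE, in percent (one of several variants in use)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(2.0 * np.abs(y_true - y_pred)
                           / (np.abs(y_true) + np.abs(y_pred)))
```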

However, relative measures such as percentage error can blow up on very small actual values, even when the residuals themselves are too small to matter when measuring performance or drawing conclusions from it.

Let’s take the following example:
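The numbers below are hypothetical, chosen only to illustrate the effect: case #4 has a tiny actual value, so its percentage error explodes even though its absolute error is the smallest of all.

```python
import numpy as np

y_true = np.array([1000.0, 500.0, 800.0, 2.0])  # case #4 has a tiny actual value
y_pred = np.array([ 950.0, 530.0, 760.0, 4.0])

abs_err = np.abs(y_true - y_pred)            # [50, 30, 40, 2]
pct_err = 100.0 * abs_err / np.abs(y_true)   # [5%, 6%, 5%, 100%]

print(pct_err.mean())  # ~29%, dominated by case #4 despite its tiny absolute error
```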


The relative error is heavily influenced by case #4, where the percentage error is very high but the actual absolute error is very low, meaning such a case should carry little weight when measuring performance.

There are many known ways to handle such cases: bounding the maximal percentage error, ignoring errors below a certain threshold, or using the median percentage error instead of the mean (each sketched below).
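Continuing the illustrative numbers from above, here is a quick sketch of those three approaches (the caps and thresholds are arbitrary choices):

```python
import numpy as np

abs_err = np.array([50.0, 30.0, 40.0, 2.0])  # absolute errors from the example above
pct_err = np.array([5.0, 6.0, 5.0, 100.0])   # percentage errors from the example above

capped     = np.minimum(pct_err, 50.0)  # 1) bound the maximal percentage error at 50%
large_only = pct_err[abs_err >= 5.0]    # 2) drop cases whose absolute error is below a threshold
median_ape = np.median(pct_err)         # 3) median instead of mean: 5.5% vs. the ~29% mean
```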

We recommend using a normalization factor (see the sketch below), where the "a" parameter expresses how sensitive the normalization is, scaling down large relative errors on small actual values.
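A minimal sketch of one plausible form of such a normalization (the exact functional form can vary): floor the denominator at "a", so actual values smaller than "a" can no longer blow up the ratio.

```python
import numpy as np

def normalized_pct_error(y_true, y_pred, a):
    """Percentage error with the denominator floored at `a`.

    The larger `a`, the more aggressively large relative errors
    on small actual values are scaled down.
    """
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    denom = np.maximum(np.abs(y_true), a)
    return 100.0 * np.abs(y_true - y_pred) / denom

y_true = np.array([1000.0, 500.0, 800.0, 2.0])
y_pred = np.array([ 950.0, 530.0, 760.0, 4.0])
print(normalized_pct_error(y_true, y_pred, a=100.0))
# -> [5. 6. 5. 2.]: case #4 drops from 100% to 2%, and the mean from ~29% to 4.5%
```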


We hope you find our tip useful!

Make sure you register here to receive our Weekly Tips straight to your inbox!

And please share with us on social media what you'd like to hear about!
