“Vs. previous day/week/month” on dashboards: Worse than useless? (book excerpt)

tl;dr: This excerpt from my upcoming book, Beyond Dashboards, is the third in a seven-part series on how to determine which metrics to visually flag on a dashboard (alert dots, different-colored text, etc.) in order to draw attention to metrics that require it. In this post, I look at the “vs. previous period” method of flagging dashboard metrics and explain why, despite being extremely common, it can be worse than useless. In a later post in this series, I’ll introduce a more useful approach called “four-threshold” visual flags.


Probably the most common way to visually flag metrics that require attention on a dashboard is the “vs. previous period” method, whereby each current value has a “vs. previous day” (or previous week, or previous month, etc.) flag next to it, usually expressed as a percentage change with an indicator of positive or negative change:

[Image: example dashboard metrics with “vs. yesterday” percentage-change flags]
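
For concreteness, here’s a minimal sketch of how these flags are typically computed: a simple percentage change from the previous period’s value, with the sign (or an arrow) determining whether the indicator renders as green or red. The function name and the example values are hypothetical, not taken from any particular dashboard tool.

```python
def vs_previous_period(current: float, previous: float) -> str:
    """Format a "vs. previous period" flag as a signed percentage change."""
    pct_change = (current - previous) / previous * 100
    arrow = "▲" if pct_change >= 0 else "▼"
    return f"{arrow} {pct_change:+.1f}%"

# Hypothetical values: yesterday's sales were 11,630 and today's are 10,000,
# which renders as the kind of alarming-looking "-14% vs. yesterday" flag
# discussed below.
print(vs_previous_period(current=10_000, previous=11_630))  # ▼ -14.0%
```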

Vs. previous period visual flags are appealing because they’re easy to implement and don’t require performance targets or alert thresholds to be set for each metric. They also look useful: green/positive flags appear to show which metrics are doing well, and red/negative flags appear to show which metrics require attention. I’m not the first to point out, however, that this apparent usefulness is largely illusory, and that these flags have major drawbacks that call their basic value into question:

  • They generate a lot of false alarms. A change of -14% in today’s sales vs. yesterday might mean that we’re getting killed by that new competitor or that our e-commerce site is crashing, i.e., that it’s time to panic. Or, it might mean that yesterday’s sales were unusually high and we’ve simply returned to normal sales levels today, i.e., that everything’s fine. I’ve seen a lot of completely unnecessary panics result from these types of false alarms.

  • They don’t take targets into account. Everyone might be pleased to see that the number of new customers is up 8% this month vs. last month. What no one may realize, however, is that the customer acquisition rate is still well below where it needs to be in order to meet our growth targets, so the current number is actually a problem that needs to be solved, not the good news that the +8% flag on the dashboard would suggest.

  • They don’t reliably draw attention to metrics that require it. A change of -2.1% in employee satisfaction vs. last week may be a minor concern that requires no action, but the same -2.1% change in website uptime would be an all-hands-on-deck crisis, so “percentage change” values can’t be relied upon to draw attention to the most important problems. Given that drawing attention to the most important problems is one of the main reasons for having visual flags on a dashboard in the first place, this is a pretty serious limitation.

  • They produce “Christmas tree syndrome.” Since current values are almost always at least a little higher or lower than the previous period’s values, every metric gets a “vs. previous period” flag beside it, creating a visually overwhelming wall of red and green indicators even when everything is actually fine. Sometimes, dashboard creators try to mitigate this by setting “don’t flag” ranges whereby metrics aren’t flagged if the “vs. previous period” value is, say, between -2% and +2%. While this does cut down on the number of flags on a dashboard, it doesn’t address any of the other limitations in this list, and it introduces new problems of its own: for some metrics, a change of -1.5% might actually be a big problem. (The sketch after this list shows such a “don’t flag” range in action.)

  • They’re mostly “noise.” Consider the following seven-day sequence of “orders processed”:

[Image: a seven-day “orders processed” sequence with day-over-day changes]

On any given day in this sequence, what does the “vs. previous day” value actually tell users? Can they tell when this metric requires attention, as opposed to when it’s simply experiencing normal day-to-day fluctuation that requires no action? Can they even tell whether the metric is generally trending up or down over time? The technical term for this kind of non-information is “noise.”
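
To make this concrete, here’s a small sketch using a made-up seven-day “orders processed” series that fluctuates around a stable level of roughly 1,000 orders with no real trend (the numbers are illustrative, not taken from the chart above). It also applies the kind of ±2% “don’t flag” range described earlier:

```python
# Hypothetical "orders processed" values for seven consecutive days,
# fluctuating around a stable level of ~1,000 with no underlying trend.
orders = [1_012, 968, 1_031, 990, 1_047, 1_005, 962]

DONT_FLAG_BAND = 2.0  # suppress flags for changes between -2% and +2%

for day, (prev, curr) in enumerate(zip(orders, orders[1:]), start=2):
    pct_change = (curr - prev) / prev * 100
    status = "FLAGGED" if abs(pct_change) > DONT_FLAG_BAND else "not flagged"
    print(f"Day {day}: {curr:,} orders, {pct_change:+.1f}% vs. previous day ({status})")
```

Every one of the six day-over-day changes in this made-up series clears the ±2% band (they range from about -4.3% to +6.5%), so every day gets flagged, producing a mix of red and green indicators even though the series is just bouncing around its normal level. That’s Christmas tree syndrome and noise in one small example.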

The next two posts in this series will cover the drawbacks of the two other types of visual flags that I commonly see on dashboards: single-threshold flags and Good/Satisfactory/Poor ranges. I’ll then introduce the four-threshold flags that I now recommend, since that type of visual flag has none of the drawbacks or limitations that I list for the three common types. I’ll conclude the series with a post on useful statistics for setting visual flag thresholds automatically.