Video: Why dashboards for status monitoring AND performance monitoring don’t work

This video and post are based on a chapter from a new book on which I’m working. In the comments, please shoot as many holes in it as possible before it ends up in a printing press.

A text version of the video can be found below the video.


I’ve seen a lot of dashboards that failed to meet users’ and organizations’ expectations. This happens for a variety of reasons and, in this post, I want to focus on one of the most common: the people who designed the dashboard didn’t fully understand the distinction between status monitoring and performance monitoring. When that happens, the dashboards that they end up designing don’t fulfill either of these needs well.

Before listing the differences between status monitoring and performance monitoring and why those differences matter, though, I should probably clarify how I’m using these terms, since not everyone uses them in the same way. I find it useful to think about both of these as “modes” that users can be in. When users feel that they need data, they can be in either “status monitoring mode” or “performance monitoring mode” (or in one of several other user modes that I’ll discuss in other blog posts). Users regularly flip between these modes depending on what’s going on with their work.

Status monitoring mode

When a user is in “status monitoring mode,” they want the answer to one single question:

  • “Is there anything that I need to react to this minute (or this hour, day, week or month, depending on how often metrics are refreshed) and, if so, what?”

When a user is in “status monitoring mode,” the only thing that they’re trying to figure out is if there’s anything new that they need to deal with right now and, if there isn’t, they’ll move on to something else. In this mode, users aren’t trying to initiate new projects, alter their priorities, plan for the future or do anything else that could be described as proactive. They’re only interested in knowing if there’s anything that they need to react to. A valid analogy for status monitoring mode would be that of a person driving a car down a highway. While doing this, the driver only needs to know if everything is O.K. or if something requires their immediate attention (running out of gas, going too fast, engine overheating, etc.). Some might also call this “tactical mode” or “operational mode.”
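To make this concrete, here’s a minimal sketch (in Python, with made-up metric names and thresholds borrowed from the driving analogy) of what answering the status monitoring question looks like in code: check every metric against an alert rule and surface only the items that need a reaction right now.

```python
# The latest readings for a handful of hypothetical metrics.
LATEST_READINGS = {"fuel_level_pct": 9, "speed_mph": 61, "engine_temp_f": 208}

# One alert rule per metric; each returns True when immediate action is needed.
ALERT_RULES = {
    "fuel_level_pct": lambda v: v < 10,   # running out of gas
    "speed_mph":      lambda v: v > 70,   # going too fast
    "engine_temp_f":  lambda v: v > 230,  # engine overheating
}

def needs_reaction(readings, rules):
    """Answer the basic status monitoring question: what, if anything,
    requires immediate attention right now?"""
    return [name for name, is_bad in rules.items() if is_bad(readings[name])]

print(needs_reaction(LATEST_READINGS, ALERT_RULES))  # ['fuel_level_pct']
```

Note that the output is purely reactive: either an empty list (“everything is O.K., move on”) or a short list of things to deal with this minute.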

Performance monitoring mode

When a user is in “performance monitoring mode,” on the other hand, they want the answers to a very different set of questions:

  • “What is the overall health of our organization (team, department, company, etc.)?”
  • “Are we doing better or worse than before?”
  • “Are we achieving our strategic goals?”
  • “Do we need to update our strategic goals or targets?”

In this mode, they want to assess how things are going to determine if new projects need to be initiated, priorities need to be rearranged, resources need to be reallocated, etc. Typically, users are only in performance monitoring mode in review or planning sessions, i.e., when they’re assessing how successful they’ve been, making decisions about the future, or both. Performance monitoring mode is much more proactive than status monitoring mode. In our driving analogy, it would be equivalent to the times when the driver pulls over to check on trip progress, decide if they need to take another route, change their destination, etc. Some might call this being in “strategic mode” or “planning mode.”
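Here’s a comparable hedged sketch of the performance monitoring questions in code; the KPI names, values, and targets are entirely hypothetical. Unlike the status monitoring sketch above, the output is about trends and progress toward targets, not about anything that needs a reaction this minute.

```python
# A few hypothetical strategic KPIs: (prior period, current period, target).
KPIS = {
    "revenue_usd_m":       (41.2, 43.8, 45.0),
    "customer_retention":  (0.91, 0.89, 0.93),
    "employee_engagement": (0.72, 0.75, 0.75),
}

for name, (prior, current, target) in KPIS.items():
    trend = "improving" if current > prior else "worsening"
    print(f"{name}: {trend}, {100 * current / target:.0f}% of target")
```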

Neither of these user modes is better or worse than the other; both are important and necessary, and users will flip between them depending on what’s going on with their work. Based on these definitions, some readers may argue that a status monitoring display should simply be a more disaggregated (i.e., more granular) version of a performance monitoring display, showing the same metrics at a finer level of detail. I think that the differences between status monitoring and performance monitoring go far beyond that, though, and I’ve summarized those differences in the table below. After reviewing this table, it may be clearer why those who try to create a single dashboard to support both of these needs often end up with a display that doesn’t meet either need well (but pipe up in the comments if you’re still not on board…).

Differences between status monitoring displays and performance monitoring displays

Target user(s)
The user or set of users for whom the display should be designed.

Status monitoring displays: Role-specific
Different roles (i.e., sets of employees with the same job description) need to respond to different problems that may arise in the organization, so each role needs to see a specific set of information, presented in a specific way, in order to easily and reliably spot those problems. One status monitoring display is, therefore, needed for each role.

If we try to design a single status monitoring display for use by multiple roles (e.g., a dashboard for use by the entire executive team), it will contain a large number of metrics that aren’t relevant to any given role, which will impair the display’s ability to quickly and effectively answer the basic status monitoring question.

Performance monitoring displays: Organization-specific
One of the common goals of performance monitoring is to align everyone in an organization (a team, department, agency, etc.) so that they’re all working toward the same strategic objectives and have the same definition of success. This means that a single performance monitoring display can and should be used by the entire organization (i.e., by multiple roles).
Target roles
The types of roles that will use each type of display.

Status monitoring displays: Operational roles only
Status monitoring displays are needed by roles that include responsibility for the day-to-day operations of the organization.

Status monitoring displays are too detailed and generally not well suited for purely planning or strategic roles such as board members, advisors, etc.

Performance monitoring displays: Operational and non-operational roles
Performance monitoring displays are also used by those with day-to-day responsibilities to monitor the performance of their group (team, department, company, etc.); however, they’ll use them much less frequently than status monitoring displays (see “Review frequency” below).

In addition, performance monitoring displays are useful to strategic roles such as board members, strategic advisors, investors, and other non-operational roles that set high-level goals and make strategic decisions.

Review frequency
How often users will need the information on each type of display.

Status monitoring displays: Frequent
Because users don’t know when problems are going to occur, they need to know the answer to the basic status monitoring question (“Is everything OK?”) continuously. Depending on how often metrics are refreshed, this may be every minute, hour, day, week, or month.

Performance monitoring displays: Infrequent
Performance monitoring can’t and shouldn’t be done every minute, hour, day, or week, and possibly shouldn’t even be done monthly. Answers to questions such as “How are we tracking toward our strategic goals?” and “Is our organization improving or getting worse?” are usually only needed during planning or review meetings, and other monthly, quarterly, or even annual events.
Number of metrics that need to be monitored
How many metrics users must keep track of on each type of display.

Status monitoring displays: Many
In a modern organization that uses software to monitor many aspects of its operations, users are expected to notice and respond to a very wide variety of problems, which usually means that hundreds, thousands, or hundreds of thousands of metrics could require immediate action.

For example, a Vice-President of Retail Sales for a chain of stores may need to know if any of a dozen metrics go south for the chain’s 100 largest stores, which means monitoring 1,200 metrics (or, in this case, instances of metrics), in addition to the many other metrics that could require her attention. (A sketch of this example appears after this table.)

Performance monitoring displays: Few
I agree with the performance measurement experts who assert that the overall performance of an organization can be captured in a few dozen carefully chosen metrics, so that’s all that are needed for performance monitoring.

Showing thousands of metrics to users who are in performance monitoring mode makes it much harder for them to assess overall performance.

Metric selection criteria
The criteria used to determine if a given metric belongs on a display.

Status monitoring displays: Personally actionable
When deciding if a given metric belongs on a status monitoring display, the only criterion that matters is, “Could this metric indicate a problem or opportunity that someone in the target role would need to personally act on right away?”

It’s not necessary to explicitly connect status monitoring metrics to higher-level goals, although those goals can and should influence which metrics are included on the status monitoring displays for each role.

Performance monitoring displays: Meaningfully indicative of group performance
The main criteria that should be used to determine if a given metric belongs on a performance monitoring display are, “Is this metric a meaningful indicator of the overall performance of the group?” and “Is this metric a meaningful indicator of progress toward the group’s strategic goals?”
Effectiveness evaluation criteria
The criteria used to evaluate how well or poorly a given display is serving users and the organization.

Status monitoring displays: The status monitoring question
When evaluating the effectiveness of a status monitoring display, the main criterion is how quickly and accurately it enables users to answer the basic status monitoring question. Considerations such as the thoughtfulness of the display’s layout or its use of color are still important, but only to the extent that they make it quicker and easier to answer the basic status monitoring question accurately.

Performance monitoring displays: The performance monitoring questions
The effectiveness of a performance monitoring display should be evaluated based on how well it answers the various performance monitoring questions listed above, which is a very different set of questions from the basic status monitoring question.
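Here’s the sketch promised in the “Number of metrics” row above, based on the retail example. The store names, metric names, threshold, and randomly generated readings are all made up for illustration; the point is that a status monitoring display’s job is to filter 1,200 metric instances down to the handful that need a reaction.

```python
import random

random.seed(42)  # reproducible fake data

METRICS = [f"metric_{i:02d}" for i in range(12)]  # a dozen metrics per store
STORES = [f"store_{i:03d}" for i in range(100)]   # the 100 largest stores

# One simulated reading per metric instance (0.0-1.0); anything below 0.05
# counts as having "gone south" and needing an immediate reaction.
readings = {(s, m): random.random() for s in STORES for m in METRICS}

exceptions = [key for key, value in readings.items() if value < 0.05]
print(f"{len(readings)} metric instances monitored; "
      f"{len(exceptions)} need attention right now")
```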

Hopefully, this table makes it clear why trying to support these two very different needs with a single display forces many painful design compromises and results in a display that doesn’t meet either need well. Trying to design a display that does both is like trying to design a car dashboard that provides information that users need while driving AND while reviewing trip progress, planning routes and choosing destinations. Unfortunately, this type of all-in-one, accident-waiting-to-happen display is exactly what I see in many organizations.

So, the next time that you’re asked for a dashboard, try to figure out what data-related need prompted users to ask for it in the first place. Are they asking for answers to the status monitoring question, the performance monitoring questions, or both? If both, consider making your and your users’ lives easier by creating two displays, instead of trying to meet both needs in one. If it sounds like some other need entirely, such as looking up specific values or diagnosing problems, stay tuned for future blog posts in which I’ll discuss other types of displays to address those needs, as well.