New book coming in late 2018:
Beyond Dashboards

(Feedback needed!)
 


I’ve seen a lot of dashboards fail

When I’m brought in by organizations as a consultant to design dashboards for their decision-makers, it’s often because things aren’t going well. There’s already a dashboard in place, but the people who were supposed to benefit from it don’t like it and may even have stopped using it altogether. The organization certainly isn’t benefiting from the dashboard in the way that everyone thought it would. When asked why they don’t like or use the dashboard, users offer only frustratingly vague answers such as, “I just don’t trust it,” “It takes too long to find what I need,” “It’s too hard to use,” or “It’s missing information that I need.” When pressed for specifics, the answers often don’t get much more useful than those original, vague complaints.

In these situations, those who were responsible for developing the dashboard tend to assume that the cause of the dashboard’s poor reception must lie in its visual design: the colors are wrong, the layout isn’t optimal, the choice of graphs isn’t right, or it’s just too ugly. Indeed, in my experience, poor visual design is usually a contributor to dashboard failure. Through my work designing dashboards for a wide variety of organizations, though, I’ve come to believe that poor visual design isn’t the only cause of dashboard failure, and possibly not even the main one.

What else could be causing so many dashboards to fail?

For now, I’m calling it poor “information organization”. To be clear, I’m not talking about how information (i.e., metrics and other organizational data) is organized on a dashboard, but instead about which information is displayed on which of the various types of displays that are available to decision-makers, including dashboards, hard- and soft-copy reports, interactive search and analysis tools, and the like (all of which I refer to collectively as “information displays”). Basically, the problem is that the right information and interactive features end up on the wrong information displays, and this reduces the effectiveness of all of those information displays --including dashboards.

Why would poor information organization be a cause of dashboard failure?

Because interactive features such as filtering, searching, and drilling down, as well as certain types of information that belong on other types of displays, often end up on dashboards. Putting “non-dashboard” information and interactive features on a dashboard isn’t just a bad practice; it can drastically reduce the effectiveness and usability of the dashboard. Why? Because non-dashboard information and interactive features get in the way of answering the single most common, fundamental question that users need a dashboard to answer: “Is there anything that requires my attention today (or this week, or this month)?” If users need to click through seven product lines, four business units, twelve date ranges, and five regions (each of which has ten sub-regions to drill into) just to see if everything is O.K., they’re not going to do that for very long.

Those who created the dashboard may even realize that at least some of the information or interactive features on their dashboard really shouldn’t be there (although they may not entirely understand why not). What they also know, though, is that their users legitimately do need that information and those interactive search and analysis features in order to do their jobs, so where else should those things go, if not on the dashboard?

New book: Beyond Dashboards

A new book that I’m working on, tentatively titled Beyond Dashboards, aims to answer the “where else should it go?” question by proposing a more useful way to organize an organization’s information onto different types of displays for its decision-makers, thereby eliminating many problems with all of those types of displays --including dashboards. What the book proposes is quite different from the way that many organizations currently organize their information onto displays, since they tend to do so by data source (e.g., a display for information from the CRM system, a display for information from the ERP system, etc.) or by organizational group (e.g., a display for information needed by the marketing department, a display for information needed by the EMEA division, etc.). Some more sophisticated organizations organize information onto displays by role (a regional manager display, a CEO display, a call center agent display, etc.), which is certainly a step in the right direction, but I’ve learned that a few additional types of organization are needed in order to truly solve the user traction and dissatisfaction problems with dashboards, as well as with other types of information displays.

Organizing information by "data need"

The new book lays out a framework for organizing information onto displays based not just on user roles, but also on the various types of “data needs” that users can have. Over the years, I’ve seen that people working within organizations have a surprisingly small number of types of high-level data needs:

  • Status monitoring
  • Problem diagnosis
  • Performance monitoring
  • Lookup/query
  • Guided analysis
  • Exploratory (unguided) analysis
  • Review of canned/pre-configured analyses

(Steve Few and probably others have proposed variants of this list, as well.) Dashboards often fail because they try to meet several of these data needs simultaneously, instead of focusing solely on meeting the “status monitoring” need. When dashboard creators add features and data to dashboards that support non-monitoring needs such as lookup/query, problem diagnosis, or exploratory analysis, the ability of the dashboard to answer the basic “Does anything require my attention?” question always suffers, usually badly. The analogy that I use in the current draft of the book is that of Microsoft trying to support presentation creation, word processing, and spreadsheet editing needs within a single application. It might be possible, but it wouldn’t be pretty. That kind of chimera is what many dashboards are today, including many dashboards that are featured as examples of “good design” by dashboard development platform vendors. In the current draft of the book, I drive home the point that dashboards should meet the status monitoring need --and only that need-- by switching from the broad term “dashboard” to the more specific term “status monitoring display” within the first few pages.

One display per data need

The book then discusses the seven types of high-level data needs in detail so that, when users ask for data or interactive features, readers can recognize what type of need they’re expressing and add the requested data or interactive features to the right display within a well-organized system of displays, each of which is specifically designed to meet one --and only one-- of the seven types of data needs. If such a system of “need-specific” displays is well designed, users will seamlessly jump between displays as they experience different types of data needs, with each display meeting the type of data need that they’re experiencing at that moment.

Feedback needed

Taking a page from one of my favorite authors (pun intended), Dan Pink, I’m going to attempt to crowd-source the editing of this book, and that’s where you come in. In the coming months, I’m going to be posting short excerpts from the new book as blog posts and asking for your feedback. If you hang around long enough, you’ll eventually see all of the major ideas in the book before it’s published (probably late 2018), but I know that you’ll still buy it anyway, so I’m not worried ;-) If you think that any of my ideas are off-base --even the high-level ones that I’ve discussed here-- don’t sugar-coat it. Tell me what’s what. That’s exactly the kind of feedback that I’m hoping for.



Nick