This excerpt from my upcoming book, Beyond Dashboards, is the first in a seven-part series of posts on how to determine which metrics to visually flag on a dashboard (e.g., with alert dots or different-colored text) in order to draw attention to metrics that require it. In this post, I briefly discuss why visual flags are almost essential if dashboards are to deliver the user traction, satisfaction, and value that organizations hope for and expect, and why a lack of visual flags has contributed to the failure of many dashboards.
I frequently encounter the misconception that, for a given set of data, it’s possible to design a chart that will be useful regardless of the audience or the reason why that audience needs to see that data. Such “general-purpose” charts don’t exist, though: any visualization of a given data set will inevitably serve some audiences and purposes well and others poorly. To create a useful chart, then, the target audience and the reason(s) why that audience needs to see the data must be identified beforehand.
People often assume that beautiful, artistically impressive charts are highly informative, but they usually convey information far less effectively than simple, “boring” charts. If we consider such charts to be “data art,” there’s no issue with them, since their main purpose isn’t to be informative. The problem arises when, as many people do, we treat them as informative charts and expect them to perform a job for which they’re poorly suited.
I’ve seen a lot of dashboards that failed to meet users’ and organizations’ expectations. There are a variety of reasons why this happens and, in this video and post, I focus on one of the most common: the people who designed the dashboard didn’t fully understand the distinction between status monitoring and performance monitoring. When that distinction is missed, the resulting dashboards don’t fulfill either need well. This video and post are based on a chapter from my upcoming book, Beyond Dashboards.
When dashboards fail to yield traction and satisfaction among users, dashboard creators often blame the visual appearance of the dashboard (colors, layout, fonts, etc.). Based on my experience designing dashboards for many organizations, however, I now believe that there's another, possibly even more important cause of dashboard failure: the way that metrics, other data, and interactive analytical features are organized onto an organization's dashboards, reports, self-serve analysis tools, and other types of information displays. A new book on which I'm working, tentatively titled Beyond Dashboards, proposes a framework for organizing an organization's information and interactive analytical features more logically onto its various types of information displays, thereby eliminating the most common user satisfaction and productivity problems with those displays.
In March 2016, I guest-wrote Stephen Few's popular quarterly Visual Business Intelligence Newsletter. The topic was one that came up often enough in training workshops to merit a longer write-up (i.e., a "deep dive"): how to visualize data sets that combine very small values (i.e., close to zero) with very large values (i.e., far from zero). A standard line or bar chart based on such a data set renders the small values as bars or lines that are too small to estimate visually or to compare accurately with one another, so the newsletter suggests some creative solutions to this common challenge.
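To make the problem concrete, here's a minimal sketch (my own illustration, not taken from the newsletter) that computes how tall each bar would be, in pixels, if such a data set were drawn as a standard linear-scale bar chart. The sample values and the 400-pixel chart height are assumptions chosen purely for illustration:

```python
# Illustration: why mixing near-zero and very large values breaks a
# standard bar chart. Sample values and chart height are assumptions.
values = [3, 7, 12, 45_000]  # a few near-zero values plus one very large one
chart_height_px = 400

# In a linear-scale bar chart, each bar's height is proportional to its value.
bar_heights = [v / max(values) * chart_height_px for v in values]
for v, h in zip(values, bar_heights):
    print(f"value {v:>6}: bar height ≈ {h:.2f} px")
```

The three small values all render at well under one pixel tall, so they can't be visually estimated or compared with one another; that's exactly the situation the newsletter's solutions are designed to address.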
Richard Nisbett's Mindware: Tools for Smart Thinking should be required reading for every university student (or anyone else who wants to make fewer reasoning errors). The book consists of an eclectic but extremely practical collection of "tools for smart thinking", covering concepts as varied as the sunk cost fallacy, confirmation bias, the law of large numbers, the endowment effect, and multiple regression analysis, among many others.
In many modern data visualization software applications, users can hover their cursor or finger over any bar, dot, box, line, etc. to see the exact, textual value(s) of each element. Since this allows users to see exact values whenever they need to know them, does this mean that graph designers no longer need to worry about how precisely values in their graphs can be estimated visually (i.e., without seeing a tooltip)?