I’ve seen a lot of dashboards that failed to meet users’ and organizations’ expectations. There are many reasons why this happens; in this video and post, I focus on one of the most common: the people who designed the dashboard didn’t fully understand the distinction between status monitoring and performance monitoring. When that’s the case, the dashboards they end up designing serve neither need well. This video and post are based on a chapter from my upcoming book, Beyond Dashboards.
When dashboards fail to gain traction and satisfy users, dashboard creators often blame the visual appearance of the dashboard (colors, layout, fonts, etc.). Based on my experience designing dashboards for many organizations, however, I now believe there's another, possibly even more important cause of dashboard failure: the way that metrics, other data, and interactive analytical features are organized across an organization's dashboards, reports, self-serve analysis tools, and other types of information displays. A new book on which I'm working, tentatively titled Beyond Dashboards, proposes a framework for organizing an organization's information and interactive analytical features more logically across its various types of information displays, thereby eliminating the most common user-satisfaction and productivity problems with those displays.
In March 2016, I guest-wrote an edition of Stephen Few's popular quarterly Visual Business Intelligence Newsletter. The topic was one that came up often enough in training workshops to merit a longer write-up (i.e., a "deep dive"): how to visualize data sets that combine very small values (i.e., close to zero) with very large ones (i.e., far from zero). A standard line or bar chart of such data yields bars or lines that are too small to estimate visually or compare accurately with one another, so the newsletter suggests some creative solutions to address this common challenge.
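To see why such data sets defeat standard charts, consider a quick sketch (hypothetical values and an assumed 300-pixel plot area; the newsletter's own solutions aren't reproduced here). With linear scaling, the near-zero values render at a fraction of a pixel; a log scale, one common workaround, keeps every bar visible, though readers must then interpret ratios rather than absolute differences:

```python
# Illustrating why mixed-magnitude values break standard bar charts.
# Hypothetical data: a couple of near-zero values alongside very large ones.
import math

values = [3, 7, 18_000, 95_000]
chart_height_px = 300  # assumed height of the plot area, in pixels

# Linear scaling: each value's bar height as a share of the largest value.
linear_px = [v / max(values) * chart_height_px for v in values]
# The small values render at well under one pixel -- effectively invisible.
print([round(h, 3) for h in linear_px])  # → [0.009, 0.022, 56.842, 300.0]

# Log scaling: every bar is now large enough to see and compare,
# at the cost of distorting the apparent differences between values.
log_px = [math.log10(v) / math.log10(max(values)) * chart_height_px
          for v in values]
print([round(h, 1) for h in log_px])  # → [28.8, 50.9, 256.5, 300.0]
```

The log scale is only one of several possible fixes, and it carries its own perceptual costs, which is why the newsletter explores more creative alternatives.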
Richard Nisbett's Mindware: Tools for Smart Thinking should be required reading for every university student (or anyone else who wants to make fewer reasoning errors). The book consists of an eclectic but extremely practical collection of "tools for smart thinking", covering concepts as varied as the sunk cost fallacy, confirmation bias, the law of large numbers, the endowment effect, and multiple regression analysis, among many others.
In many modern data visualization software applications, users can hover their cursor or finger over any bar, dot, box, line, etc. to see the exact, textual value(s) of each element. Since this allows users to see exact values whenever they need to know them, does this mean that graph designers no longer need to worry about how precisely values in their graphs can be estimated visually (i.e., without seeing a tooltip)?