Many performance dashboards, scorecards or reports use traffic lights (or RAG scores – red, amber, green) to provide a quick visual indicator of performance levels of KPIs (Key Performance Indicators). Most BI tools have a set of icons that can be used to provide these displays, with the implication that red is poor, amber is at risk and green is good. Some dashboards also include a second level of icons to show the direction of travel, although these combinations can lead to confusion.
The question is, are these icons really helpful or do they risk covering up actual performance issues? Do persistent reds end up being ignored? Are the thresholds for colour changes clear and unambiguous? Are seasonal variations understood?
I sometimes suspect that traffic lights are a lazy way for:
- the management team to avoid having to invest any real effort in understanding the behaviour of the KPIs they should be monitoring; and/or
- the BI team to avoid producing any more meaningful displays.
The challenge then is to find better ways to communicate performance in ways that are easily understandable, making best use of the analytical and presentational tools that are available.
A KPI is typically designed to summarise performance in a single metric. If it has been well designed it is SMART (the acronym has various expansions, but broadly it stands for Specific, Measurable, Assignable, Realistic, Time-related). Even when a KPI has these attributes, however, in other words when it is well designed, it is rarely exploited well. An indicator on its own has limited value. Management needs to be able to understand past performance, monitor current performance and anticipate future performance. They should be able to answer questions such as:
- Are there any significant changes in activity or performance?
- Can their cause be identified?
- Is performance where we need it to be?
- Even if it is good, is it as good as it could be?
- Can future levels of KPIs be anticipated?
- What are the consequences of changes in performance?
A traffic light on its own, even with a direction indicator, does not help answer these questions very effectively. In my experience, vast amounts of effort may have been expended in collecting the data to generate the KPIs, but very little on how to utilise that data fully. However, there are some readily available ways to display these values more effectively.
KPI Display Options
The quickest way to see how a KPI is performing is to display it in a line chart.
The obvious advantage of this graph is that it shows the current value (the green line) just below the target value in the most recent month. This might generate a red traffic light (depending on the rules implemented), but in this graph it is possible to see the changes over the previous months and the relationship to the previous years’ performance. This provides an immediate perspective on the current performance.
The disadvantage is that it takes more space on any report, scorecard or dashboard. The same space could instead carry a wider range of information, for example a KPI broken down by a category such as region. However, it is possible to use simple sparklines (a very small line chart, typically drawn without axes, that presents the trend of a KPI in a highly condensed way) to save space, as shown in this example.
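For readers who build their own reports, a sparkline of this kind is straightforward to generate. Here is a minimal sketch in plain Python using Unicode block characters; the function name and sample values are illustrative, not taken from any particular BI tool.

```python
def sparkline(values):
    """Render a list of numbers as a compact Unicode sparkline string."""
    blocks = "▁▂▃▄▅▆▇█"
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for a flat series
    # Scale each value into one of the eight block heights.
    return "".join(blocks[int((v - lo) / span * (len(blocks) - 1))] for v in values)

# Example: a KPI trending upwards over seven months.
print(sparkline([5, 7, 6, 9, 12, 11, 14]))
```

The same string can be dropped into a text cell of a scorecard, one row per KPI, giving a trend view in a fraction of the space of a full chart.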
However, a well-designed set of five or six KPIs, displayed in clear line charts, possibly with a choice of reference lines (for example previous year, target, budget or benchmark) should be sufficient to display a compelling overview of the current performance of an organisation. An option to drill down to more metrics in specific areas or break down metrics by categories such as region would be a useful addition without compromising the clarity and immediacy of the front page.
There are some notable benchmarking facilities (for example Housemark in the Social Housing sector) which help to contextualise performance against similar organisations. A simple way to display relative performance (at a point in time) is to use a column chart like the one below. This provides a very quick reassurance that current performance is in the middle of the distribution and shows the shape of the distribution of values. In this example it may be that only the top three and bottom five values are significantly different from the mean value, as shown by the horizontal line.
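The underlying calculation of relative position is simple. A minimal sketch, assuming we just count how many comparator values fall at or below our own (the function name is my own, not from a benchmarking service):

```python
def percentile_rank(comparators, own_value):
    """Share of comparator values at or below our own value (0.0 to 1.0)."""
    return sum(v <= own_value for v in comparators) / len(comparators)

# Example: our KPI of 3 against five comparator organisations.
print(percentile_rank([1, 2, 3, 4, 5], 3))
```

A value near 0.5 gives the "middle of the distribution" reassurance described above; values near the extremes suggest the KPI deserves closer attention.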
Comparators can also be based on internal data, with values compared against other parts of the business.
KPI Analytical Options
Rather than simply rely on the users reading a scorecard, an alternative is to generate alerts that draw attention to significant changes or outlier values.
There are two main types of alert: by value or by change. Value alerts are the easier to compute, as they are based on the highest or lowest values of any KPI, in other words outliers. Change alerts are harder to compute and vary depending on the KPI and the type of organisation. Some KPIs are very stable; others vary enormously from month to month, making target setting difficult. The goal is to identify what is called special cause variation, in other words changes that differ from those expected to be driven by seasonal variation or long-term trends on the one hand, or by random variation on the other.
These may be particularly helpful to highlight issues at lower levels of an organisation which would be missed at a high-level scorecard aggregation.
A further concern about reports, scorecards and dashboards dominated by traffic lights is that they are static. Not only do they pay insufficient attention to trends and unusual performance, but they do little to help anticipate future directions. Whilst forecasting is, by its nature, uncertain, that is no excuse for not doing as much as is reasonably possible to anticipate future activity. Some of these points have been explained in more detail in our earlier blog post about predictive analytics. A simple way to display forecasts is to add dotted lines to the end of each trend line.
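The dotted extensions can come from something as basic as a least-squares trend line. The sketch below is deliberately simple and assumes no seasonality; real KPI series would usually need seasonal adjustment before extrapolating.

```python
def linear_forecast(history, periods=3):
    """Fit a least-squares straight line to a series and extrapolate it
    forward by the given number of periods."""
    n = len(history)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(history) / n
    denom = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, history)) / denom
    intercept = my - slope * mx
    # Project the fitted line beyond the last observed period.
    return [intercept + slope * (n + i) for i in range(periods)]

# Example: a steadily rising KPI, forecast two periods ahead.
print(linear_forecast([1, 2, 3, 4, 5], periods=2))
```

Plotting these projected points as a dotted continuation of the solid trend line gives readers a visual cue that they are estimates, not observations.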
Whilst scorecards can provide a quick snapshot of performance, weaknesses are:
- They do not show recent history or comparative performance in the way that some graphs can do more efficiently
- Depending on the level of the aggregation used to construct them, underlying variation in performance can be missed, and alerts might be used to identify these
- They lead to a focus on the current or recent past and may divert attention from anticipating future performance
My hypothesis is that there are enough good data visualisation methods, coupled with advanced BI and analytical tools, to provide much more intelligent reports, scorecards and dashboards. It may require a step change in the analytical comprehension of boards and senior management but the reward is a firmer grip on current activity and a more robust anticipation of future performance.
by Simon Musgrave