Scorecarding, a powerful management instrument
Scorecards are a powerful management instrument. They sit at the heart of a Performance Management architecture and display a set of well-chosen metrics, including targets and their status, which allows management to see at a glance how well the company is executing its strategy and derived goals. Because the metrics are selected according to the company's strategy, the scorecard becomes an environment not only to measure the execution of the strategy, but also to communicate that strategy inside the organization. It gets everybody to focus on what really matters.
Scorecards have been introduced in many companies, but not always successfully. In some cases the project has failed because management lost faith in the tool, typically due to a lack of automation or data quality issues. In other cases the scorecard has drifted away from its initial intent of helping to implement strategy and has become a mere monitoring or additional reporting system.
This document discusses some do's and don'ts in implementing and automating scorecards. The idea is not to restate the theory of scorecard projects, but to share lessons learned from the real world, so that the pitfalls common to many scorecard implementations can be avoided. Some of these should already be considered during the business project that precedes the implementation project, i.e. the translation of the strategy into a set of scorecard metrics.
Three layers of scorecards
Scorecards can succeed or fail for a number of different reasons. We distinguish three dimensions in evaluating scorecards:
- Conceptual: the definition of the scorecard and of its KPIs.
- Architecture: the automation and visualization components of the scorecard.
- Process: the way the scorecard is implemented and used as a Performance Management instrument in the organization.
We will discuss our recommended Best Practices, as well as pitfalls to avoid in each of these categories.
The "Conceptual" dimension
Derive KPIs from the strategy
Scorecards are meant to measure strategy execution. The KPIs monitored in the scorecard should therefore each represent a Critical Success Factor of the organization's strategy.
Strategy maps are a way of providing a macro view of an organization's strategy, and give the organization a language in which to describe that strategy before constructing metrics to evaluate performance against it. A strategy map is a visual representation of an organization's strategy that shows the cause-and-effect relationships between critical success factors.
Using strategy maps to describe the company's strategy has the big advantage that the strategic metrics can be naturally derived from the strategy map. If metrics not related to the strategy are included in the scorecard, they might become the focus of undesired or unproductive management actions.
Define SMART metrics
It is quite obvious that the choice of good metrics is vital for the scorecard.
Aside from the fact that metrics need to measure some aspect of strategy, there are a number of criteria that we can use to find out whether a metric is suitable for the scorecard. The paradigm we use for this is SMART:
Specific: The objective is concrete, detailed, focused and well defined. It must be straightforward, emphasize action and the required outcome, and communicate what you want to see happening.
Measurable: If the metric is measurable, it means that the measurement source is identified and we are able to track the actions as we progress towards the objective. A good metric is also measurable at a reasonable cost. If you cannot measure it ... you cannot manage it.
Achievable: The metrics and targets that are set need to be capable of being reached; there should be a likelihood of success, but that does not mean they should be easy or simple. Objectives need to be stretching, and agreed by the parties involved.
Relevant: The metric must be linked to business objectives, and its owner must have the power to influence the result.
Time-based: The time dimension should be specified, not only how often the metric is measured/reported, but also by what time certain targets need to be met.
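A measurable, time-based metric with an achievable target ultimately boils down to comparing an actual value against a target within some tolerance. As a minimal sketch (the function name, field names and thresholds are illustrative assumptions, not part of any scorecarding product), a KPI status could be derived like this:

```python
# Minimal sketch of deriving a KPI status from actual, target and tolerance.
# Names and thresholds are illustrative assumptions, not a product API.

def kpi_status(actual: float, target: float, tolerance: float) -> str:
    """Return 'green' when on target, 'amber' within tolerance, 'red' otherwise.

    `tolerance` is the allowed relative shortfall (e.g. 0.05 = 5%).
    Assumes a "higher is better" metric.
    """
    if actual >= target:
        return "green"
    if actual >= target * (1 - tolerance):
        return "amber"
    return "red"

# Example: monthly revenue of 9.7M against a 10M target with 5% tolerance
print(kpi_status(9_700_000, 10_000_000, 0.05))  # amber
```

Real scorecards also handle "lower is better" metrics and banded targets, but the principle stays the same: the status, not the raw value, is what the scorecard communicates.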
Frequently evaluate KPI definitions
KPIs are often defined at the end of a cumbersome, labor-intensive process involving a lot of management input in the form of interviews, workshops and brainstorming sessions. Because this process cannot easily be repeated every six months, KPI definitions tend to stay around for quite a while, often too long.
An organization and its environment are subject to change, and such change may cause the strategy to change. KPIs, once defined, therefore need regular review to establish to what extent they still accurately reflect and measure the strategy and its execution.
While it is not realistic to expect a full review with all senior management on a frequent basis, the responsible KPI team (see the Process section below) should be able to highlight the KPIs that may be subject to change and need to be re-evaluated or redefined by their respective owners. In general, a yearly review is sufficiently frequent for most types of scorecards and organizations.
Don't define too many KPIs
It is not uncommon for executives or managers to get carried away in a productive brainstorming session and come up with dozens of different KPIs that are relevant for measuring the strategy. This often devalues the concept of "Key" Performance Indicators, which implies a limited set that offers a quick view of the current state of the business compared to its targets. Too many KPIs clutter the scorecard and reduce its usefulness.
A skillful moderator is often required in these sessions to keep the number of KPIs down to a reasonable level. Deriving the KPIs from a strategy map often helps, as a strategy map itself usually does not contain an enormous number of critical success factors.
Another issue with defining too many KPIs is that the cost of measuring them, and therefore of building the scorecarding solution, increases.
Don't confuse the scorecard with a dashboard
This is a topic on which entire books have been written. Furthermore, the difference is not always black-and-white.
Some of the important differences concern the following:
- The metrics on a Balanced Scorecard are a representation of the company's strategy. The metrics on a dashboard can be anything that is interesting to monitor and measure, and typically have a more operational focus.
- Metrics on a scorecard always have a status; it should be clear when action needs to be taken. On dashboards, just visualizing the current value can often be the only aim of the measurement.
- Metrics on a Balanced Scorecard always have an owner, who is responsible for ensuring that appropriate action is taken when needed. On dashboards, ownership is often not required.
- Dashboards tend to be refreshed much more frequently than scorecards; they sometimes show near real-time information.
- A scorecard typically includes functionality for communicating undertaken actions to other users and for following up on those actions.
In summary, a dashboard tells us "How are we doing?", while a scorecard tells us "How well are we doing?". So if you find you are spending a substantial part of your scorecard project budget on collecting and displaying intraday data, think again! You are most probably building an operational dashboard.
The "Architecture" dimension
Build a solid foundation
A scorecard application usually pulls together data from across the entire organization. Implementing or automating a scorecard therefore means integrating information from different data sources. From an architectural best-practice point of view, this is done on top of a solid data warehouse. In fact, a well-defined, automated scorecard often only becomes achievable once the company has a data warehouse covering many different information subject areas.
The main reasons for this are:
- In a multi-source environment, there is a danger that different reporting systems produce different figures for the same metric. A data warehouse allows all Performance Management systems (including scorecards) in the organization to be based on "one single version of the truth".
- A good data warehouse architecture will also help you spot and deal with data quality issues. When the data cannot be trusted, this quickly becomes the one and only problem of the scorecard project (as with any Performance Management project), as all other issues become details in comparison.
- Integrating the scorecard project into a broader Performance Management architecture also allows linking the scorecard to detailed reports and analysis tools. Besides building confidence in the figures, this is essential for getting to the root cause of discovered problems through "drill-down" and analysis over the different business dimensions. This is the only way to understand what actions might bring the KPI back within its tolerances and to initiate change.
- In an optimal Performance Management architecture, the data warehouse will hold not only historic actuals, but also budget figures. Typically (see later), these budget figures become the targets for KPIs in the scorecard. This makes the data warehouse the single location where budgets/targets and actuals are stored at a comparable level, so that when plans, budgets or forecasts are revised, the changes flow seamlessly into the scorecard targets.
If a data warehouse is not yet in place, this is a serious threat to the success of the scorecard automation. While in theory some of the benefits of a scorecard project can be achieved without a full Performance Management infrastructure, including a data warehouse, we recommend building the scorecard automation from the bottom up, starting with the data warehouse.
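The benefit of storing actuals and budget targets at a comparable level can be sketched in a few lines; the store below is an illustrative stand-in for a fact table, and the metric and period keys are made up:

```python
# Sketch: one store holding actuals and budget targets at the same grain
# (metric, period). Keys and figures are illustrative, not a real schema.

facts = {
    ("revenue", "2024-Q1"): {"actual": 9_700_000, "budget": 10_000_000},
    ("revenue", "2024-Q2"): {"actual": 10_400_000, "budget": 10_200_000},
}

def variance(metric: str, period: str) -> float:
    """Actual minus budget, both read from the single shared store."""
    row = facts[(metric, period)]
    return row["actual"] - row["budget"]

# A revised forecast needs only one update; every scorecard comparison
# built on this store stays consistent automatically.
facts[("revenue", "2024-Q1")]["budget"] = 9_800_000
```

The point is not the data structure but the design choice: because targets and actuals live in one place at one grain, a plan revision never has to be re-keyed into the scorecard separately.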
Allow for "multi-dimensional" analysis
On the one hand, the scorecard is meant to be lean: only the key metrics should appear on it, in order to preserve the overview. This allows users to spot the areas where something is going wrong without getting lost in too many metrics.
On the other hand, it should be easy for users to further analyze a trend they have spotted. This is ideally achieved in a three-layer business intelligence architecture.
- The top layer is the scorecard itself, showing the KPIs, their targets and their historic trends.
- The middle layer allows interactive analysis (typically OLAP) of each KPI. Whenever a user wants to investigate the underlying reasons for the status or evolution of a certain KPI, this layer supports slicing and dicing as well as drill-down of the data. For example, a decline of sales in Europe can be analyzed by region/country, product family, customer segment, retail channel and so on.
- The lower layer contains a number of predefined standard reports that show transaction-level detail data, sourced from the data warehouse, for further exploration of KPI values.
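The drill-down behavior of the middle layer can be sketched in plain Python; the sales records and dimension names below are invented for illustration, not taken from any real warehouse:

```python
from collections import defaultdict

# Hypothetical transactional detail, as it would live in the data warehouse.
sales = [
    {"region": "Europe", "country": "DE", "product_family": "A", "amount": 120},
    {"region": "Europe", "country": "FR", "product_family": "A", "amount": 80},
    {"region": "Europe", "country": "DE", "product_family": "B", "amount": 60},
    {"region": "Americas", "country": "US", "product_family": "A", "amount": 200},
]

def roll_up(records, dimension):
    """Aggregate the KPI (sales amount) along one business dimension."""
    totals = defaultdict(float)
    for row in records:
        totals[row[dimension]] += row["amount"]
    return dict(totals)

# Top layer: the single KPI value shown on the scorecard.
total_sales = sum(row["amount"] for row in sales)

# Middle layer: drill down into the European figure by country,
# or slice the whole dataset by product family.
by_country = roll_up([r for r in sales if r["region"] == "Europe"], "country")
by_family = roll_up(sales, "product_family")
```

An OLAP tool does exactly this kind of roll-up, pre-computed and interactive, over every combination of business dimensions; the sketch only shows why the detail data must exist below the KPI.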
A scorecard with well-defined and correct metrics, but without the ability to analyze the relevant business dimensions of these KPIs, loses most of its added value. If users cannot research what is causing a KPI's status, they will not know how to define actions to improve it, and the scorecard as a management instrument for improving performance becomes useless.
Publish KPI metadata
Unless they are very precisely defined, many metrics mean different things to different people.
For example, something as common as "revenue" may be the sum of all sales for the sales manager, but the sum of certain accounts from the general ledger for the finance manager. What is included, what is excluded, and what are the rules for assigning revenue to a certain time period, or a certain division?
Therefore, the business must document, for every metric, exactly how the metric will be calculated (e.g. invoices with invoice-status equal to A and payment-status equal to B). To make sure this definition is complete and unambiguous, the business will most often need the help of a functional expert on the source system and/or a functional analyst from the business intelligence team.
Apart from the definition and calculation, there is other metadata to gather for every metric, such as periodicity, target and tolerance levels, number of decimals, and so on.
A good and complete documentation of every metric ensures that the technical implementation team knows exactly what to do, and that the business users know exactly what they are looking at.
It is good practice to provide a link from the scorecard application to the documentation dictionary, ideally in a context-sensitive way, i.e. the user is taken straight to the information about the metric he or she is looking at. This can be achieved by organizing the company dictionary as a separate document (Word, PDF or HTML page) per metric, and linking each one as a report to the corresponding metric type.
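The metadata described in this section lends itself to a simple machine-readable record per metric. The field names below are a hypothetical minimum derived from the examples above, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class KpiDefinition:
    """Documentation record for one scorecard metric (illustrative fields only)."""
    name: str
    owner: str
    definition: str   # exact business rule, e.g. which invoices count
    source: str       # source system or data warehouse table
    periodicity: str  # e.g. "monthly"
    target: float
    tolerance: float  # allowed relative deviation from the target
    decimals: int     # display precision

# Example record, using the revenue definition from the text.
revenue = KpiDefinition(
    name="Revenue",
    owner="CFO",
    definition="Invoices with invoice-status A and payment-status B",
    source="DWH.FACT_INVOICE",
    periodicity="monthly",
    target=10_000_000,
    tolerance=0.05,
    decimals=0,
)
```

Keeping the dictionary in a structured form like this makes it straightforward to generate the per-metric documentation pages that the scorecard links to, and to keep them in sync with the implementation.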
Don't calculate KPIs manually or in Excel
Excel is arguably the most widespread scorecarding tool in the world. For a number of obvious reasons, it is not an ideal one.
When KPIs are collected and calculated manually in Excel, a lot of information will be missing. Typically, values are entered at the level at which they are required, meaning there is no underlying detail available for the user to analyze the cause of a KPI status, as explained above. Furthermore, entering or calculating KPIs manually is a frequent cause of data quality issues: values are typed in incorrectly, formulas get overwritten, links between Excel sheets get corrupted, sheets are hard to protect against accidental data entry, and so on.
Excel 2007 has made substantial progress in displaying KPI status and trend, but it still lags behind the visualization and communication features offered by professional scorecarding software packages.
Don't over-invest in navigation and look-and-feel
Users often put a lot of attention on the look-and-feel of the solution, trying to make it as "sexy" as possible. While a user-friendly solution may increase adoption and management attention and support, a scorecard needs to be more than that to become a success. Most scorecarding software offers sufficient functionality out of the box, allowing a scorecard project to focus on the things with real added value, such as metadata, data quality, correct calculations and the underlying reporting.
Customization should be kept to a minimum. A scorecard is not an operational application that requires a lot of user interactivity. Typically, the user interactions with a scorecarding application are restricted to:
- Viewing metric values (actuals & targets)
- Viewing metric history (trend analysis)
- Adding/reading comments regarding a metric value (at a certain point in time)
- Adding/assigning/reviewing action items for a metric or KPI
- Further analyzing metric behavior by going into a reporting or analysis module to look at underlying details
This functionality exists in most scorecarding solutions, so front-end customization (to make the end result look even flashier) should not take more than a few percent of your total project budget.
The "Process" dimension
Assign ownership to KPIs
KPIs with targets but without a clear owner will automatically receive less management attention. It is therefore crucial that each KPI has a single owner who is responsible for its value and its evolution towards the target.
When assigning ownership, it is key that (i) the owner has sufficient management responsibility and budget to influence the behavior of the KPI, and (ii) a mechanism is established that links rewards (financial or other) for the owner to the value of the KPI.
The latter in particular is a difficult exercise, because the value of almost every KPI can also be influenced by external factors that the owner does not control. Care should therefore be taken that the rewards are linked to the aspects of the KPI that are under the owner's control. Failure to do so may lead to frustration and lack of motivation on the part of the owner, and ultimately have a negative impact on the way the KPI is managed.
Ensure top-level sponsorship
The implementation of a scorecard should be embedded in a bigger business project that defines the scorecard metrics based on the company's strategy.
This business project should also address the change management challenge. Every manager will have heard of scorecards, but that does not mean they immediately realize what one can do for them. They will need explanation and training on how the company wants to use the scorecard, and sufficient follow-up to make sure that everybody picks up the new way of measuring and collaborating around the company's strategy.
To make sure the change is accepted by the users, clear buy-in and sponsorship from top-level management is essential.
On the other hand, the (Balanced) Scorecard should not be a top-level-only solution. On the contrary, it allows top-level management to communicate the strategy (which metrics are important) to managers further down in the organization, and it empowers these managers by making them the owners of the metrics at their level.
Install a KPI team
Implementing a scorecard is more than a technical project; the change management part should not be underestimated. The initiative needs to be driven by a group of dedicated and motivated people who spread the word within the organization, so that it is not perceived as just a "new toy" for executive management.
Because change management needs to go hand in hand with the technical roll-out of the solution, the team should not only consist of business-side people, but should also contain ICT representatives who can influence the software and data warehouse implementation.
Don't keep targets fixed at all times
In the "Conceptual" chapter we made the case for periodically evaluating KPI definitions (e.g. on a yearly basis).
It is even more important that KPI targets are revisited as well. Ideally, this revision takes place every time the scorecard and its progress against targets is evaluated, and is therefore much more frequent than the review of the KPI definitions.
Reasons for evaluating KPI targets include the following:
- KPI targets often result from budgeting or forecasting exercises. Every time these figures are reviewed during the year, the KPI targets may have to change with them.
- KPIs that continuously show a positive status may have a target that is too easy to achieve, and will therefore never be the subject of management intervention. It is worth investigating whether the target has been set sufficiently realistically and challengingly.
- At the start of a scorecarding project, it is often difficult to predict what a realistic target for a KPI might be. So plan for targets to be reviewed frequently in the first months of the project, to find out whether they are in line with what the organization wants to achieve.
Both from a process and an architecture point of view, and especially for organizations moving to rolling planning and beyond budgeting, this again stresses the importance of embedding the scorecard not only in a BI architecture, but in a full Performance Management (BI & CPM) architecture.
Don't keep the scorecard for management only
Defining a scorecard is definitely a top-down process, where the starting point should be the corporate strategy. It is obvious that the first scorecard should normally be the corporate scorecard that will measure the execution of corporate strategy. But it would be a mistake, or a missed opportunity to say the least, to end the scorecarding initiative at that level.
The scorecard is an excellent instrument to communicate the corporate strategy and to align the levels below the corporate level behind the execution of this strategy. The next step, after a successful corporate scorecard, should be to cascade it down one level and start defining departmental/geographical/… scorecards (depending on your organization structure and responsibility centers). When doing this, care must be taken that the lower-level scorecards have a strong link with the higher level. Even worse than having no scorecards at all is having two departments working in different directions, where department A focuses on cost-cutting while department B is aligned on improving quality.
Certain companies that have reached a high level of strategy-focused maturity find that this works well, and in some cases have even defined scorecards down to the level of individuals.
Conclusion
Few management concepts have gotten so much attention in management books and articles, and at the same time had such a bad track record of failed implementations, as the (Balanced) Scorecard. This is because there are many pitfalls for scorecard implementations.
It is very important to truly understand what a scorecard is, all the elements it incorporates, and its relation to the business. It may sound obvious, but a "Scorecard Implementation" project should follow a "Scorecard Definition" project, i.e. a business project in which the strategy (preferably expressed in a strategy map) and the strategic metrics are identified.
If these precautions are not taken, the implementation of a scorecard will never deliver the expected benefits for the business.
Besides these business-related pitfalls, there are also pitfalls of a more technical nature, the most important being the need to embed the scorecard in a Performance Management architecture built on a solid data warehouse. Taking these into account will ensure that the Balanced Scorecard application remains controllable and maintainable with an acceptable effort.
Hopefully this document, with its advice based on real-life implementation experience, helps you avoid most of these pitfalls and drive your scorecarding initiatives towards their full potential: an efficient execution of your organization's strategy.