It has been said that “if we can’t measure it, we can’t manage it”. But do we have the right measures? And how much do they really help us manage and stay prepared in an ever-changing world? At a recent network group, operational managers and planners looked at how they measured productivity and personal performance. The first speakers were all clear about what they measured, how these metrics were calculated and how they linked to the budget. That’s good, because not everyone could say the same! Yet their front-line teams didn’t like the measures; they weren’t motivated by them. As a result, the measures weren’t trusted, and there was stress and potential conflict. It’s a timely reminder that the true success of any measure lies in the behaviour it drives.
A metric can serve many purposes
“Choose what you target wisely”, advises David Preece at QStory on page 83, “for if you get this wrong, you could be promoting a whole range of undesirable behaviours and outcomes”. Take schedule adherence, where he identifies three common pitfalls. Firstly, people often treat it as a target, rather than as a measure of variance (a ‘tolerance’) that reflects the realistic expectations built into budgets and plans. Secondly, people don’t measure it alongside conformance, yet these are twin metrics, two sides of the same coin. Finally, some people simply ignore it because of all these difficulties, and that only leads to anarchy and confusion.
A measure can serve many purposes – as a diagnostic tool, as a key assumption in your budget or plans, as a target that motivates aspirational performance, or as a way of categorising performance. In the real world, we rarely untangle these different requirements as we need to. When we do, however, people become focussed on ‘what matters most’ in their role. And if we simplify, we remove ‘noise’ and distraction.

Stop to think for a moment about your front-line teams. Are they stressed or motivated by targets that affect their bonus or carry performance-management consequences? Do they need to know which measures you track for your own purposes? What about assumptions in the budget or plan? What behaviours are you looking for if you do share this information? Take average handle time (AHT), which is fundamental to budgets but often leads to headaches, especially when accountability is drilled down to advisor level or when league tables are published. If you understand that you can stop targeting or publishing this data, yet still use it as a diagnostic, then you have a much better way to start getting advisors on board. This is now the norm in most call centre operations. It doesn’t mean the measure is unimportant. It is a critical part of your resource plan and budget. It is useful information in a coaching conversation too, to stimulate curiosity about how to structure customer conversations or how to use systems in new ways.
Diagnostics: understanding variability
With performance, we need to diagnose problems before we can find the best way to solve them. So start by looking at data variability, to spot outliers or trends, and clean up the data on which our plans or recommendations depend. Early in the pandemic, we launched a series of learning modules on descriptive statistics at The Forum to help members with this. In particular, averages are often misleading, yet they are far too common as performance metrics. So dig under the surface to find the confusing numbers, and then explain them. Look for reasons that explain the patterns we observe, and apply a good range of hypotheses to test in your diagnosis.
For instance, in digital or back office operations, special care is needed with actual and planned handle times. “Variances may be down to how individuals tackle the work, or their time being incorrectly logged” explains Graham Watson at Capita, “or there could also be a mix of work types, and you haven’t separated them out yet”. Or as Ollie Harrex at L&G puts it, “I need to understand an individual’s handle time accurately, not to manage their performance, but to be sure I can get the right overall figure for each process.” This means we need people to trust us.
In sensitive areas like productivity or performance, measures that are also targets can be affected by how people record their time. If people start to play the system, own the problem and set targets differently, rather than pointing the finger of blame. On the positive side, gamification can help engage people more, if you understand your role in making this happen. Often it’s good to look at variances from target, and you may use league tables. But try going deeper. Track trends over time on a range of metrics, not one in isolation. Drill down by individual, team or type of activity and customer. Use hotspot and weekly trend reports for tactical quick wins. For leadership, present a big-picture view of granular data, with trends and control charts for instance.
Help people see the difference they can make through changes in areas within their control, and what they need to do to respond to changes outside it. Using if-then models like this helps stakeholders at all levels understand what they can do to make a difference, front-line teams as much as business leaders. This is as important for quality and knowledge teams to undertake as it is for planners or insight teams.
Our responsibilities flow from this need to understand the data. In the workflow planning team at Legal & General, for instance, some manage the stakeholder engagement, some the process optimisation and some the data analysis or system configuration. Some functional teams may lack specific skills and need to draw on others. In smaller teams, one person may have multiple roles. Equally, people may collaborate across teams (to draw on different strengths) and in larger teams you may specialise roles based around an individual’s skill set.
How to use planning assumptions
Assumptions in planning models and budgets are a special case of diagnostic measures, because of how we use them. We build an analytical model of our operating model, which we can use to predict the impact of changes to key variables. For instance, resource requirements are driven by three things (a simple worked sketch follows this list):
- predicted demand (contacts or tasks x handle time)
- forecasts on what will reduce our supply of resource (shrinkages or abstractions)
- decisions about outcomes, like service levels
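As a minimal illustration of how these three assumptions combine into a headline requirement, here is a short Python sketch. The contact volume, handle time, shrinkage and contracted hours are hypothetical, and a real capacity model would layer service-level (e.g. Erlang) calculations on top of this raw workload arithmetic.

```python
# Illustrative sketch only: all figures are hypothetical examples.
WEEKLY_HOURS_PER_FTE = 37.5   # assumed contracted hours per full-time equivalent

def required_fte(contacts: float, avg_handle_secs: float, shrinkage: float) -> float:
    """Convert predicted demand into a headline FTE requirement.

    contacts        -- forecast contacts (or tasks) for the week
    avg_handle_secs -- planned average handle time, in seconds
    shrinkage       -- proportion of paid time lost to holiday, sickness,
                       training, meetings etc. (0.0 - 1.0)
    """
    workload_hours = contacts * avg_handle_secs / 3600            # demand in hours
    productive_hours_per_fte = WEEKLY_HOURS_PER_FTE * (1 - shrinkage)
    return workload_hours / productive_hours_per_fte

# Hypothetical week: 20,000 contacts at 360s AHT with 30% shrinkage
print(round(required_fte(20_000, 360, 0.30), 1))   # -> 76.2 FTE
```

The service-level decision then determines how much buffer to add on top of this base requirement, which is where queuing models come in.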
We also need to analyse other kinds of performance, modelling the factors that drive sales, satisfaction, technology cost, time to competency, and so forth.
It’s the end result that matters. So you will need to join up the data, by modelling cause-and-effect, to predict the impact of changes. This is the foundation for diagnostic insight. Think holistically, to understand both root causes (hindsight) and ultimate effects (foresight). Watch for the impact of changes in the handoffs across the planning cycle too. Many members still analyse and report each assumption independently, but this is to be avoided: the connectedness of our data is what most needs surfacing, with clear actions. How we handle calls will impact future or repeat calls. The time that we allow for advisors’ development, and the methods that we use, can impact handle time or customer satisfaction. Do we understand the key drivers of customer satisfaction? Is it actually service level, speed to answer, or whichever outcome sits in our models?
Furthermore, after our budget is agreed, someone may find and deliver on an opportunity to reduce repeat contacts. If this lowers the overall volume of contacts, our budget forecast is now too high. Equally, if it mainly removes the simpler, shorter contacts, then the budgeted handle time will be too low. For planning purposes, we can re-run the capacity model and adjust, based on the overall impact. However, if we have targets for individual advisors or teams that are affected by this change, we now have a problem – as we do if we’ve set targets on forecast accuracy for planners. Targets distract from the pure process of diagnosis and can make it harder for our operations to adapt quickly. So, in fact, we become demotivated and overall costs rise, with the causes often invisible. Not a desirable outcome, surely?
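To make the arithmetic concrete, here is a small before-and-after comparison with invented figures (not from the article): removing the simpler, shorter contacts cuts volume by more than it cuts workload, so the blended handle time rises.

```python
# Hypothetical example: repeat/simple contacts are eliminated after budget sign-off.
contacts_before = 10_000
aht_before = 300                      # seconds, blended across all contact types

simple_removed = 2_000                # simpler, shorter contacts eliminated
aht_simple = 180                      # these were the quick ones

contacts_after = contacts_before - simple_removed
total_secs_after = contacts_before * aht_before - simple_removed * aht_simple
aht_after = total_secs_after / contacts_after

print(f"Volume falls {simple_removed / contacts_before:.0%}")                # 20%
print(f"Blended AHT rises from {aht_before}s to {aht_after:.0f}s")           # 300s -> 330s
print(f"Workload falls only {1 - total_secs_after / (contacts_before * aht_before):.0%}")  # 12%
```

Re-running the capacity model absorbs this change cleanly; fixed advisor-level AHT targets do not.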
How to focus on improving performance
As professionals, we need to be champions of improvement. This should guide the data we collect, model and report, so that it drives the right actions. Simplicity is absolutely key here and, to the surprise of many, this means more data not less. Certainly, avoid confusing people with lots of targets or reports; this won’t drive improvement. Instead, set up diagnostic dashboards as automated measures, linked together to show the overall impact of time and cost spent on key outcomes. With automation, your analysis really starts to take off because you are not limited by the time it takes to produce it. You can actually measure almost anything.
However, measuring doesn’t mean publishing. To keep things simple and focussed, you need strategy and governance on what is shared. How? When? With whom? Classic change tools, like RACI Charts or Stakeholder Maps, are valuable aids. And you need to engage people, so whatever data you share is understood and trusted. Here’s a good reality check: is your data supporting collaboration? Can you measure the resulting improvements? I recommend starting with a clear list of diagnostic measures, which can be further evolved over time, and a process of engagement around definitions and actions that you want to drive.
Diagnosis is a structured approach to problem solving. You can apply a scientific method (using observed data to test hypotheses), check for statistical significance, and create a single, trusted version of the truth. Above all, the measures in your diagnostic dashboards need to be predictive (see the 4 Stages of Insight), not just to look back at root causes but forward at the impact of changes in key drivers. Furthermore, scenario analysis is emerging as a key tool, where assumptions on a combination of variables form the hypotheses you test. Aim for two things. (1) Can you automate analysis for key scenarios? (2) Can you model causal relationships and link them to your strategic goals in your dashboards and models? Success depends on involving a diverse mix of experience, and great engagement.
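As one way of picturing automated scenario analysis, the sketch below re-runs a deliberately simplified capacity calculation across combinations of assumptions. The function and the assumption ranges are hypothetical placeholders rather than the article’s own model; the point is simply that each combination of assumptions becomes a testable scenario.

```python
# Minimal scenario-analysis sketch: all ranges are hypothetical placeholders.
from itertools import product

def weekly_fte(contacts, aht_secs, shrinkage, hours_per_fte=37.5):
    """Crude weekly FTE requirement from demand, handle time and shrinkage."""
    workload_hours = contacts * aht_secs / 3600
    return workload_hours / (hours_per_fte * (1 - shrinkage))

volumes    = [18_000, 20_000, 22_000]     # demand scenarios
ahts       = [330, 360]                   # handle-time scenarios (seconds)
shrinkages = [0.28, 0.32]                 # shrinkage scenarios

for v, a, s in product(volumes, ahts, shrinkages):
    print(f"volume={v:>6}  aht={a}s  shrinkage={s:.0%}  ->  {weekly_fte(v, a, s):5.1f} FTE")
```

In practice each scenario would feed a fuller model and link through to the strategic goals on your dashboards.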
How to set the right targets
Creating a performance culture that really focusses on exceptions in this way is vital. Firstly, break your targets down, with practical examples of the behaviour you want to drive, and life is immediately simpler, usually with happier people. Yet often we go instead for blanket coverage, without counting the cost or assessing the benefits. Secondly, when you deploy targets, define what is good enough, given that there will be limited time and resource. We can be within budget, with happy customers, even if we are ‘below target’ on many individual drivers. Thirdly, think about the actions you want to drive. If it’s for diagnosis and learning, don’t report it widely. If a measure is averaged over a month (or day), don’t go into overdrive at the end to catch up on low performance earlier on. Well, perhaps with revenue, but not with service level? Fourthly, plan out what you communicate to whom and focus on what matters most. Finally, step back to consider the different types of targets (see box). Which of these motivate the behaviour you need? Then compare them to your existing targets. Where did those come from?

Suppose we target 90% of emails within 24 hours. Do we want exactly 90%? Or at least 90%? Is 88% good enough? Now let’s contrast targets with planning assumptions. Targets are what we want to happen; we should plan on what we expect to happen. So they may not be set at the same level. The expectation can be expressed as an acceptable range of performance (a tolerance level), whereas aspirational targets may be set to push us to be better: achievable but stretching. You need to define the behaviour you are looking for your targets to drive.
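As an illustrative sketch only, echoing the email example above with invented thresholds, this is one way to keep the aspirational target, the planning assumption and the tolerance band distinct when classifying a week’s result.

```python
# Hypothetical thresholds for "90% of emails answered within 24 hours".
ASPIRATIONAL_TARGET = 0.90   # what we want to happen (stretching)
PLANNING_ASSUMPTION = 0.88   # what we expect to happen and budget for (feeds the plan, not this check)
LOWER_TOLERANCE     = 0.85   # below this we treat it as a genuine problem

def classify(actual: float) -> str:
    """Translate a measured result into the behaviour we want it to drive."""
    if actual >= ASPIRATIONAL_TARGET:
        return "at or above aspiration"
    if actual >= LOWER_TOLERANCE:
        return "within tolerance - understand, don't over-react"
    return "below tolerance - diagnose and act"

for week_result in (0.92, 0.88, 0.83):
    print(f"{week_result:.0%}: {classify(week_result)}")
```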
Be aware too that the measures we use for reporting often set expectations, whether or not they are formally set as targets. So be on the front foot with your communication and engagement. Much uncertainty and complexity are removed when you define what it means to be performing at different levels on any measure, why this matters, and what actions follow from this. We’ve seen huge success from people using these principles to relaunch their Quality Frameworks. When colleagues understand thoroughly what matters, and why, a lot of stress is eliminated, and people can be more committed to improvement. From a communications perspective, this is a performance playbook, and such playbooks should be part of the way you define your operating model, with triggers and actions. So, what measures should you include and how do you define them? Let’s finish with some examples.
How to analyse time and productivity
Productivity means many different things to different people. There is no standard recognised measure or consistent definition, and it can be highly emotive, often correlated with mental health issues and stress. Mathematically, productivity can be defined as the ratio between output and input (or value and cost), but these are hard to define and connect. So, rather than attempt this, dashboards and scorecards often contain several important, but unrelated, metrics like FCR, AHT, adherence, quality, compliance, revenue and CSAT. This may be well-intended, as an attempt to manage them and create balance, but in practice it embeds complexity and ambiguity, because we tackle them as independent elements, with all the problems that brings in its wake. Instead we need to explain how these factors combine, to drive ultimate productivity. That way we simplify and empower. That’s our responsibility as analysts, managers, and planners.
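As a minimal sketch of the ratio idea, with invented figures and a deliberately crude definition of ‘output’ as first-time-resolved contacts, the example below expresses productivity as value delivered against cost incurred, rather than as a list of unrelated metrics.

```python
# Illustrative only: figures and the definition of "value" are hypothetical.
resolved_contacts = 7_400        # contacts resolved first time this week (output/value)
total_contacts    = 9_000
paid_hours        = 2_600
hourly_cost       = 18.50        # assumed fully loaded cost per paid hour (£)

value = resolved_contacts                 # output: customer needs resolved
cost  = paid_hours * hourly_cost          # input: what that capacity cost us

print(f"Cost per resolved contact: £{cost / value:.2f}")
print(f"First contact resolution:  {resolved_contacts / total_contacts:.0%}")
```

The individual metrics (FCR, AHT, adherence and so on) still matter, but as drivers that explain movements in this ratio rather than as independent scorecard items.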
Time is the biggest driver of cost in most customer operations, so let’s look at time utilisation first. Be sure to measure 100% of people’s time, and carefully analyse variances to clean up the data and categorise it in useful ways. Distinguish between being ‘active’ and ‘productive’: there is no value in just being busy if we are not creating value. Explain how the time is used versus its purpose. You can see that success lies as much in how we engage and communicate as in how we visualise numbers. Typically we will need to look at where time is spent in four ways (a simple sketch follows this list):
- Out of the workplace, with time off for various reasons and other absences, some of which can be accurately predicted.
- Offline time, including training, coaching, meetings, 1:1s, comms, which can usually be planned for and tracked.
- Personal time for meals/breaks and other reasons, or ‘lost time’ tracked via conformance and adherence measures. Systems are key to how we manage this, and how well we do it has a big impact on how people feel.
- Work activities by type, measuring handle times for tasks or contacts, and time for travel (in field ops) or gaps in work (like available, hold or transfer time in the contact centre).
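Purely as an illustration of accounting for 100% of paid time across these four categories, here is a short sketch with hypothetical weekly hours for one team; in practice the figures would come from your WFM and activity data.

```python
# Hypothetical weekly totals for one team; real data comes from WFM / activity logs.
paid_hours = 1_500.0

time_categories = {
    "out of the workplace (holiday, sickness, other absence)": 210.0,
    "offline (training, coaching, meetings, 1:1s, comms)":     165.0,
    "personal time / lost time (breaks, adherence gaps)":      140.0,
    "work activities (handle time, available, hold, travel)":  985.0,
}

# The whole point is that every paid hour is accounted for somewhere.
assert abs(sum(time_categories.values()) - paid_hours) < 1e-6, "account for ALL paid time"

for category, hours in time_categories.items():
    print(f"{category:<58} {hours:7.1f}h  {hours / paid_hours:5.1%}")
```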
The challenge is to ensure that each category of time is used effectively, and that work is completed in the most productive way. You need diagnostic measures in all these areas, able to drill down and compare variations by journey, channel, individual/team and other ‘metadata’. Of course, the way you use this information will vary, from front-line teams to product owners, people managers and senior leaders. Looking at individual elements in isolation can help you diagnose issues like holiday booking or slow and complex systems, and then the overall impact can be analysed if you have the right models. However, when it comes to managing productivity, the sad fact is that many targets do not drive the behaviour we need. So at the least we need to strip them down, possibly even start afresh. At the same time we need to map the cost-value-chain, for a truer measure of productivity. The starting point lies in understanding contact drivers, but we can go much further once the basics are in place, with many root causes lying beyond the contact centre.

The idea of a ‘value-chain’ comes from marketing and is connected to failure demand in Systems Thinking. Analysis of cost by type of customer journey has been pioneered by firms like Amazon and will be familiar to Forum members from the work of Peter Massey at Budd (“The best service is no need for service”). Here ‘skyline graphs’ and metrics like ‘cost per customer’ or ‘cost per £ sold’ are better productivity metrics than ‘cost per contact’, which rewards operations with many repeat contacts. There is great work, however, by members like EE and BT, showing how cost per contact can drive improvements (case studies in 2016 and 2018). For strategy decisions you will want to overlay this data with projected volumes, propensity to buy and lifetime value.
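To show why the denominator matters, here is a comparison with invented figures: an operation with many repeat contacts can look cheaper per contact while being more expensive per customer.

```python
# Hypothetical comparison of two operations serving the same number of customers.
def metrics(customers, contacts, operating_cost):
    """Return (cost per contact, cost per customer)."""
    return operating_cost / contacts, operating_cost / customers

# Operation A: many repeat contacts; Operation B: fewer, better-resolved contacts.
cpc_a, cpcust_a = metrics(customers=10_000, contacts=25_000, operating_cost=125_000)
cpc_b, cpcust_b = metrics(customers=10_000, contacts=15_000, operating_cost=90_000)

print(f"A: £{cpc_a:.2f} per contact, £{cpcust_a:.2f} per customer")   # £5.00, £12.50
print(f"B: £{cpc_b:.2f} per contact, £{cpcust_b:.2f} per customer")   # £6.00, £9.00
```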
Focus on customer & colleague experience
Two types of change actually power improvement, as the improvement cycle demonstrates, and our performance culture needs to embed this focus, setting up the data journeys, handoffs and learning tools that enable it:
- Process or job design, which is the way we make customer or colleague journeys and lifecycles easy and engaging.
- Changes in individual behaviour, for colleagues or customers, which is where people meet numbers.
We can model the data on this and track it. That’s the specialist capability we can bring to the table, as planning, insight, quality or knowledge teams and operational managers. In addition, some people add technology as a third driver; it is certainly a key enabler of both. Of course, systems in themselves will not drive improvement. We need to operationalise technology, by making visible the links into process design and colleague or customer behaviour. Technology will enable changes to customer or colleague journeys (process) or behaviour.
There are some simple changes you could kick off with. Whenever you look at quality or handle time, think about process from the perspective of colleague and customer. Is the process followed? Is it efficient and effective? Are improvements needed? Do you have the data to support this? Ideally you build checks around this into the process itself, rather than relying only on audits after the event. And you need to consider the role of the customer, with digital contact now part of half of all customer journeys. Net Promoter data can also be used to drive transformation in customer operations; there is a great example from BT-EE and Plusnet (2022 case study), where the data is categorised by customer journey or process, as well as by the accountable department or channel, so that there is a really clear focus on what needs fixing. This is akin to the work mentioned earlier on skyline graphs and the cost-value-chain. Again, many of the root causes lie outside the customer operation, so a company-wide approach is essential if you are to tackle the issues and drive improvements. With customer surveys, people often derive really useful insight from the verbatim comments, which are also great for adding the emotional colour that makes the experiences real for people.
Defining your performance culture
As well as distinguishing diagnostic measures and planning assumptions from targets, you need clear discipline about how you share information, and an operating rhythm that embeds this. A great performance culture doesn’t happen by chance; it needs purpose, design and belief. For instance, it’s good to keep discussion about operational matters in the groups that can actually take the actions required, so elevating operational metrics to the board is a warning sign. Certainly, metrics like service goals or absence/attrition need to be discussed at all levels as part of the corporate budget process (including budget reviews). And it’s good to set performance objectives that hold managers to account for the assumptions built into budget scenarios and agreed operating models. But we muddle the situation when we share the same data and metrics with everyone – the old-school approach – and this has serious consequences.
If we are brutally honest, we often have only ourselves to blame if there is confusion around these matters. Yes, sometimes the problem lies in particular relationships or cultures that prevent the right information breaking through to decision makers. However, analysis of our best-in-class examples makes one thing clear: a good first step is always to focus on your framework of diagnostic measures, and to build clear definitions of your key measures into a performance playbook or interactive dashboard that helps people understand why each one matters and what to do as a result.
Authors: Paul Smedley & Ian Robertson
This article was first published in the 2022 Best Practice Guide - Your Moment of Truth: Confident to Succeed