Time-based Key Performance Indicators (KPIs) provide significant insight into how quickly your organization responds to the needs of internal and external customers. They tell a story that every leader needs to pay attention to. Which of the following time-based KPIs are being tracked in your company?
- Time to answer a call
- Time to provide a quote
- Time to enter an order
- Time to collect past due invoices
- Time to resolve a complaint
- Time to change a tool or die
- Lead time for delivery to a customer or from a vendor
- Time to assemble a product
- Time to ship an order
If you are capturing this data, are you using “averages” to drive improvement? Averages, the statistic commonly used for countless performance metrics, are easy to compute and understand. Intuitively, people trust that changes in the average reflect real changes in performance. When computed over a long-enough time period, averages become immune to “fluke” results. And as long as the flukes are truly rare and most results fall within a relatively narrow, symmetrical band, the average does a reasonably good job of reflecting performance capability. So is there a problem with this approach?
More often than not, business processes generate time-based results that are not described very well by averages:
- The good results can only be so good; the bad results can be horrid. Once the timer starts ticking, you can’t get better than 0. On the other hand, there are plenty of circumstances that can delay the process for a long time. There is no way to make a “good” reading balance out a “bad” reading; it takes many good readings to do so. (This is the phenomenon otherwise known as “One aw-s$%t cancels a hundred atta-boys.”) The distribution is asymmetrical: heavy on the fast end with a tail on the slow end.
- Instead of one big symmetrical bundle, we get “clusters” of data. While the customer is getting frustrated, time is flying by and we measure it in intervals: same day, 1-3 days, 4-7 days, 1-2 weeks, and so on. Everyone knows it’s “bad” to run into the wall between intervals and throw the issue into the next time bucket, so there will be a rush to wrap things up at the last minute. The result is a “multi-modal” distribution with more than one hump and a tail, like the back of a very odd camel.
- Even some “bad” results are justifiable. What if we need input from the customer before we can respond and the customer doesn’t get around to sending it? What if a request requires investigation into a new raw material, a new method, a new procedure? What if an out-of-the-ordinary concern is raised at the end of the day, just before a weekend or holiday?
It doesn’t seem fair to include those cases in the data, but not including them requires a defined logic and management time for handling exceptions. Soon the measurement process itself becomes cumbersome and there is growing incentive to identify new exceptions rather than focus on process improvement. This isn’t a productive use of management time. Sadly, using the average in these situations will typically generate more heat than light. It doesn’t reflect the good work being done at the “fast” end, and it will put unproductive focus on the “humps” slowing things down. Furthermore, as the data set grows, getting the average to move gets harder. Finally, the average rarely reflects real customer experience… and that is what should matter most!
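To see concretely how an average misleads on right-skewed response times, here is a minimal sketch. The lognormal sample is purely illustrative (the parameters are assumptions, not data from any real process), but it captures the shape described above: heavy on the fast end with a long slow tail.

```python
import random
import statistics

random.seed(42)

# Simulate 1,000 response times (in hours) from a right-skewed
# lognormal distribution: most responses are fast, a few are very slow.
times = [random.lognormvariate(1.0, 1.2) for _ in range(1000)]

mean = statistics.mean(times)
median = statistics.median(times)

# The mean is pulled up by the slow tail; the median stays close to
# what a typical customer actually experiences.
print(f"mean:   {mean:.1f} h")
print(f"median: {median:.1f} h")
```

On a sample like this the mean typically lands well above the median, which is exactly why “the average went up” can hide the fact that most customers are being served quickly.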
“What really matters?”
You probably remember the toothpaste advertisements: “4 out of 5 dentists recommend…” The advertisers created a clear, easy-to-visualize measure for “most of the time”: 80%. Converting our targets to “within X, most of the time” and “rarely worse than Y” resolves the issues with averages and puts greater focus on what really matters to customers: getting a response within a reasonable time and avoiding “legendary failures”. Here is an example: suppose it currently takes an average of just over 18 hours to confirm an order. Sounds awful… but look at the breakdown below to see the real picture. This company could in fact be confirming:
- 50% of orders in 4 hours or less
- 25% of orders in 5-8 hours
- 10% of orders in 9-24 hours
- 8% of orders in 1-3 days
- 5% of orders in 4-5 days
- 2% of orders in more than 5 days
If being competitive means that we must confirm within 8 hours, we are hitting it 75% of the time already! (Sounds much better than “average of 18 hours”, doesn’t it?) Would customers applaud an “aggressive” improvement in the average from 18 hours to half that… say 9 hours? Probably not… most of them are already getting their confirmations in 8 hours or less. The compelling target would be something like:
- At least 4 out of 5 orders (= 80%) will be confirmed within 4 hours
- No more than 2% of orders will require more than 3 days for confirmation.
This approach puts greater focus on improving the processes where it matters most while recognizing that not all orders can be handled at the same speed. The target ‘allows’ for exceptions that put some orders on a slower track, as long as “legendary failure” is avoided. This kind of improvement really does make a difference to customers, and the trend is easy to track.
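The arithmetic behind the example can be verified with a short script. The bucket shares come from the breakdown above; the bucket midpoints are illustrative assumptions (the underlying distribution isn’t given), chosen so the weighted average lands just over 18 hours:

```python
# Order-confirmation buckets: (label, assumed midpoint in hours, share of orders).
buckets = [
    ("<= 4 h",    2.0,  0.50),
    ("5-8 h",     6.5,  0.25),
    ("9-24 h",   16.5,  0.10),
    ("1-3 days", 48.0,  0.08),
    ("4-5 days", 108.0, 0.05),
    ("> 5 days", 240.0, 0.02),  # assumed tail midpoint of ~10 days
]

# Cumulative share: 75% of orders are confirmed within 8 hours.
cumulative = 0.0
for label, midpoint, share in buckets:
    cumulative += share
    print(f"{label:>9}: {share:5.0%} of orders, {cumulative:5.0%} cumulative")

# The weighted average is dominated by the slow tail, even though
# three-quarters of all orders are already fast.
average = sum(mid * share for _, mid, share in buckets)
print(f"weighted average: {average:.1f} h")
```

The point of the exercise: a small tail of slow orders drags the average to 18+ hours while the customer-experienced reality, 75% confirmed within 8 hours, is invisible in that single number.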
The beauty of “targets by percentile”
It isn’t always easy to transition to this way of thinking, but once the mindset of using percentile targets is established, there are two levers to pull when driving for additional improvement:
- The percentage of cases that must meet the target can be pushed up. Instead of hitting the target 4 times out of 5 (an 80% success rate), push for an 85% or 90% success rate.
- The target itself can be strengthened: e.g., aim for confirmation in 3 hours or less.
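Both levers can be sketched in a few lines. The lognormal sample below is an illustrative assumption, standing in for whatever confirmation-time data a company actually collects:

```python
import random

random.seed(7)

# Illustrative confirmation times in hours (assumed distribution).
times = sorted(random.lognormvariate(1.0, 1.0) for _ in range(500))

def success_rate(times, target_hours):
    """Lever 1: the share of cases that meet a fixed time target."""
    return sum(t <= target_hours for t in times) / len(times)

def percentile(times_sorted, p):
    """Lever 2: the time within which p% of cases finish (nearest-rank)."""
    k = max(0, round(p / 100 * len(times_sorted)) - 1)
    return times_sorted[k]

# Lever 1: push the success rate up at a fixed target (e.g. 4 hours).
print(f"share confirmed within 4 h: {success_rate(times, 4):.0%}")

# Lever 2: tighten the target itself by tracking the 80th percentile.
print(f"80th-percentile time:       {percentile(times, 80):.1f} h")
```

Tracking both numbers over time shows whether improvement is coming from serving more cases within the target, from shrinking the target time itself, or both.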
Most companies have created management dashboards designed to keep the critical KPIs in focus. But if the dashboards aren’t populated with the right metrics, the result will be less than satisfactory, frustrating everyone in the organization and failing to deliver the performance improvements that are critical to your success.
Setting effective targets
Talk with Group50’s Performance Management Consultants about the time-based key performance indicators and targets you are setting and the best way to use your targets to drive performance improvement that matters. We offer experience, objectivity, and facilitation skill to help “middle market” firms tighten up on strategy, improve operational execution, and drive better performance.
About the author:
Martha Rollefson is Group50’s Supply Chain Performance Improvement and Quality expert. She has multi-functional experience in the chemical and consumer product industries. Her expertise also includes Lean techniques, customer-focused performance metrics, and system-based solutions as well as the implementation of SAP. Martha and the Group50 team are all former executives from well-known manufacturing and distribution companies who understand what it takes to design and successfully implement a company’s strategic plan. Group50 has designed a series of continuous improvement assessments, workshops and strategic execution tools that will optimize business performance. Call us at (909) 949-9083 or email us at firstname.lastname@example.org.