
THE ASSUMPTION PRESUMPTION


Why do businesses make key decisions based on assumptions rather than actual data?


It is a question I find myself regularly asking, particularly when working with clients in the financial services sector. Commonplace as it is, the practice is misleading and inherently risky.


Dangerous assumptions

For example, a common assumption relates to service level agreements. The assumption is that the agreed service level equates to a good level of service: if the specified service level is being hit, performance must be good. Data on the achievement of service levels is almost always readily available.


However, when I ask for actual data on what customers predictably ask for, what matters to them and how well the system currently achieves it, it’s a different story. In fact I’ve yet to work with an organisation that could provide that information without an intensive exercise to collect it from scratch.


That’s alarming enough. Even more alarming and damaging is that when customers complain of poor service, the organisation frequently responds by arguing that it must be the customer’s problem – after all, the service level is being met so how could it be the organisation’s fault?


Another common operating assumption is that higher volumes will automatically translate into better results – for example, more outbound calls will result in increased sales or more arrears collected from customers in debt.


In these systems, data about activity is readily available – the ‘dialer spin rates’ (the number of calls made by automated dialer IT systems), how many times the phone is answered and so on. Yet tying that activity to actual outcomes – the type and frequency with which customers bought, and if not why not; the type and frequency with which customers paid, and if not why not – usually requires a whole new exercise, even though these are the important levers for understanding and improving the system.
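To make the distinction concrete, here is a minimal sketch (in Python, with entirely invented records, field names and reason codes) of the difference between counting activity and tying it to outcomes:

```python
# Hedged sketch: all field names and codes below are illustrative
# assumptions, not a real dialer system's schema.
from collections import Counter

calls = [
    {"customer_id": 1, "answered": True,  "outcome": "paid",    "reason": None},
    {"customer_id": 2, "answered": True,  "outcome": "no_sale", "reason": "cannot_afford"},
    {"customer_id": 3, "answered": False, "outcome": None,      "reason": None},
    {"customer_id": 4, "answered": True,  "outcome": "no_sale", "reason": "wrong_product"},
]

# Activity measures: what dialer systems typically already report.
made = len(calls)
answered = sum(c["answered"] for c in calls)
print(f"Calls made: {made}, answered: {answered} ({answered / made:.0%})")

# Outcome measures: what actually happened and, where the customer
# did not buy or pay, why not.
outcomes = Counter(c["outcome"] for c in calls if c["answered"])
reasons = Counter(c["reason"] for c in calls if c["reason"])
print("Outcomes:", dict(outcomes))
print("Reasons for no sale or payment:", dict(reasons))
```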


Assumptions vs data

How and why does this perilous over-reliance on assumptions arise?


One common response is that in a large and complex business real data is hard to collect and calculate. That may be true, but even so it should set alarm bells ringing. Shouldn’t the organisation be focused on learning how to capture and use actual data to make informed decisions, rather than presuming that untested assumptions are the right ones?


Another less obvious and more alarming possibility emerged from a conversation in which a friend relayed to me a question his science-student daughter had posed over dinner: ‘Dad, why is it that in the world of science most hypotheses are disproven, yet it seems from our discussions that most hypotheses in the business world somehow find a way of being proven?’


Good question.


Is there so much pressure in modern business to make the budget and hit the targets that organisations prioritise ‘proving’ hypotheses over doing the hard work of learning from real data how to improve? In other words, if the data suggests that the decision to use the assumption was wrong, the temptation is either to adjust the assumption or to manipulate the system built on it so that the outcome comes out right.


Thus, if the system is failing to hit the service-level target, the easiest solution is to adjust the service level or stop the clock. Or if the ‘abandon-rate’ target is not met, the temptation is to adjust the average handle time – and then be surprised that call volumes increase, because more and more customers call back to pursue problems that were not properly fixed the first time round.


Far from leading to better understanding and hence improvement of performance, this manipulation of the assumptions just serves to further sub-optimise the system.


Worse, the assumptions get translated into apparently objective ‘data’ which seemingly show the organisation achieving the goals it has set itself. As it is passed up the line, the fudged ‘data’ becomes the basis on which more and more management decisions are made.


This is dangerous territory, and much more common than might be expected.


Please don’t misunderstand me. I’m not saying that testing a hypothesis is not important or useful. Quite the contrary, it’s a fundamental principle for a learning organisation.


A better alternative

So what’s the alternative to proceeding by assumption?


The first priority is to get clear about the real purpose of the system, from the customer’s point of view (‘outside-in’, in Vanguard parlance). The second is to adopt measures that are directly linked to this purpose.


In the case of a claims-type system such as motor insurance, measures might include the end-to-end time it takes to fully settle a claim, from the claimant’s point of view, and the type and frequency of barriers to settlement. In many cases settlement delays create extra costs (an extended need for hire cars, for instance), so linking these measures to their actual consequences for cost is crucial.
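As an illustration only, here is a minimal sketch of such purpose-linked measures in Python; the claim records and barrier labels are invented:

```python
# Hedged sketch: record structure and barrier labels are assumptions.
from datetime import date
from collections import Counter

claims = [
    {"reported": date(2024, 1, 3),  "settled": date(2024, 2, 14),
     "barriers": ["awaiting_engineer_report", "hire_car_extension"]},
    {"reported": date(2024, 1, 10), "settled": date(2024, 1, 24),
     "barriers": []},
    {"reported": date(2024, 1, 12), "settled": date(2024, 3, 1),
     "barriers": ["disputed_liability", "hire_car_extension"]},
]

# End-to-end time from the claimant's point of view:
# report of the claim through to full settlement.
durations = [(c["settled"] - c["reported"]).days for c in claims]
print("End-to-end settlement days:", durations)
print(f"Mean: {sum(durations) / len(durations):.1f} days")

# Type and frequency of barriers to settlement; each hire-car
# extension, for instance, carries a direct cost consequence.
barrier_counts = Counter(b for c in claims for b in c["barriers"])
print("Barriers:", dict(barrier_counts))
```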

If measures are unrelated to purpose, not only will they fail to reveal how well the system is doing what it is supposed to do; they will also create a new, de facto purpose that distorts and sub-optimises the system, as above.


Measures to connect actions with consequences

It is also essential to capture and use actual data to learn what your system can predictably achieve, and the variation within it – usually by means of capability charts (for more on this, see the Guide to Creating Capability Charts).
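Capability charts of this kind are commonly built as individuals (XmR) control charts; assuming that form, a minimal sketch with made-up settlement times might look like this:

```python
# Hedged sketch of capability-chart limits for individual values
# (an XmR-style chart); the values are invented settlement times.
values = [38, 42, 55, 31, 47, 60, 44, 39, 52, 48]  # days to settle

mean = sum(values) / len(values)
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Conventional XmR constant: process limits at mean ± 2.66 × the
# average moving range of successive values.
upper = mean + 2.66 * avg_mr
lower = max(0.0, mean - 2.66 * avg_mr)  # elapsed time cannot be negative

print(f"Mean: {mean:.1f} days")
print(f"Predictably between {lower:.1f} and {upper:.1f} days")
# Points outside these limits signal special causes worth investigating;
# points inside show what the system predictably achieves today.
```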


This will suggest opportunities for action that can be tested and measured to understand the relationship between action taken and consequences for performance. It provides a scientific and systematic means for identifying ways to improve performance, by connecting actions with consequences.


Take an arrears-type system: why and how often do customers go into arrears, and what kinds of customers have done so in the past? Understanding why customers fall into arrears enables the organisation to decide where to act on the basis of data, not opinion or assumption. The underlying cause may often lie elsewhere in the system – problems with statements or direct debits, say. In that case the obvious point of intervention is proactive work to reduce the number of customers falling into arrears, and requiring expensive chasing, in the first place. Data over time will then show the consequences of trying different prevention methods.
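A hedged sketch, again with invented cause codes, of how such a cause analysis might be tallied:

```python
# Hedged sketch: the cause codes are illustrative, not a real taxonomy.
from collections import Counter

arrears_cases = [
    {"customer": "A", "cause": "direct_debit_failure"},
    {"customer": "B", "cause": "statement_error"},
    {"customer": "C", "cause": "income_shock"},
    {"customer": "D", "cause": "direct_debit_failure"},
    {"customer": "E", "cause": "direct_debit_failure"},
]

causes = Counter(case["cause"] for case in arrears_cases)
total = len(arrears_cases)
for cause, count in causes.most_common():
    print(f"{cause}: {count} ({count / total:.0%})")

# If most cases trace back to failures elsewhere in the system
# (statements, direct debits), prevention upstream is cheaper than
# chasing arrears downstream.
```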


Ending the numbers game

Acting this way challenges the organisation to use its ingenuity to interpret and act on actual data, rather than spending its time developing elaborate assumptions that risk clouding rather than clarifying opportunities for improving performance. The predictable result is learning where and how to improve performance systematically and sustainably.


So ask yourself: how much of your organisation is spending its time playing numbers games that hide real performance, rather than helping you to learn?


This article appears in Edition One of The Vanguard Periodical: The Vanguard Method in Financial Services. Download it here
