
How can we forecast with no data?


Typical forecasting models rely on historic data (‘time series’) or established cause-and-effect models (‘explanatory models’) for key drivers, such as sales forecasts, time-to-competency and so on. In ordinary times, we understand our key drivers and can use historic data to test our models. Adjustments can be made to historic data to take out outliers, by ‘normalising’ or cleansing the data sets. Yet, at other times, the data we have is simply not the right place to start.

This is the case this year, when the assumptions on which we’ve traditionally built our predictions are no longer true, due to the pandemic. This week may no longer be like last week, and this April won’t follow the pattern of previous Aprils. Likewise, we make assumptions based on customer and employee behaviour. Yet working patterns for our customers, and our own processes and systems, have changed significantly over the last year and will continue to change for the foreseeable future.

As organisations we have been in this situation before. When we first opened an operation or a new channel, we had no historical data. Yes, it was challenging, but we overcame that challenge to be where we are today. When we do something for the first time, we don’t have any directly comparable historic data. With the right approach, however, there is still a lot we can achieve. Again, there is more detail on this in our on-demand Learning Academy modules.

The wisdom of crowds

In the 4th Century BCE, Aristotle said “it is possible that the many, though not individually good men, yet when they come together may be better, not individually but collectively”. This wisdom of crowds is the foundation of democracy and trial by jury. Over the years, this principle has been tested statistically. 

For instance, at a 1906 country fair in Plymouth, 800 people participated in a contest to estimate the weight of an ox. Statistician Francis Galton observed that the median guess, 1207 pounds, was accurate within 1% of the true weight of 1198 pounds. The principle is that the outliers on one side start to cancel out the outliers on the other. So, in theory, if we were to ask enough people to make an educated guess on how many customers will contact us next week then the result should be close to reality.
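
To see the cancelling-out effect in action, here is a minimal Python simulation of a Galton-style guessing contest. The spread of individual guesses is an assumption chosen purely for illustration; the point is that the median of many independent estimates lands close to the truth.

```python
import random

random.seed(42)

TRUE_WEIGHT = 1198  # pounds, as in Galton's ox example

# Simulate 800 fair-goers: each guess is the true weight plus
# individual noise (the 120 lb spread is an illustrative assumption).
guesses = sorted(random.gauss(TRUE_WEIGHT, 120) for _ in range(800))

# The median is robust: extreme guesses on either side cancel out.
median_guess = guesses[len(guesses) // 2]

error_pct = abs(median_guess - TRUE_WEIGHT) / TRUE_WEIGHT * 100
print(f"Median of 800 guesses: {median_guess:.0f} lb ({error_pct:.2f}% off)")
```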

Like many analytical techniques, this can be subject to bias and faulty data. Any output is only as good as the information on which people base their estimates. If a large part of your audience holds beliefs based on incorrect information, the wisdom of the crowd will still be wrong. So, for the wisdom of crowds to work, we need a well-informed crowd that is not being influenced by external factors. To use this technique, we need to find ways to remove or mitigate misinformation and influence.

One recent approach to tackling this is Dialectical Bootstrapping. People are asked to make two estimates: the first is based on their own assumptions; they then discuss their estimate with somebody holding conflicting views and make a second estimate based on those different assumptions. The theory is that the average of the two estimates will be closer to the truth.
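
As a rough sketch of why this averaging helps, the simulation below assumes the two estimates carry partly offsetting biases; the bias and noise figures are invented for illustration, not taken from the research.

```python
import random

random.seed(1)

TRUE_VALUE = 10_000  # hypothetical: contacts expected next week

def mean_abs_errors(trials=1_000):
    """Compare a single estimate with the average of two estimates."""
    single, averaged = 0.0, 0.0
    for _ in range(trials):
        # First estimate: own assumptions (assumed to run high here).
        first = TRUE_VALUE + random.gauss(800, 1500)
        # Second estimate, made after hearing a conflicting view,
        # so its error is partly independent (assumed to run low).
        second = TRUE_VALUE + random.gauss(-600, 1500)
        single += abs(first - TRUE_VALUE)
        averaged += abs((first + second) / 2 - TRUE_VALUE)
    return single / trials, averaged / trials

single, averaged = mean_abs_errors()
print(f"Mean error, first estimate only: {single:.0f}")
print(f"Mean error, average of both:     {averaged:.0f}")
```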

An alternative, called Surprisingly Popular, was developed by scientists at MIT. People are asked two questions: what they themselves think, and what they expect others to think. The best answer is then the one that turns out to be more popular than people predicted it would be. The power of this approach is that it gets people to recognise that there are views different from their own, and to consider them.
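
A toy sketch of the selection rule, for a binary question with made-up responses: each person gives their own answer plus the share of others they expect to say ‘yes’, and the answer that beats its predicted popularity wins.

```python
from collections import Counter

# Hypothetical poll: "Will contact volumes rise next week?"
# Each respondent gives (own_answer, predicted_share_saying_yes).
responses = [
    ("yes", 0.3), ("no", 0.8), ("yes", 0.4), ("no", 0.7), ("yes", 0.35),
    ("no", 0.75), ("yes", 0.3), ("no", 0.8), ("yes", 0.4), ("no", 0.7),
]

votes = Counter(answer for answer, _ in responses)
actual_yes = votes["yes"] / len(responses)
predicted_yes = sum(p for _, p in responses) / len(responses)

# An answer is "surprisingly popular" when it gets more support
# than the respondents predicted it would.
surprise = {"yes": actual_yes - predicted_yes,
            "no": (1 - actual_yes) - (1 - predicted_yes)}
winner = max(surprise, key=surprise.get)
print(f"Actual yes share {actual_yes:.0%}, predicted {predicted_yes:.0%}")
print(f"Surprisingly popular answer: {winner}")
```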

The Delphi method

This brings us on to the Delphi Method, a structured communication technique that brings together many of these principles. It is a way of forecasting when you have no reliable historic data or model. Instead, we ask a panel of subject matter experts to write down what they think will happen and why, documenting their assumptions.

A facilitator collates the responses into a single document and shares it with the subject matter experts; they then revise their assumptions, taking the other opinions into consideration, and document them again. The facilitator again compiles and shares. These steps can be repeated an agreed number of times or until consensus is reached. The forecaster then uses the agreed assumptions to build a forecast.
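
As a minimal sketch of the facilitator’s job between rounds, the code below summarises each round’s estimates and stops when the panel’s spread falls below a threshold. The panel figures and the consensus threshold are invented for illustration; real Delphi rounds also circulate the written reasoning, not just the numbers.

```python
from statistics import median, pstdev

# Hypothetical panel forecasting next week's contact volume,
# one list of estimates per Delphi round.
rounds = [
    [9500, 14000, 11000, 8000, 16000],    # round 1: own assumptions only
    [10500, 12500, 11000, 10000, 13000],  # round 2: after reading others' views
    [11000, 11500, 11200, 10800, 11600],  # round 3: converging on consensus
]

CONSENSUS_SPREAD = 500  # assumed stopping threshold

for i, estimates in enumerate(rounds, start=1):
    mid, spread = median(estimates), pstdev(estimates)
    print(f"Round {i}: median {mid:.0f}, spread {spread:.0f}")
    if spread <= CONSENSUS_SPREAD:
        print("Consensus reached; use the median as the forecast.")
        break
```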

The key elements are: anonymity of participants, a structured flow of information, regular feedback and a specialist facilitator. Judgmental forecasting is subjective, so it comes with limitations, and we need to manage these with systematic, well-structured approaches. This will be key to forecast accuracy and usefulness. In particular, the Delphi process is designed to be anonymous, avoiding undue influence and reducing bias; that way every idea is judged on its own merit, not by the influence of its author. You can also apply the Delphi approach in meetings (‘mini Delphi’), but you need to be careful to avoid undue influence from particular individuals, for instance the ‘HiPPO’ (highest paid person’s opinion), the most forceful voice, the most ‘expert’, or the one who talks the most.

For the Delphi Method to be successful you need to capture expertise at every level of the organisation, from frontline advisors to senior leadership; they all have knowledge that will enhance the forecast. If we gather people in one room, how will a frontline advisor tell a director that they are talking rubbish? And how do you stop people automatically taking a contrary view, as in politics, simply because of who is saying it?

The Known Unknowns model

No matter how well we plan, there is the risk that something unexpected will happen. Mostly, things are more predictable than we think, but there is always something to surprise us, as the year of the pandemic has well and truly demonstrated. Our famous Known Unknowns model at The Forum is a tool to help you remove potential blind spots. You can use it to plan for unprecedented situations, but also to learn, in regular review cycles. Remember, each time something unexpected happens it becomes ‘known’. If we learn about it, it shouldn’t catch us unawares a second time. You need to capture this information in a way that makes it easy to use.

Three questions help us map out the matrix: what might happen, when will it happen, and what will the impact be? Our blind spots are the Unknown Unknowns, in the bottom left of the matrix. We have no idea what these things are, so they could hit us like a dinosaur-killing asteroid out of a clear blue sky. As we move along either axis, we learn more about either the impact of the event or its timing. These are the Known Unknowns: we know something about these drivers, but not everything. The things we can be most confident in planning for are in the top right. These Known Knowns are the areas in which we should be able to be most prepared. You may not have all the answers, but by involving as many people as possible in thinking about this you can become more aware of what you don’t yet know. Using this to kick off the Delphi process can really focus the panel, potentially cutting the number of subsequent iterations.
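
One simple way to capture this is a small risk register that scores how much you know about each event’s timing and impact, then maps it to a quadrant. The 0-to-1 scores, the threshold and the example events below are illustrative assumptions, not part of the model itself.

```python
# Hypothetical risk register: how much do we know about each event?
# Scores run from 0 (no idea) to 1 (well understood) -- an assumed scale.
risks = [
    {"event": "Bank holiday spike",      "timing": 0.9, "impact": 0.8},
    {"event": "New product mailing",     "timing": 0.7, "impact": 0.4},
    {"event": "Regulatory announcement", "timing": 0.2, "impact": 0.6},
    {"event": "Severe weather outage",   "timing": 0.1, "impact": 0.2},
]

def quadrant(risk, threshold=0.5):
    """Place a risk in the Known/Unknown matrix."""
    knows_when = risk["timing"] >= threshold
    knows_what = risk["impact"] >= threshold
    if knows_when and knows_what:
        return "Known Known: plan in detail"
    if knows_when or knows_what:
        return "Known Unknown: investigate the missing axis"
    return "Unknown Unknown: blind spot, widen the conversation"

for risk in risks:
    print(f"{risk['event']:25s} -> {quadrant(risk)}")
```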

Capture learning to build reliable data

These approaches can be combined to improve the quality of the data we capture, and our understanding of it. The first step, for a scientific approach, is to create hypotheses to test. These may be time patterns or cause-and-effect relationships, e.g. Mondays are 20% busier, or mailings generate digital contact as well as phone calls. If you have relevant historical data, you can use it to test the hypothesis and loop that insight back into the process. If you don’t have any data yet, you will need to start forecasting from the agreed assumptions, using the methods above. The key is to start collecting the data that will allow you to test them. Then review and revise your assumptions once you have sufficient historical data to do so.
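
A minimal sketch of that test-and-revise loop, using the ‘Mondays are 20% busier’ hypothesis: the daily counts and the tolerance band are made up, and a real review would also check sample size and seasonality.

```python
from statistics import mean

# Hypothetical daily contact counts collected over four weeks.
mondays = [1260, 1190, 1310, 1240]
other_weekdays = [1010, 980, 1050, 1020, 990, 1040, 1000, 1030]

uplift = mean(mondays) / mean(other_weekdays) - 1
print(f"Observed Monday uplift: {uplift:.0%}")

# Hypothesis agreed through the Delphi process: Mondays ~20% busier.
HYPOTHESIS = 0.20
TOLERANCE = 0.05  # assumed acceptance band

if abs(uplift - HYPOTHESIS) <= TOLERANCE:
    print("Data supports the assumption; keep it in the forecast model.")
else:
    print("Revise the assumption and revisit it with the panel.")
```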

Author: Ian Robertson

Date Published: 27th April 2021

This article was first published in the 2021 Best Practice Guide - Unlocking Opportunities: You are the Key


