Long-term Forecasting: Test your model for accuracy


Forecasting is as much an art as a science. We rely heavily on our past knowledge when creating a forecast: what does a Tuesday look like on average? What usually happens in the third week of the month? How does next month look in our seasonal trends? Hopefully we will have a good relationship with other departments in the company, so we will know about the marketing that is being mailed out next week or the advert that will be shown during Coronation Street with the new offer. Or IT will have told us that the new system goes live in three weeks, and we have calculated that AHT will rise for a few weeks but will drop in the end as agents get to grips with it. When a new product line is launched we might not have a call history to rely on, but we may have data for a similar product or support line that we can use as a template for the forecast.
 
But inevitably something will happen that we have not planned for. The recent weather has been unprecedented, and while the relevant agencies such as police and transport will have plans in place for weather-related incidents, many of these will have been for more short-term events. Not many would have planned for floods that have already lasted almost two months, with no end in sight for the near future.
 
It would be easy for us to defend our forecast by saying that we could never have planned for that, but we need to learn after the event. We may tag the data so that we can include or exclude it in the next relevant forecast. For example, data from the call patterns during the London Olympics may give us some idea of how the Commonwealth Games in Glasgow may affect calls in July, but we would have excluded this data from the 2013 forecasts.
 
It is also important that we check how accurate our models are for forecasting. For example, what if in the last month a mailing went out that we were not aware of, or a storm caused a power outage in the area? We had not planned for these in the forecast, so unfortunately it was not as accurate as we would have liked. To test the model, we reverse engineer it. Take your model and put last week's data into it as if you were running a forecast, but this time entering the events that you did not know about. If you had been aware of these events in advance, how accurate would your forecast have been? If you had tagged the data in your WFM system, would it have given you a closer forecast? Hopefully the answer is yes, but if not we need to look at the model and see why, so that we can improve in the future. So often models are passed down within the team without much review and we just accept what comes out of them. On our WFM system, have we accurately reflected changes to the telephony system? I sometimes hear people say they don't feel comfortable with the forecast that comes out of a WFM system, but is that because the correct data is not being gathered in the first place?
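As a rough illustration of this reverse-engineering exercise, the short Python sketch below compares the accuracy of the forecast we published with a re-forecast that applies the events we only learned about afterwards. All of the figures, and the simple percentage-uplift adjustment, are made up for the example; your WFM system will have its own way of tagging and applying events. If the re-forecast lands much closer to the actuals, the model is sound and the gap was down to missing information; if not, the model itself needs a closer look.

```python
# Illustrative back-test only: figures and the uplift adjustment are hypothetical.

def mape(actuals, forecast):
    """Mean Absolute Percentage Error across the intervals being compared."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecast) if a > 0]
    return 100 * sum(errors) / len(errors)

# Daily call volumes for last week (actuals) and the forecast we published.
actuals           = [1180, 1240, 1490, 1210, 1150]
original_forecast = [1150, 1205, 1170, 1195, 1140]

# Events we did not know about at the time, expressed as an assumed percentage
# uplift per day (e.g. an unannounced mailing landing on the Wednesday).
event_uplift = [0.00, 0.00, 0.25, 0.02, 0.00]

# Re-forecast as if we had known: apply the tagged event uplift to the model output.
reforecast = [round(f * (1 + u)) for f, u in zip(original_forecast, event_uplift)]

print(f"MAPE of original forecast: {mape(actuals, original_forecast):.1f}%")
print(f"MAPE with events included: {mape(actuals, reforecast):.1f}%")
```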
 

Regular review and reflection can improve our forecast accuracy and, with it, our credibility with the centre.

Alison Conaghan,
Professional Planning Forum,
alison.conaghan@planningforum.co.uk

This article on Long Term Forecasting forms part of a Resourcing Planning Top Tips newsletter produced by the Professional Planning Forum. If you would like to receive this regularly please sign up here.
