2021 Customer Strategy & Planning On-Demand

Click on the sessions below to watch the Keynotes, Workshops & Technology Showcases. Full access to these videos is part of your Forum Membership. To access these, register or log in using the My Account link at the top of this page. If you do not have the access you expect, please email advice@theforum.social.

Liquorice Consistents – Does somebody at Bassetts not like me?

Published on 11 May 2022


At our Customer Strategy & Planning Connections conference in Newcastle on 26th April ’22 we explored probability using Liquorice Allsorts.

Phil Anderson has spent many a sleepless night worrying about Liquorice Allsorts. He worries about why they are Allsorts. Why are the packs different; why can’t we have Liquorice Consistents, where every pack is the same? This set him wondering whether the packs are truly random or whether there are patterns. Most people would have kept these thoughts to themselves, but Phil decided to share his out loud, and the next thing we knew we had 17kg of Liquorice Allsorts and an idea for a conference exercise.

It was just after lunch and we decided to wake the room up with a little competition – the first person to bring me a Bertie Bassett sweet would win a prize. However, it turned out this competition was unfair to many in the audience.

  • Many didn’t have any Bertie Bassetts in their bag so didn’t have a chance! We had based the competition on an average bag, but the bags are all different. Average is not the same as fair.
  • Not everybody had a bag as there were fewer bags than seats at the table. We recognise that not everybody enjoys Liquorice Allsorts and we wanted to minimise waste. So, some people didn’t open a bag because they don’t like them, some people just didn’t get a bag and some just couldn’t be bothered. None of these had a chance of winning. Maybe they felt excluded – was that fair?
  • The competition was won by somebody on the front row, the people at the back had no chance. Was that fair? Many of these will have spotted the obvious flaw and decided not to participate.
  • I also noticed that the winner took the Bertie Bassett from the person next to them. We set a goal that was easiest to achieve by cheating – was that fair on everybody who played by the rules?
  • When we set the competition, we had a great prize on the table. We didn’t say that was the prize, but we let people think it was. The actual prize was another pack of Liquorice Allsorts. Was that fair?

The reality is that many of the errors that made our competition unfair are mirrored in our own organisations when we set targets and incentives.

  1. Do we focus on the average person and ignore everybody else?
  2. Do our incentives exclude large parts of our workforce?
  3. Do we demotivate parts of our workforce by setting goals that they could never achieve?
  4. Do we set targets that drive the wrong behaviours?
  5. Do we offer unachievable incentives?

If we are not careful then targets and incentives can be counterproductive. We can take a lot of learning from this competition, but the main purpose of this session was to explore statistical variation.

We wanted to understand the distribution in the packets of Liquorice Allsorts. I also wanted to answer the question: does somebody at Bassetts dislike me?

When we originally discussed the competition element, I was sceptical, as I had never seen a Bertie Bassett and had no idea these existed. I put this down to the fact that I tended to buy the cheap knock-off supermarket own-brand Liquorice Allsorts. But when I was setting up the workshop, I bought 12 bags of the real thing, opened them all and didn’t get a single one. Was this a coincidence, or were there other forces at play? We needed more data to better understand this. In fact, when Phil and I compared the average content of our packets of Liquorice Allsorts, we saw very different distributions.

So, in order to understand what was happening, we needed a bigger sample and by getting people to document the contents of their packets we added 94 more data sets to our analysis. This is what we found…

The first thing I noticed was that no two people got the same packet; every packet was unique. But also, nobody got the average packet.

This is interesting because we design so many of our strategies and plans for the average person, but the average person doesn’t exist. For example, we could do a survey on whether people prefer to work from the office or at home. If 60% of employees say they prefer working from home and 40% say they prefer the office, then the average person wants to be in the office 2 days a week (40% of a five-day week), even though nobody in the survey said they wanted this. By trying to create a one-size-fits-all approach we have found a solution where one size fits nobody.
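If you want to check the arithmetic, here is a quick sketch in Python. The survey figures are the hypothetical ones above, not real data:

```python
# Hypothetical survey: 60% prefer home (0 office days), 40% prefer the office (5 days)
prefer_home, prefer_office = 0.60, 0.40

average_office_days = prefer_home * 0 + prefer_office * 5
print(average_office_days)  # 2.0 days a week - a preference nobody actually expressed
```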

Now, when we did the analysis, I looked at the variance in my data and this allowed me to calculate the margin of error. (You can learn the technique I used to calculate these in our Descriptive Statistics Learning Modules).
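For those who want to reproduce the calculation, here is a minimal sketch of a normal-approximation margin of error at 95% confidence. This is my assumption of the standard technique; the counts are made up for illustration, not the real data:

```python
import math

# Hypothetical counts of one sweet type across opened packets (illustrative only)
counts = [0, 1, 2, 1, 0, 3, 1, 2, 0, 1, 1, 2]

n = len(counts)
mean = sum(counts) / n
variance = sum((x - mean) ** 2 for x in counts) / (n - 1)  # sample variance
std_error = math.sqrt(variance / n)

margin_of_error = 1.96 * std_error  # 1.96 ~ 95% confidence, normal approximation
print(f"mean = {mean:.2f} +/- {margin_of_error:.2f} per packet")
```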

As you can see, even with 118 data sets the margins of error shown by the white bars are still significant. If we were to repeat the experiment, we would get different answers, but we would expect them to fall somewhere within these bars.

Ignoring the margin of error is a trap that we can often fall into. For example, we may target people, or be targeted ourselves, based on league tables where the top and bottom performers are separated by less than the margin of error, and random chance hides the true performance.
When we look at individual packets, however, we see that the margin of error bars are now huge. We may get none of some types of sweets or we could get a cluster.

But even with these large error bars we still saw a significant number of anomalies where people got even more of certain types of sweet. This is perfectly normal.

We can never be 100% confident in a statistic. I have used a 95% confidence level for my error bars, which means I will still be wrong 5% of the time, or one in 20. Given that we can’t have less than zero of any type of sweet, that halves our potential errors, but we would still expect 1 in 40 data points to be an anomaly.

There are 14 different types of sweet which could end up in a packet, and we opened 118 packets. This gave me 1,652 data points. If we expect 1 in 40 to fall outside the margin of error, we should expect over 40 anomalies.
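The expected number of anomalies is simply the anomaly rate times the number of data points; a quick check using the figures above:

```python
sweet_types = 14
packets = 118

data_points = sweet_types * packets        # 1,652 data points
expected_anomalies = data_points * (1 / 40)  # 1-in-40 anomaly rate from above
print(data_points, round(expected_anomalies, 1))  # 1652, 41.3
```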

In our own roles we can sometimes fall into the trap of putting too much emphasis on anomalies. We need to take a moment to think about how many measures we have, how often we measure them and how many departments, teams and individuals we measure. We may have thousands if not millions of data points; there will be anomalies everywhere. If we focus on every isolated anomaly, it can distract us from the data that really matters.

Which brings me back to my Bertie Bassett example. In the data, although the average pack had 1 Bertie Bassett, almost half the packs had none. Using the probability of having no Bertie Bassetts in a single bag, I was able to calculate that the odds of having none in any of my 12 bags were 5,000 to 1. It appears that somebody at Bassetts really doesn’t like me. When I opened a 13th bag on stage and got none, the odds of that were 10,000 to 1. Or were they?

Actually, the odds were 50/50. We can easily fall into the trap of assuming that chance somehow knows what has happened before and will balance things out. For example, gamblers may look for patterns in roulette tables or lottery numbers, yet the odds remain the same regardless of previous outcomes.
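Here are both calculations side by side, assuming (as the data suggested) that roughly half of all packets contain no Bertie Bassett:

```python
p_none = 0.49  # assumed probability that a single bag has no Bertie Bassett (~half, per the data)

# Odds of a run of empty bags, calculated *before* opening any of them
print(f"12 empty bags in a row: 1 in {1 / p_none**12:,.0f}")  # roughly 1 in 5,000
print(f"13 empty bags in a row: 1 in {1 / p_none**13:,.0f}")  # roughly 1 in 10,000

# But each bag is independent: having already opened 12 empty bags,
# the 13th on its own is still close to a coin flip.
print(f"the 13th bag alone: {p_none:.0%}")
```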

I had also made the mistake of assuming that the distribution was completely random. But what I saw in the data were more clusters than I would expect if the filling were truly random. Maybe when filling the bags the sweets haven’t been thoroughly mixed. All of my first 12 bags came from the same box, so it is likely that they were filled at a similar time. Perhaps my bags were filled from a part of the hopper that the Bertie Bassetts hadn’t been stirred into.
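One simple way to test a clustering hunch like this is the variance-to-mean ratio (dispersion index): for genuinely random filling the counts are roughly Poisson and the ratio sits near 1, while clustering pushes it above 1. A minimal sketch, with made-up counts rather than the real data:

```python
def dispersion_index(counts):
    """Variance-to-mean ratio: ~1 for random (Poisson-like) filling, >1 if clustered."""
    n = len(counts)
    mean = sum(counts) / n
    variance = sum((x - mean) ** 2 for x in counts) / (n - 1)
    return variance / mean

# Hypothetical Bertie Bassett counts per packet (illustrative only)
clustered = [0, 0, 0, 0, 0, 3, 0, 4, 0, 0, 2, 3]
print(f"dispersion index: {dispersion_index(clustered):.2f}")  # well above 1 suggests clustering
```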

As analysts we focus on averages but sometimes we can learn far more by looking at the distribution.
For example, when we look at average handle times for calls or tasks, we often work on the assumption that the distribution is normal and looks something like this, with any fluctuations being random.


But the reality can be quite different, with a longer tail, perhaps caused by more complex enquiries, and often an early peak.

These patterns tell us a great deal. For example, it is not possible to deal with a genuine call or enquiry in the time represented by that early peak, so something else may be happening. Perhaps calls or work are going to the wrong place. Perhaps calls are being disconnected. These are all problems that are hidden by averages.

Likewise, as we start to address the problem, averages can lead to further incorrect assumptions. Many contact reduction initiatives target these early peaks, removing unnecessary or misdirected contacts. This can be really valuable, but we should not assume these are average calls. We are removing the short calls, and when we do that the average of the remainder will increase. If we don’t recognise this, we can find ourselves under-resourced and setting unrealistic productivity measures (as the sketch below illustrates).
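A small simulation makes that last point concrete. Purely for illustration, assume handle times are a mixture of short misdirected calls (the early peak) and genuine calls with a long lognormal tail; removing the short calls raises the average of what remains:

```python
import random

random.seed(42)

# Illustrative model: 15% short misdirected/disconnected calls, 85% genuine calls
# with a long right tail (lognormal, median ~300s). Numbers are assumptions, not real data.
calls = [random.uniform(5, 30) if random.random() < 0.15
         else random.lognormvariate(5.7, 0.5)
         for _ in range(10_000)]

avg_all = sum(calls) / len(calls)

# A contact-reduction initiative removes the early peak (calls under 60 seconds)
remaining = [t for t in calls if t >= 60]
avg_remaining = sum(remaining) / len(remaining)

# The average of the remaining calls is higher, even though no call got longer
print(f"average before: {avg_all:.0f}s, after removing short calls: {avg_remaining:.0f}s")
```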

So these are just a few of the things that we can learn from a packet of Liquorice Allsorts, but maybe there is more, so we will be collecting further data. However, it will be a while before I want to eat another packet of Liquorice Allsorts, so we need your help. If you ever open a 165g packet of Bassetts Liquorice Allsorts, take a moment to sort the contents and send us a picture. Either post it in the thread below or email it to info@theforum.social

At the conference I also got requests for the data from people that wanted to do their own analysis, so here it is. Please have a go and share your findings.

Liquorice Allsorts Data
