Now that I’m reading Nate Silver’s new book about how even the best, brightest & most confident ‘experts’ are just as terrible at forecasting as the rest of us, I’m ready to open up my window and belt out a good old-fashioned “I'm as mad as hell and I'm not going to take this anymore!”
A True Story
Just last week, a client asked me to help them calibrate the probabilities of their sales stages. It sounded like fun. I mean, aren’t you curious about your own ‘Stage 2 – 50%’ opportunities? Of all the opps that reach that stage, are you really closing 1 out of 2?
(Note: if you don’t care how we did it, skip this part)
We ran an Opportunity History Report for all closed deals in the previous 2 quarters and identified the furthest stage each deal reached before going closed:won or closed:lost. Then, for each stage, we calculated how many wins resulted per 100 opportunities that reached it.
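For the curious, here’s a rough sketch of that calculation in Python. The stage names, data shape, and records below are all made up for illustration; the real numbers came out of a Salesforce report, not a script like this.

```python
from collections import defaultdict

# One record per closed deal from the last 2 quarters: the furthest stage it
# reached and whether it ended up closed:won. These rows are illustrative only.
closed_opps = [
    {"furthest_stage": "Stage 2", "won": False},
    {"furthest_stage": "Stage 4", "won": True},
    {"furthest_stage": "Stage 3", "won": False},
    {"furthest_stage": "Stage 4", "won": True},
    {"furthest_stage": "Stage 1", "won": False},
]

STAGE_ORDER = ["Stage 1", "Stage 2", "Stage 3", "Stage 4"]

reached = defaultdict(int)  # opps that made it to (at least) each stage
won = defaultdict(int)      # wins among those opps

for opp in closed_opps:
    # A deal that reached Stage 3 also passed through Stages 1 and 2,
    # so it counts toward every stage up to its furthest one.
    furthest = STAGE_ORDER.index(opp["furthest_stage"])
    for stage in STAGE_ORDER[: furthest + 1]:
        reached[stage] += 1
        if opp["won"]:
            won[stage] += 1

for stage in STAGE_ORDER:
    if reached[stage]:
        rate = 100 * won[stage] / reached[stage]
        print(f"{stage}: {won[stage]} wins out of {reached[stage]} opps -> {rate:.0f}%")
```

The per-stage rates this spits out are the “real” probabilities you can hold up against whatever the CRM claims.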
Tale of the Tape
Having done the analysis, we found that the probabilities in Salesforce.com were accurate only to within ±24 percentage points. Translation: they downright stunk.
Now, as you eyeball the table above, it might not seem like much of a variance. But using these calibrated probabilities, we re-ran a weighted pipeline analysis for the group and found … an overstatement of roughly 19%.
An honest look & one report later, pipeline for the group had dropped by nearly 1/5th. Ouch.
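To make the math concrete, here’s a toy re-weighting in Python. Every number below (deal amounts, default probabilities, calibrated rates) is invented, not the client’s data; I’ve just picked figures that produce a gap in the same ballpark as what we saw.

```python
# Toy open pipeline: (stage, deal amount). Everything here is invented.
open_pipeline = [
    ("Stage 1", 50_000),
    ("Stage 2", 120_000),
    ("Stage 2", 80_000),
    ("Stage 3", 200_000),
    ("Stage 4", 60_000),
]

# Out-of-the-box stage probabilities vs. the rates a history report might show.
# Both sets are hypothetical.
crm_probability = {"Stage 1": 0.10, "Stage 2": 0.50, "Stage 3": 0.75, "Stage 4": 0.90}
calibrated_probability = {"Stage 1": 0.08, "Stage 2": 0.35, "Stage 3": 0.62, "Stage 4": 0.88}

def weighted_pipeline(probabilities):
    """Sum of (deal amount x stage probability) across the open pipeline."""
    return sum(amount * probabilities[stage] for stage, amount in open_pipeline)

crm_total = weighted_pipeline(crm_probability)
calibrated_total = weighted_pipeline(calibrated_probability)
drop = (crm_total - calibrated_total) / crm_total

print(f"Pipeline weighted by CRM defaults:     ${crm_total:,.0f}")
print(f"Pipeline weighted by calibrated rates: ${calibrated_total:,.0f}")
print(f"Drop after recalibration: {drop:.0%}")
```

Small per-stage differences, compounded across every open deal, are how a forecast quietly inflates by a fifth.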
So Where Do We Go From Here?
Short version: I’m not sure.
Long version: I’m with Matthew Bellows.
I’m most interested in what you think.
- Sales Reps, how many hours do you spend on your forecast each month?
- Sales Managers, how much of your time do you spend collating, gut-checking & fiddling with data?
- VPs, let's say an algorithm could score probability, but with 10% worse results.
Would you prefer a) your current forecast accuracy, at the cost of dozens-to-hundreds of man-hours monthly?
Or b) would you take the 10% accuracy hit and give those hours back to the sales org?
(Photo credit: Dade Freeman)