
Forecast Accuracy: Mission Critical or Malarkey?

by Matt Bertuzzi on Fri, Oct 12, 2012

 
This post from Yesware CEO Matthew Bellows had me nodding in agreement the other day: Stop Guesstimating Your Sales Forecasts. In fact, this passage had me shaking my fist at the fates:

The second reason for the sales manager's pain is that when it comes to gathering data about upcoming sales possibilities, companies and CRM systems rarely measure anything real. For most kinds of business-to-business selling, your CRM database is an outdated collection of anecdotes and guesses. The fewer the deals, and the longer the sales cycle, the less your "data" matches reality.

The stuff that does get accumulated in spreadsheets and CRM systems looks like data — there are dollar signs and probabilities next to prospect names — but it's not. It's really just the opinions, guesses, estimates and suppositions of your sales team.

Now that I’m reading Nate Silver’s new book, The Signal and the Noise, about how even the best, brightest & most confident ‘experts’ are terrible at forecasting, I’m ready to open up my window and belt out a good old-fashioned “I'm as mad as hell and I'm not going to take this anymore!”

A True Story

Just last week, a client asked me to help them calibrate the probabilities of their sales stages. It sounded like fun. I mean aren’t you curious about your own ‘Stage 2 – 50%’ opportunities? Of all the opps that reach that stage, are you really closing 1 out of 2?

(Note: if you don’t care how we did it, skip this part)
We ran an Opportunity History Report for all deals closed in the previous two quarters. For each deal, we identified the ultimate stage it reached prior to closed:won or closed:lost. Then, for each stage, we calculated how many wins resulted per 100 opportunities that reached it.
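If you'd like to see the mechanics, here's a minimal sketch of that calculation in Python (pandas), assuming a CSV export of the Opportunity History Report. The column names (opportunity_id, stage, is_won) and the stage labels are hypothetical placeholders; your own field names will differ.

```python
# A minimal sketch of the stage calibration, assuming a CSV export of the
# Opportunity History Report. Column names (opportunity_id, stage, is_won)
# and the stage labels below are hypothetical placeholders.
import pandas as pd

STAGE_ORDER = ["Stage 1", "Stage 2", "Stage 3", "Stage 4"]

history = pd.read_csv("opportunity_history.csv")
history["stage_rank"] = history["stage"].map(
    {name: rank for rank, name in enumerate(STAGE_ORDER)}
)

# Collapse the stage history to one row per deal: the ultimate stage it
# reached, plus whether it was ultimately won
deals = history.groupby("opportunity_id").agg(
    max_rank=("stage_rank", "max"),
    won=("is_won", "max"),
)

# Of every 100 opportunities that reached a given stage, how many closed won?
for rank, name in enumerate(STAGE_ORDER):
    reached = deals[deals["max_rank"] >= rank]
    if len(reached) > 0:
        win_rate = 100 * reached["won"].mean()
        print(f"{name}: {win_rate:.0f} wins per 100 opps "
              f"({int(reached['won'].sum())}/{len(reached)})")
```

The same tally can be rebuilt in a spreadsheet; the only real trick is collapsing each opportunity's stage history down to one row before counting wins.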

Tale of the Tape

Having done the analysis, we found that the probabilities in Salesforce.com were off by as much as 24 percentage points. Translation: they downright stunk.

Now as you eyeball those numbers, it might not seem like too much of a variance. But using these calibrations, we re-ran a weighted pipeline analysis for the group and found an overstatement of roughly 19%.

An honest look & one report later, pipeline for the group had dropped by nearly a fifth. Ouch.
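To make that arithmetic concrete, here's the same weighted-pipeline comparison as a sketch. The probabilities and opportunities below are made up for illustration; they are not the client's actual numbers.

```python
# Weighted pipeline = sum of (deal amount x stage probability).
# All probabilities and opportunities below are illustrative, not client data.
crm_probs = {"Stage 1": 0.25, "Stage 2": 0.50, "Stage 3": 0.75}   # CRM defaults
calibrated = {"Stage 1": 0.18, "Stage 2": 0.40, "Stage 3": 0.66}  # from the calibration

open_opps = [  # (name, stage, amount) - hypothetical open opportunities
    ("Acme", "Stage 2", 40_000),
    ("Globex", "Stage 3", 65_000),
    ("Initech", "Stage 1", 25_000),
]

def weighted_pipeline(probs):
    return sum(amount * probs[stage] for _, stage, amount in open_opps)

before = weighted_pipeline(crm_probs)
after = weighted_pipeline(calibrated)
print(f"CRM-weighted pipeline:        ${before:,.0f}")
print(f"Calibrated weighted pipeline: ${after:,.0f}")
print(f"Overstatement: {100 * (before - after) / after:.0f}%")
```

With these illustrative figures, the CRM-weighted pipeline overstates the calibrated one by roughly 18%, the same flavor of haircut we saw with the client.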

So Where Do We Go From Here?

Short version, I’m not sure.

Long version, I’m with Matthew Bellows:

So this is a call to innovative sales leaders, sales operations people, technology and service providers, and the top companies of the CRM industry. Let's build the processes, the services and the tools we need to collect data instead of opinions. Let's learn to build forecasts based on what we do instead of what we say.

And most importantly, let's help our salespeople succeed instead of weighing them down with processes that waste valuable time and money.

Your Turn

I’m most interested in what you think.

  • Sales Reps, how many hours do you spend on your forecast monthly? 
  • Sales Managers, how much of your time do you spend collating, gut-checking & fiddling with data? 
  • VPs, suppose an algorithm could score probability, but with 10% worse accuracy.
    Would you a) keep your current forecast accuracy, at a cost of dozens-to-hundreds of man-hours monthly?
    Or b) take the 10% accuracy hit and give those hours back to the sales org?

------
Find Matt on Twitter and Google+ 


 