Forecast Accuracy: Mission Critical or Malarkey?

Posted by Matt Bertuzzi on Fri, Oct 12, 2012

This post from Yesware CEO Matthew Bellows had me nodding in agreement the other day - Stop Guesstimating Your Sales Forecasts. In fact, this line had me shaking my fist at the fates:

The second reason for the sales manager's pain is that when it comes to gathering data about upcoming sales possibilities, companies and CRM systems rarely measure anything real. For most kinds of business-to-business selling, your CRM database is an outdated collection of anecdotes and guesses. The fewer the deals, and the longer the sales cycle, the less your "data" matches reality.

The stuff that does get accumulated in spreadsheets and CRM systems looks like data — there are dollar signs and probabilities next to prospect names — but it's not. It's really just the opinions, guesses, estimates and suppositions of your sales team.

Now that I’m reading Nate Silver’s new book about how even the best, brightest & most confident ‘experts’ are equally terrible at forecasting, I’m ready to open up my window and belt out a good old-fashioned “I'm as mad as hell and I'm not going to take this anymore!”

A True Story

Just last week, a client asked me to help them calibrate the probabilities of their sales stages. It sounded like fun. I mean aren’t you curious about your own ‘Stage 2 – 50%’ opportunities? Of all the opps that reach that stage, are you really closing 1 out of 2?

(Note: if you don’t care how we did it, skip this part)
We ran an Opportunity History Report for all closed deals in the previous 2 quarters. For each deal, we determined the furthest stage it reached prior to closed:won or closed:lost. Then we calculated how many wins resulted per 100 opportunities reaching each stage.
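The steps above can be sketched in Python. This is a hypothetical illustration, not the client's actual report: the stage labels, field layout, and `calibrate` helper are all assumptions for the example.

```python
# Hypothetical sketch of the calibration described above: for each closed
# opportunity, find the furthest stage it reached before closing, then
# compute wins per 100 opportunities that reached each stage.
from collections import defaultdict

STAGE_ORDER = ["Stage 1", "Stage 2", "Stage 3", "Stage 4"]  # assumed pipeline

def calibrate(history):
    """history: list of (opp_id, stage, outcome) rows; outcome is 'won'/'lost'."""
    furthest = {}  # opp_id -> (furthest stage index, outcome)
    for opp_id, stage, outcome in history:
        idx = STAGE_ORDER.index(stage)
        prev = furthest.get(opp_id)
        if prev is None or idx > prev[0]:
            furthest[opp_id] = (idx, outcome)

    reached = defaultdict(int)
    won = defaultdict(int)
    for idx, outcome in furthest.values():
        # an opp that reached stage N also passed through every earlier stage
        for i in range(idx + 1):
            reached[i] += 1
            if outcome == "won":
                won[i] += 1

    # actual win rate (wins per 100 opps) for each stage reached
    return {STAGE_ORDER[i]: round(100.0 * won[i] / reached[i], 1)
            for i in sorted(reached)}
```

Comparing these historical win rates against the percentages configured on each stage is the whole calibration.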

Tale of the Tape

Having done the analysis, we found that the probabilities in place were accurate only to within +/- 24%. Translation: they downright stunk.

Now as you eyeball the table above, it might not seem like too much of a variance. But using these calibrated probabilities, we re-ran a weighted pipeline analysis for the group and found an overstatement of ~19%.

An honest look & one report later, pipeline for the group had dropped by nearly 1/5th. Ouch.
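Re-weighting a pipeline against calibrated probabilities looks roughly like the following. The probabilities, account names, and deal amounts here are made up for the example; they are not the client's figures.

```python
# Illustrative comparison of a weighted pipeline under the CRM's stated
# stage probabilities vs. the calibrated ones computed from history.
stated     = {"Stage 1": 0.25, "Stage 2": 0.50, "Stage 3": 0.75}
calibrated = {"Stage 1": 0.18, "Stage 2": 0.38, "Stage 3": 0.66}

# open opportunities: (account, current stage, amount in dollars)
open_opps = [
    ("Acme",    "Stage 2", 40000),
    ("Globex",  "Stage 3", 25000),
    ("Initech", "Stage 1", 60000),
]

def weighted(pipeline, probs):
    """Sum of each opp's amount discounted by its stage probability."""
    return sum(amount * probs[stage] for _, stage, amount in pipeline)

before = weighted(open_opps, stated)      # what the forecast claims
after  = weighted(open_opps, calibrated)  # what history supports
overstatement = (before - after) / after  # fraction by which pipeline is inflated
```

The same one-line comparison, run over the real pipeline, is where the ~19% overstatement above came from.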

So Where Do We Go From Here?

Short version, I’m not sure.

Long version, I’m with Matthew Bellows:

So this is a call to innovative sales leaders, sales operations people, technology and service providers, and the top companies of the CRM industry. Let's build the processes, the services and the tools we need to collect data instead of opinions. Let's learn to build forecasts based on what we do instead of what we say.

And most importantly, let's help our salespeople succeed instead of weighing them down with processes that waste valuable time and money.

Your Turn

I’m most interested in what you think.

  • Sales Reps, how many hours do you spend on your forecast each month? 
  • Sales Managers, how much of your time do you spend collating, gut-checking & fiddling with data? 
  • VPs, suppose an algorithm could score probability, but with 10% worse results.
    Would you a) keep your current forecast accuracy at the cost of dozens-to-hundreds of man-hours monthly,
    or b) take the 10% accuracy hit and give those hours back to the sales org?

Find Matt on Twitter and Google+ 

(Photo credit: Dade Freeman) 


Matt - we're using a super cool analytics platform called InsightSquared that runs all the numbers for us automatically. They are in your neck of the woods. The Insights derived from the analytics have caused us to make changes to percentages associated with sales stages. Because we have a rigorous sales process we were closer than these guys, but still off. They also helped us to realize that certain web forms are more valuable than we ever expected. Our sales team now follows up faster on buyers who fill out certain forms which should come out as more revenue at the bottom of the funnel. 
I'm not saying analytical tools alone are the answer. But analytical tools combined with some decent interpretation and a rigorous sales process that is closely followed and managed can help a lot.

posted @ Friday, October 12, 2012 8:42 AM by Steve Richard

Awesome, Steve. 
Have you found any time saved in forecasting (rep, managers & up)?

posted @ Friday, October 12, 2012 8:52 AM by Matt Bertuzzi

The team who sells/delivers training programs is only 6 so there has always been a high degree of pipeline visibility by virtue of the fact that we collaborate constantly.  
It's not so much time to forecast that has improved as forecast accuracy and knowing the exact shape of our funnel. We were close before, but now we can say with 99% certainty that 12.5% of our initial discovery phone appointments will close.  
It also makes us less reactive and more proactive. For example one of our guys had 18 initial discovery phone appointments in Sept. Once we realized that only 3 had converted to the next stage (Needs Assessment), it raised a red flag. That rep and his manager can now develop a plan for moving more of those stage 1 opps into stage 2. I think the feeling of being on top of it and having the data to back it up is the biggest win for us.

posted @ Friday, October 12, 2012 9:12 AM by Steve Richard

Spot on.

posted @ Friday, October 12, 2012 10:58 AM by David Pier

I agree with Matthew Bellows. Analytics should be driven by real-time, key sales activities completed in a given sales cycle, not instinct or gut. It's the only way to assess your process, talent and marketing assets objectively, and the only way to improve effectively.

posted @ Friday, October 12, 2012 1:25 PM by Jessica Cornell

Amen, brother. Most sales forecasts rely on process integrity over data integrity. Meaning we strictly follow a weekly cadence but fail to answer the right sorts of questions. This is not a hard problem to solve but requires a different mindset. Keep preaching the gospel!

posted @ Friday, October 12, 2012 2:58 PM by Michael Liebow

Sellers are under pressure to forecast at least quota. Over time SFDC should be able to figure out how much to discount each step of the pipeline based upon a rep's historical forecasting but it's a workaround of a flawed approach. 
The fundamental issue is that CRM collects seller opinions. Inaccurate data will result in inaccurate forecasts. Unless and until organizations build pipelines based upon buyer reactions, forecasting will continue to be a seller's wishes (dreams?) versus reality.

posted @ Friday, October 12, 2012 3:08 PM by John Holland

Thanks everyone for the comments. @John Holland brings up an interesting point. 
If CRM captured buyers' opinions + buyers' steps (vs. sellers' opinions + sellers' steps), would forecast accuracy improve?

posted @ Friday, October 12, 2012 3:20 PM by Matt Bertuzzi

Great discussion. 3 quick points: 
1) @Steve, thanks for the kind words about InsightSquared. 
2) The thrust of this post is spot on. I wrote something similar this week as well. Few companies are as in touch with their probabilities as Steve is. The vast majority of teams I've spoken to, still have 25%, 50%, 75%, 90% set up by force of habit or inertia. 
3) @Matt Bertuzzi, to your question about capturing information about the buyers vs. the sellers, I'm with you, but would phrase it a bit differently. I'm not a sales leader, but in building InsightSquared's product, I've talked to a lot of them. I asked one very successful inside sales leader what he would change about his process and he went on a discourse about how he has modeled his opportunity stages. He wanted to rip up his current ones and replace them with stages that reflect actions the buyer has taken themselves. Not what the sales rep has done or thinks is happening. Not things the buyer has said (because talk is cheap). The opportunity stages need to reflect actions the buyer has taken. That moves things far closer to being quantitative and objective, and away from being quantitative yet subjective.

posted @ Saturday, October 13, 2012 7:53 PM by Josh Payne

In my experience as a sales rep, I would usually rely on historical data generated by the analytic tools, then manually combine it with previous historical numbers that did not show up in the most recent list. A good CRM should be more efficient in doing this for me and I look forward to something that would give me an accurate "gut-feel" reliable number.

posted @ Friday, October 19, 2012 3:04 AM by Ayeen Benoza

Awesome post. I love the way you presented it. As a sales rep, I spend as little time as possible on forecasting my sales. I know the opportunity cost: valuable closing time. Great post and I look forward to the next one!  

posted @ Friday, October 26, 2012 3:41 PM by Jon Birdsong

Hi Matt, interesting post. 
We use Salesforce to manage our forecast pipeline. I am now focusing on building a better forecast by having salesmen commit to a rolling 3-month forecast categorized by commit/best case/pipeline. My intention is to measure accuracy over time and see how we can improve. BUT I cannot find any standard tools in Salesforce or on the AppExchange to "measure" historical accuracy - i.e., how well do Jan estimates for Jan/Feb/March correlate with actual sales one, two and three months down the track? I see so many claims by Salesforce and others that Salesforce increases forecast accuracy by xx % - how do they measure such claims?

posted @ Friday, February 15, 2013 4:42 AM by David

@David - You are right - there isn't any out-of-the-box way to measure prediction vs. actual. 
The issue is that, in theory, opportunities are *live* and constantly being updated. 
One option is to pick a point in the month/quarter that you want to measure accuracy for: the 15th, 3 days before month end, etc. 
Once you have that, you could have your admin build out some Opportunity Snapshotting to capture the point in time prediction and report against actuals.
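Once the snapshots exist, the comparison itself is simple. A minimal sketch, with entirely hypothetical periods and dollar figures:

```python
# A rough sketch of measuring forecast accuracy from point-in-time
# snapshots: capture the weighted pipeline on a chosen day (say, 3 days
# before period end), then compare it against actual closed revenue
# once the period is over. All numbers below are invented.
snapshots = {
    # period -> weighted pipeline captured at the chosen snapshot point
    "2012-Q3": 510000,
    "2012-Q4": 475000,
}
actuals = {
    # period -> revenue actually closed by period end
    "2012-Q3": 430000,
    "2012-Q4": 460000,
}

def forecast_error(snapshots, actuals):
    """Per-period error as a fraction of actual closed revenue.

    Positive = the snapshot overstated the period; negative = understated.
    """
    return {p: round((snapshots[p] - actuals[p]) / actuals[p], 3)
            for p in snapshots if p in actuals}
```

Tracked over a few quarters, that per-period error is exactly the "how well do Jan estimates correlate with actual sales" number you're after.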

posted @ Monday, February 18, 2013 6:15 AM by Matt Bertuzzi
