
I was on a call with a prospect a while ago whose company is struggling with demand volatility. To address the issue, the company is implementing a statistical forecasting tool. I must admit that it took me some time to convince the prospect that implementing a forecasting tool isn’t going to reduce the volatility. While I accept fully that one’s ability to create an accurate forecast is related to demand volatility, I am adamant that an accurate forecast does not reduce demand volatility.
Demand volatility is an expression of how much the demand changes over time, and, to some extent, the predictability of the demand. Forecast accuracy is an expression of how well one can predict the actual demand, whether volatile or not.
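To make the distinction concrete, here is a minimal sketch (with an invented demand series, not the prospect’s data) that measures the two things separately: the coefficient of variation as a measure of demand volatility, and the mean absolute percentage error as a measure of forecast accuracy. A perfect forecast of a wildly volatile series has zero error, yet the volatility is exactly what it was before anyone forecast anything.

```python
import statistics

# Toy monthly demand: volatile, and entirely made up for illustration.
demand = [100, 240, 80, 310, 150, 90, 275, 120, 200, 60, 330, 140]

# Two hypothetical forecasts of the same series.
perfect_forecast = list(demand)                # somehow nails every month
naive_forecast = [demand[0]] + demand[:-1]     # "next month = this month"

def volatility(series):
    """Coefficient of variation: how much the demand itself moves around."""
    return statistics.stdev(series) / statistics.mean(series)

def mape(actual, forecast):
    """Mean absolute percentage error: how well the forecast tracks the actual."""
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

print(f"demand volatility (CoV): {volatility(demand):.2f}")   # unchanged by forecasting
print(f"MAPE, perfect forecast : {mape(demand, perfect_forecast):.2%}")
print(f"MAPE, naive forecast   : {mape(demand, naive_forecast):.2%}")
# Better forecasting drives the error toward zero, but the volatility of the
# demand stream is untouched.
```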
Why do I make this distinction? Bear with me while I go through some personal history as a way of explaining the importance of the distinction. This goes back to my days as a graduate student at Penn State, back when they were still winning national championships!! I had studied chemical engineering as an undergrad and had moved on to industrial engineering and operations research.
Chemical engineering, like most engineering, is a mathematically rigorous discipline in which complex equations are used to predict the behaviour of complex systems such as a distillation column. The equations are precise and outcomes can be predicted to the umpteenth decimal point. No-one questions the validity or accuracy of the equations (though there is continued research to “improve” them).
In other words, there is an implicit understanding that the equations are not accurate to the umpteenth decimal point, even though people calculate to that level of precision. But the equations are sufficiently accurate to design chemical plants. When it comes to actually running a chemical plant, all sorts of control systems are placed around the equipment to make sure that the plant operates in a “stable” manner.
There are feedback loops and feed-forward loops, and controllers that control other controllers. It is a very interesting field of study for those, like me, who enjoy that sort of thing. My main point is that these systems are assumed to be highly predictable and that their behaviour can be described very precisely by a set of equations. These are so-called deterministic systems.
According to Encarta, the definition of deterministic is:
de•ter•min•is•tic [ di tùrmi nístik ] (adjective)
1. relating to determinism: relating to the doctrine or belief that everything, including every human act, is caused by something and that there is no real free will
2. of knowable outcome: having an outcome that can be predicted because all of its causes are either known or the same as those of a previous event
Clearly, I am referring to the second definition, though the first suits my purposes very well too, because I want to bring in the element of free will, which leads to unpredictable behaviour, or volatility. Yet nearly all social and business systems are based upon the notion of predictable behaviour. So I had studied chemical engineering and moved into industrial engineering and operations research at the graduate level. A core requirement was queuing theory. If you don’t know what that is, don’t worry, you are one of very many.
One of the first problems the lecturer posed went like this: suppose there is a person checking IDs at the entrance to a bar (I was at university at the time!), it takes about 1 minute to check an ID, and people arrive at the door at about 1 person every minute. How long will the queue be in front of the person checking IDs? Fancying myself as quite clever, I raised my hand immediately and replied that on average there wouldn’t be a queue. Sounds reasonable, right? After all, people are arriving at about 1 per minute and it takes about a minute to check an ID, so the system is balanced. Wrong.
The queue will grow indefinitely. After years in the deterministic world of chemical engineering I just could not accept this. After some “field” research I had to accept it, but I still did not understand it. And finally the penny dropped. Once the person checking IDs gets behind, that is, once a queue forms, there is no way to catch up. It will still take about a minute to check an ID, and people are still arriving at about 1 per minute.
So how is the person going to catch up? The real clue to understanding this is that any time the person checking IDs sits idle, because there is no-one’s ID to check, is lost forever and cannot be put to productive use. So the time actually available to check each ID is less than a minute. For those brave enough, here is a reference.
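And for the doubters (I was one), a rough little simulation makes the same point. It is only a sketch, with invented exponential timings rather than anything from the lecture, but with arrivals and ID checks both averaging one minute the waiting time keeps climbing as the night goes on, because every idle moment at the door is lost for good.

```python
import random

random.seed(42)

# Doorman sketch: arrivals average one per minute and ID checks average one
# minute, but both vary randomly (exponential times are an assumption here).
ARRIVAL_MEAN = 1.0   # average minutes between arrivals
SERVICE_MEAN = 1.0   # average minutes per ID check

def average_wait(n_customers):
    """Average wait in the queue for the first n_customers of the night."""
    arrival_time = 0.0
    doorman_free_at = 0.0   # when the current ID check finishes
    total_wait = 0.0
    for _ in range(n_customers):
        arrival_time += random.expovariate(1.0 / ARRIVAL_MEAN)
        start = max(arrival_time, doorman_free_at)   # idle gaps are lost forever
        total_wait += start - arrival_time
        doorman_free_at = start + random.expovariate(1.0 / SERVICE_MEAN)
    return total_wait / n_customers

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} customers: average wait {average_wait(n):6.1f} minutes")
# The average wait grows with the length of the night; at 100% utilisation the
# queue never settles down, let alone stays at zero.
```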
So what’s this got to do with supply chains? Let’s be honest: any lead time that is put into an ERP system is an average (at best) or an estimate (at worst). The same is true of production rates and scrap rates. Yet we spend enormous amounts of time and energy fine-tuning MRP and APS systems to provide better results, to the point that the results are supposedly more accurate than the input data.
(I know that is a contradiction.) But how many times have we heard “garbage in, garbage out” when referring to ERP systems, or other planning systems, and their underlying input data? Well, hello! We’re trying to fix the wrong problem.
There is so much uncertainty related to so many variables in the supply chain that simply having a more accurate representation of the average value of an input variable doesn’t really solve the problem.
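As a small, purely hypothetical illustration (the stages and numbers below are invented, not drawn from any client’s data), consider the end-to-end lead time across a few supply chain stages: the spread of the realised lead time is so wide that polishing the planner’s estimate of the average by half a day barely changes how often the plan is actually met.

```python
import random

random.seed(7)

# Three generic stages (supplier, production, transport) with uncertain lead
# times; the means and spread below are invented for illustration.
STAGE_MEANS = [5.0, 3.0, 4.0]   # "true" average days per stage (12 in total)
STAGE_SPREAD = 0.5              # each stage varies +/-50% around its mean

def realised_lead_time():
    """One random realisation of the end-to-end lead time, in days."""
    return sum(mean * random.uniform(1 - STAGE_SPREAD, 1 + STAGE_SPREAD)
               for mean in STAGE_MEANS)

def plan_met(planned_days, trials=100_000):
    """Fraction of realisations in which the plan covers the actual lead time."""
    hits = sum(realised_lead_time() <= planned_days for _ in range(trials))
    return hits / trials

rough_average = 12.5     # the planner's quick, slightly-off estimate
precise_average = 12.0   # the carefully tuned, exactly right average

print(f"rough average   (12.5 days): plan met {plan_met(rough_average):.1%} of the time")
print(f"precise average (12.0 days): plan met {plan_met(precise_average):.1%} of the time")
# Neither plan is reliable, and the realised lead time routinely misses by
# days. The half-day spent perfecting the average is swamped by the spread.
```

The numbers themselves mean nothing; the point is that when the variability of an input dwarfs the error in its average, extra precision in the average buys you very little.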
I am not questioning the value of accurate information. What I am questioning is the value of spending lots of time and effort making the inputs very accurate when they are only ever going to be approximations, because of the inherent uncertainty in supply chains. On top of it all, so many supply chain processes – order taking, purchase order issuing, … – are carried out by human beings that we haven’t a hope of achieving metronomic repeatability.
Humans are far better at dealing with uncertainty than machines are, but they are also a lot less predictable. Let us embrace their capabilities rather than turning them into machines. Let us give humans tools with which they can use their judgment, in the face of uncertainty, to evaluate different courses of action quickly and effectively. Which brings me back to the prospect I spoke to a few days ago.
Having a more accurate forecast isn’t going to remove the volatility/uncertainty from the demand. Having a more accurate forecast isn’t going to help the supply side to deal with the volatility on the demand side.
The supply side is still going to have to be agile and flexible to adjust to the demand changes. And I don’t care how much time and effort is put into statistical forecasting: in a dynamic market with lots of volatility, the forecast will always be inaccurate. So the question is where one should spend time and effort. In making the plan as accurate as possible, including all the input data, and forever analysing why the actuals didn’t match the plan? Or in accepting that there is a lot of uncertainty in the supply chain and devising ways to respond quickly and effectively to change?
Clearly it is important to be able to get a fairly good understanding of the results of alternative decisions, but a quick, approximate answer will always be better than a slow, more accurate answer, simply because the uncertainty inherent in the supply chain will drown out the “accuracy” of an optimized result. I know there are a lot of Lean and Six Sigma people out there who must be frothing at the mouth. So let’s hear your comments and rebuttals of my arguments.