
Forecasting For a Favorable Miss

By Realizingresonance @RealizResonance



Humans excel at judgmental errors by our very nature. Cognitive biases are a fact of the human condition, afflicting even the most intelligent among us. The disciplines of psychology and behavioral economics have uncovered a vast array of biases, with over 100 distinct flavors of cognitive bias listed on Wikipedia (“List of Cognitive Biases”). Several of these biases are of concern to the professional forecaster, who must not only take steps to keep them from distorting predictions, but must also contend with forecast consumers who may hold biased evaluations of forecaster performance. A textbook I have on forecasting suggests steps to avoid the problems of inconsistency bias, conservatism bias, recency bias, availability bias, anchoring bias, illusory correlations, search for supportive evidence, regression effects, attribution bias, optimism bias, underestimating uncertainty, and selective perception (Makridakis, Wheelwright, Hyndman 500-501). Overcoming bias is a good rule of thumb for forecasters. Nevertheless, my experience as a practitioner of foresight has taught me that avoiding bias in forecasting is not always so clear cut.

Sometimes meteorologists intentionally bias their own weather forecasts. Why do they do this? One of the earliest articles I ever posted on the Realizing Resonance Philosophy Blog was titled “Favorability Bias in Forecasting.” In it I claimed that “favorability bias is the tendency to view a forecast with greater criticism when the actual results are unfavorable to a preferred outcome than if the forecast misses by an equivalent amount in a favorable direction.” Favorability bias is a phenomenon that I have experienced in the world of business forecasting, and to explain it I used the hypothetical analogy of precipitation predictions and our preference for which side they err on. This is what I wrote back in November 2010:

“Where I live, in the Seattle area, people often complain that the weather forecasts are less than accurate. But the degree to which one becomes irritated with an incorrect weather report depends on what the weather is actually like. Would you rather get a weather forecast that predicts a rainy day that turns out to be sunny, or a sunny day that turns out to be rainy? I suspect most people would prefer the former option, because most people generally prefer dry days over wet ones.” (Endicott)

It turns out that my example of forecasting the rain is not just a thought experiment, but a real issue that meteorologists take into account. I am reading superstar predictioneer Nate Silver’s new book, The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t, and in the chapter on weather forecasting he describes a phenomenon called “wet bias.” When the government-run National Weather Service forecasts a 20 percent chance of rain, it actually does rain about 20 percent of the time, but when the private Weather Channel forecasts a 20 percent chance of rain, it actually rains only about 5 percent of the time. The Weather Channel does this intentionally because its customers react negatively if they get rained on when they expected sun, while they feel jubilant about a sunny day that was supposed to be rainy. For-profit meteorologists will also bump a 50 percent chance of rain up to 60 percent or down to 40 percent, to avoid the appearance of indecisiveness (Silver 133-136). Just as I suspected, although anticipating a favorability bias in customer evaluations of weather forecasts is admittedly a fairly intuitive assumption to make.
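To make the adjustment concrete, here is a minimal Python sketch of a wet-bias style rule. The publish function, its thresholds, and the magnitudes are all hypothetical illustrations of the pattern Silver describes, not The Weather Channel’s actual method:

```python
# Hypothetical sketch of a wet-bias adjustment. The thresholds and
# magnitudes are invented for illustration; they are not the actual
# rules any forecaster uses.

def publish(p_model: float) -> float:
    """Nudge a calibrated rain probability before publication."""
    if p_model < 0.10:
        return p_model + 0.15                     # inflate small chances of rain
    if 0.45 <= p_model <= 0.55:
        return 0.60 if p_model >= 0.50 else 0.40  # dodge an indecisive 50%
    return p_model

for p in (0.05, 0.20, 0.50):
    print(f"model says {p:.0%} -> published {publish(p):.0%}")
# model says 5% -> published 20%: it will rain on only about 5 percent
# of the days that carry the published 20 percent forecast.
```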

Silver (134) identifies the implication that “forecasts ‘add value’ by subtracting accuracy.” At first this seems counterintuitive, because we should want the most accurate forecasts possible. The problem is that accuracy can only be judged in hindsight, while the value of a forecast is its usability in advance of the time when accuracy can be judged. Consider Cassandra, who correctly foresaw the trap of the Trojan Horse, but who was cursed by Apollo such that no one would listen to her. It doesn’t matter that Cassandra was perfectly accurate, because she lacked credibility, peddling predictions that had no usability, no value. Credibility stands in as a proxy for anticipated accuracy. Philosopher of forecasting Nicholas Rescher (122) explains that a prediction’s “eventual correctness, though crucial in the larger scheme of things, is an effectively impractical demand for all practical purposes. Precisely because it can be determined at the time, it is credibility that is the cardinal predictive virtue.”

I had the pleasure of learning about forecasting under the tutelage of Hans Levenbach, founder and President of Delphus, Inc., and facilitator of the Certified Professional in Demand Forecasting (CPDF) training workshops. Hans taught me that credibility is a forecaster’s livelihood, and that no amount of economics, statistics, mathematics, psychology, or computer science knowledge will help a forecaster who lacks it (SWIFT 16). Forecasters have strong incentives to maintain credibility, given its prime importance, and although credibility should be an indicator of accuracy, there are situations in which those incentives are at odds with accuracy.

Value can be added to a forecast by subtracting accuracy in certain circumstances, because there are biases that distort the relationship between credibility and accuracy. The wet bias results from distorted market incentives in weather forecasting, driven by a consumer preference for favorable actualities when evaluating accuracy. Since credibility, not accuracy directly, is the cardinal virtue for predictive utility, marketable forecasts must conform to the biases of forecast consumers. Accuracy is subordinated to the need to maintain perceived rather than actual reliability, and thus to satisfy a consumer demand for bias.

Favorability bias can also affect the evaluation of forecasts in the business environment. This is not due to an inherent irrationality on the part of business managers and executives, but due to an asymmetry in evaluative focus. Consider a year in which forecast errors are fairly minimal and unbiased for the first eight months, but then there is a large favorable miss for September. This will likely bring attention to the serendipitous numbers and require an explanation. Why did the numbers improve? Can we expect this to continue? Is this too good to be true? The attitude toward a big favorable miss is inquisitive, but also cautiously optimistic.

On the other hand, if there is a large unfavorable forecast error in September we have a different scenario. Why did the numbers tank, and what exactly drove the decline? What action items will get the numbers back on track? Oh, and by the way, why did the forecaster not see this coming? The attitude toward a big unfavorable miss is inquisitive, but instead of cautious optimism there is worried skepticism and a demand for a more stringent level of substantiation. Not only is there a large forecast error; the fact that the realized results were unfavorable taints the evaluation of the forecaster with the negativity of the poorer-than-expected operational performance, not just the forecast performance. This can distort the incentives of a business forecaster, who must subordinate accuracy to maintain credibility, thus infecting the forecast process with a bias toward favorability and sandbagging.

Catering to favorability bias presents a bit of a dilemma for a business forecaster. Although it can help maintain credibility in certain contexts, it can damage credibility in others. If predictions come to be perceived as sandbagged or hedged, the forecaster may instill distrust in co-workers who need unbiased forecasts as inputs to their own predictive models. This could result in problems like the bullwhip effect, with biases and improper adjustments compounding down the line to drive wasteful planning decisions. Intentionally underforecasting market demand so that the sales team beats its targets can also result in a shortage of supply when the same hedged forecast is used to stock products. Biased forecasting processes can likewise lead to biased analysis and an incorrect understanding of the real drivers of results. Perhaps the worst scenario is that the motives of biased forecasters could be called into question, with a sense that they are more concerned about job security than professional integrity.
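A toy Python sketch of how such adjustments compound: the 10 percent hedge at each stage and the stage names are hypothetical, but the arithmetic shows how modest, locally reasonable shaves add up to a serious shortage:

```python
# Hypothetical sketch: each planning stage shaves 10% off the forecast
# it receives, not knowing the input was already hedged upstream.

true_demand = 100.0
hedge = 0.10

signal = true_demand
for stage in ("sales", "demand planning", "supply planning", "manufacturing"):
    signal *= (1 - hedge)  # every stage re-hedges the already-hedged number
    print(f"{stage}: plans for {signal:.1f} units")

# Four rounds of a "safe" 10% shave leave manufacturing planning for ~65.6
# units against a true demand of 100, a built-in shortage of about a third.
```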

In my last article on this subject I recommended that forecasters should resist the pressure of favorability bias, but that “conservative” predictions are warranted under conditions of high uncertainty. I also suggested that political considerations about how to present a forecast might be desirable, but need to be insulated from the actual forecasters as much as possible. It should be left to managers to decide whether to overlay a bias on a forecast. Decision makers may intentionally bias a forecast because the cost of error may be greater for an unfavorable miss, and the judgment here is to prudently hedge a prediction in order to avoid a potentially higher cost. Biases of this sort can save on costs, but they can also backfire and trigger cost overruns. Intentional biasing is inherently risky, and as Paul Goodwin has pointed out in the article “Taking Stock: Assessing the True Cost of Forecast Error,” it is especially risky when decision makers do not know the full costs related to forecast errors. Favorability bias should be used conservatively, even by managers.
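The logic of hedging against the costlier miss can be made concrete with a small simulation. In this minimal Python sketch the demand distribution and the per-unit costs are hypothetical; the point is that under an asymmetric loss the cost-minimizing forecast is a quantile above the mean, a deliberately biased number:

```python
import numpy as np

# Hypothetical example: missing low (under-forecasting) costs 4x as much
# per unit as missing high (over-forecasting).
rng = np.random.default_rng(1)
demand = rng.normal(100, 15, 100_000)  # what actually happens

c_under, c_over = 4.0, 1.0

def expected_cost(forecast: float) -> float:
    err = demand - forecast
    return float(np.mean(np.where(err > 0, c_under * err, c_over * -err)))

unbiased = demand.mean()
# With this loss, the optimal point forecast is the c_under/(c_under+c_over)
# quantile of demand (here the 80th percentile), not the mean.
hedged = np.quantile(demand, c_under / (c_under + c_over))

print(f"unbiased forecast {unbiased:.1f}: cost {expected_cost(unbiased):.2f}")
print(f"hedged forecast {hedged:.1f}: cost {expected_cost(hedged):.2f}")
```

The hedged number is systematically wrong on average, yet it is the cheaper forecast to act on, which is exactly the trade-off a manager makes when overlaying a deliberate bias.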

Still, there may be cases where a favorable bias is desirable and might be made explicit in the forecasting process because it adds value, such as the wet bias in weather predictions. In the article “Accuracy versus Profitability,” Roy Batchelor demonstrates that forecasters can add value by sacrificing accuracy in certain circumstances, such as trading T-Bills. It can be very profitable for a trader to absorb long runs of highly inaccurate forecasts if the market exposure this provides eventually results in one accurate prediction that brings a large net profit. This strategy of bearing inaccuracy can be very effective when anticipating interest rate movements in the T-Bill market. Batchelor suggests that forecasts be evaluated on profitability instead of accuracy, or at least alongside accuracy.
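The arithmetic behind this is simple enough to show in a few lines of Python. The trade outcomes below are invented, but they illustrate how a forecast that is wrong 90 percent of the time can still be the more valuable one to trade on:

```python
# Hypothetical trade outcomes: nine small losses and one large win.
profits = [-1.0] * 9 + [15.0]

hit_rate = sum(p > 0 for p in profits) / len(profits)
net_profit = sum(profits)

print(f"accuracy: {hit_rate:.0%}, net profit: {net_profit:+.1f}")
# accuracy: 10%, net profit: +6.0 -- scored on accuracy the forecast
# looks terrible; scored on profitability it clearly adds value.
```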

The interesting idea of adaptive bias suggests that something like favorability bias could have developed in humans through evolutionary forces. Selective pressures may have elevated the life-saving benefits of caution, not by reducing the overall number of cognitive mistakes we make, but by weeding out the really big mistakes that lead to an early demise (“Adaptive Bias”). Evolution may have given humans a propensity to feel more comfortable with forecast models that are biased toward safety, so that many small favorable errors over time are a worthwhile price for avoiding even one large unfavorable miss.

The statistical process of hypothesis testing is a bit like conducting a criminal trial. With statistical tests the default assumption, or null hypothesis, is that the theory will not be confirmed by the test. This is the equivalent of innocent until proven guilty in the courtroom. If a statistician rejects a null hypothesis that is in fact true, this is an error of the first kind, also called a type I error or false positive. On the other hand, if a false null hypothesis is not rejected, this is an error of the second kind, also called a type II error or false negative. In terms of the courtroom, a type I error would be a truly innocent person being found guilty, while a type II error would be letting a truly guilty person walk free.
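The mapping is mechanical enough to state in code. A minimal Python sketch with invented verdicts, treating “innocent” as the null hypothesis:

```python
# Hypothetical verdicts. The null hypothesis is "innocent"; convicting
# someone is rejecting the null.
truth    = ["guilty", "innocent", "guilty", "innocent", "innocent"]
verdicts = ["guilty", "guilty",   "free",   "free",     "free"]

# Type I error: rejecting a true null (convicting the innocent).
type_i = sum(t == "innocent" and v == "guilty" for t, v in zip(truth, verdicts))
# Type II error: failing to reject a false null (freeing the guilty).
type_ii = sum(t == "guilty" and v == "free" for t, v in zip(truth, verdicts))

print(f"type I errors: {type_i}, type II errors: {type_ii}")  # 1 and 1
```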

Error Management Theory finds that under uncertain conditions, in conjunction with a pressure to survive, and when the costs of errors of the first and second kind are disproportionate, people will bias their decisions so as to reduce the costly errors by accepting more of the errors that are not as costly. For example, it would be preferable to have more false negatives than false positives in criminal court convictions, because a greater injustice is done when the innocent are punished than when the guilty go unpunished. In other situations it might be false positives that are less costly and therefore preferable, such as an indication of a disease in a blood test that later turns out to be nothing serious. Error Management Theory implies that it can be prudent to build bias into a model so as to minimize the most costly types of error (“Adaptive Bias”).
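This cost-weighted trade-off is easy to demonstrate numerically. In the Python sketch below, the score distributions and the 100-to-1 cost ratio are hypothetical; the point is that asymmetric costs pull the decision threshold far toward the cheap kind of error:

```python
import numpy as np

# Hypothetical screening test: higher scores are more suspicious, and
# missing a real case (false negative) costs 100x a false alarm.
rng = np.random.default_rng(3)
sick = rng.normal(2.0, 1.0, 5_000)     # scores of people who are ill
healthy = rng.normal(0.0, 1.0, 5_000)  # scores of people who are well

c_fn, c_fp = 100.0, 1.0

def cost(threshold: float) -> float:
    fn = (sick < threshold).mean()      # errors of the second kind
    fp = (healthy >= threshold).mean()  # errors of the first kind
    return c_fn * fn + c_fp * fp

best = min(np.linspace(-3, 5, 801), key=cost)
print(f"cost-minimizing threshold: {best:.2f}")
# The threshold lands well below the midpoint of the two score
# distributions: the test is deliberately biased toward false positives.
```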

The challenges of forecasting are too complex and nuanced to say that accuracy is paramount and that all predictions should be bias free. These are excellent rules of thumb and should be standards to strive for in most cases, but as we have seen it is not always that simple. Adding an explicit bias to a forecasting model or decision system should not be done without a good reason, and forecasting for a favorable miss should never be done for its own sake. As Goodwin indicates, a decision maker needs to know the true costs of errors, and this includes the differences in costs between errors of the first and second kind. With reliable data on the costs, managers can do as Batchelor suggests and incorporate these indicators of profitability into forecast model evaluation, along with accuracy. It is probably still a good idea to keep forecasters insulated from the decision to bias a forecast as much as possible, and to leave it to managers to make the ultimate judgment call as to whether the costs associated with a particular decision warrant a systematic bias.

Jared Roy Endicott



Works Cited

Batchelor, Roy (2011). “Accuracy versus Profitability.” Foresight: The International Journal of Applied Forecasting, 21, 10-15.

Endicott, Jared (2010). “Favorability Bias in Forecasting”. Realizing Resonance - Futurist Philosophy Blog.

Goodwin, Paul (2009). “Taking Stock: Assessing the True Cost of Forecast Error.” Foresight: The International Journal of Applied Forecasting, 15, 8-11.

Makridakis, Spyros, Steven C. Wheelwright, and Rob J. Hyndman. Forecasting: Methods and Applications. Third Edition. Danvers, MA: John Wiley & Sons, Inc., 1998.

Rescher, Nicholas. Predicting the Future: An Introduction to the Theory of Forecasting. Albany, NY: State University of New York Press, 1998.

Silver, Nate. The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t. New York: The Penguin Press, 2012.

SWIFT - Structured Workshop in Forecaster Training. CPDF - Certified Professional in Demand Forecasting, Delphus, Inc., 2010.

“List of Cognitive Biases.” Wikipedia. Accessed 21 Feb. 2013.

“Adaptive Bias.” Wikipedia. Accessed 21 Feb. 2013.

