Integrating, Adjusting, and Combining Procedures

Abstract of "Combining forecasts," J. Scott Armstrong


To improve forecasting accuracy, combine forecasts derived from methods that differ substantially and draw from different sources of information. When feasible, use five or more methods. Use formal procedures to combine forecasts: An equal-weights rule offers a reasonable starting point, and a trimmed mean is desirable if you combine forecasts from five or more methods. Use different weights if you have good domain knowledge or information on which method should be most accurate. Combining forecasts is especially useful when you are uncertain about the situation, uncertain about which method is most accurate, and when you want to avoid large errors. Compared with errors of the typical individual forecast, combining reduces errors. In 30 empirical comparisons, the reduction in ex ante errors for equally weighted combined forecasts averaged about 12.5 percent and ranged from 3 to 24 percent. Under ideal conditions, combined forecasts were sometimes more accurate than their most accurate components.

Keywords: Consensus, domain knowledge, earnings forecasts, equal weights, group discussion, rule-based forecasting, uncertainty.
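
To make the combining rules concrete, here is a minimal Python sketch of an equal-weights combination and a trimmed mean over five component forecasts. The forecast values and the 20% trim level are illustrative assumptions, not figures from the chapter.

```python
# A minimal sketch of the combining rules described above: an equal-weights
# mean and a trimmed mean over forecasts from several methods.

def equal_weights_combination(forecasts):
    """Average the component forecasts with equal weights."""
    return sum(forecasts) / len(forecasts)

def trimmed_mean_combination(forecasts, trim_fraction=0.2):
    """Drop the highest and lowest forecasts before averaging.

    With five or more components, trimming guards against a single
    extreme forecast dominating the combination.
    """
    ordered = sorted(forecasts)
    k = int(len(ordered) * trim_fraction)  # number to drop from each end
    trimmed = ordered[k:len(ordered) - k] if k > 0 else ordered
    return sum(trimmed) / len(trimmed)

# Hypothetical forecasts of the same quantity from five substantially different methods.
component_forecasts = [102.0, 98.5, 110.0, 97.0, 140.0]

print(equal_weights_combination(component_forecasts))  # 109.5
print(trimmed_mean_combination(component_forecasts))   # drops 97.0 and 140.0 -> 103.5
```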

Selection and Evaluation

Abstract of "Selecting forecasting methods," J. Scott Armstrong


I examined six ways of selecting forecasting methods: Convenience, “what’s easy,” is inexpensive but risky. Market popularity, “what others do,” sounds appealing but is unlikely to be of value because popularity and success may not be related and because it overlooks some methods. Structured judgment, “what experts advise,” which is to rate methods against prespecified criteria, is promising. Statistical criteria, “what should work,” are widely used and valuable, but risky if applied narrowly. Relative track records, “what has worked in this situation,” are expensive because they depend on conducting evaluation studies. Guidelines from prior research, “what works in this type of situation,” rely on published research and offer a low-cost, effective approach to selection. Using a systematic review of prior research, I developed a flow chart to guide forecasters in selecting among ten forecasting methods. Some key findings: Given enough data, quantitative methods are more accurate than judgmental methods. When large changes are expected, causal methods are more accurate than naive methods. Simple methods are preferable to complex methods; they are easier to understand, less expensive, and seldom less accurate. To select a judgmental method, determine whether there are large changes, frequent forecasts, conflicts among decision makers, and policy considerations. To select a quantitative method, consider the level of knowledge about relationships, the amount of change involved, the type of data, the need for policy analysis, and the extent of domain knowledge. When selection is difficult, combine forecasts from different methods.

Keywords: Accuracy, analogies, combined forecasts, conjoint analysis, cross-sectional data, econometric methods, experiments, expert systems, extrapolation, intentions, judgmental bootstrapping, policy analysis, role playing, rule-based forecasting, structured judgment, track records, and time-series data.
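
As an illustration of the structured-judgment approach, the sketch below rates candidate methods against prespecified criteria and ranks them by a weighted score. The methods, criteria, weights, and ratings are hypothetical examples, not recommendations from the chapter.

```python
# A minimal sketch of structured judgment for method selection: experts rate
# candidate methods on prespecified criteria, and a weighted score ranks them.

criteria_weights = {"accuracy": 0.5, "cost": 0.2, "ease_of_understanding": 0.3}

# Hypothetical expert ratings of each method on each criterion (1 = poor, 5 = excellent).
ratings = {
    "extrapolation":     {"accuracy": 3, "cost": 5, "ease_of_understanding": 4},
    "econometric_model": {"accuracy": 4, "cost": 2, "ease_of_understanding": 3},
    "unaided_judgment":  {"accuracy": 2, "cost": 4, "ease_of_understanding": 5},
}

def weighted_score(method_ratings, weights):
    """Combine one method's criterion ratings into a single score."""
    return sum(weights[criterion] * rating for criterion, rating in method_ratings.items())

scores = {method: weighted_score(r, criteria_weights) for method, r in ratings.items()}
for method, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{method}: {score:.2f}")
```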

Expert Systems

Abstract of "Expert systems for forecasting," Fred Collopy, The Weatherhead School, Case Western Reserve University, J. Scott Armstrong, The Wharton School, University of Pennsylvania, and Monica Adya, Department of Information Systems, University of Maryland at Baltimore


Expert systems use rules to represent experts’ reasoning in solving problems. The rules are based on knowledge about methods and the problem domain. To acquire knowledge for an expert system, one should rely on a variety of sources, such as textbooks, research papers, interviews, surveys, and protocol analysis. Protocol analysis is especially useful if the area to be modeled is complex or if experts lack an awareness of their processes. Expert systems should be easy to use, incorporate the best available knowledge, and reveal the reasoning behind the recommendations they make. In forecasting, the most promising applications of expert systems are to replace unaided judgment in cases requiring many forecasts, to model complex problems where data on the dependent variable are of poor quality, and to handle semi-structured problems. We found 15 comparisons of forecast validity involving expert systems. As expected, expert systems were more accurate than unaided judgment in six comparisons, less accurate in one, and tied in one. Expert systems were less accurate than judgmental bootstrapping in two comparisons, with two ties. There was little evidence with which to compare expert systems and econometric models; expert systems were better in one study and tied in two.

Keywords: inductive techniques, judgmental bootstrapping, knowledge acquisition, production systems, protocol analysis, retrospective process tracing.
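
The sketch below illustrates the general idea of a rule-based forecasting system: explicit if-then rules, of the kind elicited from domain experts, adjust a baseline forecast, and the system reports the reasoning behind its recommendation. The rules and numbers are hypothetical illustrations, not rules from the chapter.

```python
# A minimal sketch of a rule-based (expert system) forecaster that applies
# explicit rules and explains the reasoning behind its recommendation.

def rule_based_forecast(last_value, recent_trend, domain):
    """Apply simple expert rules to produce a forecast and an explanation."""
    forecast = last_value
    reasons = []

    if domain.get("expected_large_change"):
        forecast += recent_trend        # follow the trend when a large change is expected
        reasons.append("Large change expected, so the recent trend is extrapolated in full.")
    else:
        forecast += 0.5 * recent_trend  # damp the trend when the situation looks stable
        reasons.append("No large change expected, so the trend is damped by half.")

    if domain.get("data_quality") == "poor":
        forecast = 0.7 * forecast + 0.3 * last_value
        reasons.append("Poor data quality, so the forecast is pulled toward the last value.")

    return forecast, reasons

value, why = rule_based_forecast(last_value=100.0, recent_trend=10.0,
                                 domain={"expected_large_change": False,
                                         "data_quality": "poor"})
print(value)
for line in why:
    print("-", line)
```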

Conjoint Analysis

Abstract of "Forecasting with conjoint analysis," Dick R. Wittink, Yale University, and Trond Bergestuen, Johnson Graduate School of Management, Cornell University


In this chapter we briefly describe conjoint analysis, a survey-based method heavily used by managers to obtain consumer input that guides new-product decisions. The commercial popularity of the method suggests that conjoint results improve management decisions. However, due to practical complexities, it is very difficult to obtain incontrovertible evidence about the external validity of conjoint results. Published studies, in which the predictive validities of alternative conjoint procedures are compared, typically rely on holdout tasks. We introduce and discuss six principles relevant to the forecast accuracy of conjoint results.
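
To illustrate the basic mechanics, the sketch below estimates part-worth utilities by regressing one respondent's preference ratings on dummy-coded attribute levels, then scores a holdout profile. The attributes, profiles, and ratings are hypothetical illustrations, not data from the chapter.

```python
# A minimal sketch of the basic conjoint estimation step: ordinary least squares
# regression of preference ratings on dummy-coded attribute levels yields
# part-worth utilities, which can then score new (holdout) product profiles.
import numpy as np

# Each profile: [intercept, high_price, long_battery, has_warranty]
# (dummy-coded attribute levels; the reference profile is all zeros).
profiles = np.array([
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 1],
], dtype=float)

# One respondent's preference ratings for the eight profiles (0-10 scale).
ratings = np.array([5.0, 3.0, 7.0, 6.0, 5.5, 8.0, 4.0, 6.5])

# Least squares gives the part-worth utility of each attribute level.
part_worths, *_ = np.linalg.lstsq(profiles, ratings, rcond=None)
print("part-worths:", part_worths)

# Predicted preference for a holdout profile: high price, long battery, no warranty.
holdout = np.array([1, 1, 1, 0], dtype=float)
print("predicted rating:", holdout @ part_worths)
```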

Judgmental Bootstrapping

Abstract of "Judgmental bootstrapping: Inferring experts' rules for forecasting," J. Scott Armstrong, Wharton School, University of Pennsylvania


Judgmental bootstrapping is a type of expert system. It translates an expert's rules into a quantitative model by regressing the expert's forecasts against the information the expert used. Bootstrapping models apply an expert's rules consistently, and many studies have shown that decisions and predictions from bootstrapping models are similar to those from the experts. Three studies showed that bootstrapping improved the quality of production decisions in companies. To date, research on forecasting with judgmental bootstrapping has been restricted primarily to cross-sectional data, not time-series data. Studies from psychology, education, personnel, marketing, and finance showed that bootstrapping forecasts were more accurate than forecasts made by experts using unaided judgment. They were more accurate in eight of eleven comparisons, less accurate in one, and there were two ties. The gains in accuracy were generally substantial. Bootstrapping can be useful when historical data on the variable to be forecast are lacking or of poor quality; otherwise, econometric models should be used. Bootstrapping is most appropriate for complex situations where judgments are unreliable and where experts' judgments have some validity. When many forecasts are needed, bootstrapping is cost-effective. If experts differ greatly in expertise, bootstrapping can allow one to draw upon the forecasts made by the best experts. Bootstrapping aids learning; it can help to identify biases in the way experts make predictions, and it can reveal how the best experts make predictions. Finally, judgmental bootstrapping offers the possibility of conducting experiments when the historical data for causal variables have not varied over time. Thus, it can serve as a supplement for econometric models.

Keywords: conjoint analysis, expert systems, protocols, regression, reliability
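
A minimal sketch of the bootstrapping step described above: regress an expert's own forecasts on the cues the expert reports using, then apply the fitted rule consistently to new cases. The cues and numbers are hypothetical illustrations, not data from the chapter.

```python
# A minimal sketch of judgmental bootstrapping: infer a linear rule from an
# expert's past forecasts, then apply that rule consistently to new cases.
import numpy as np

# Cues for past cases the expert judged: [intercept, past_sales, ad_budget]
cues = np.array([
    [1, 100.0, 10.0],
    [1, 120.0,  5.0],
    [1,  90.0, 20.0],
    [1, 150.0, 15.0],
    [1, 110.0,  8.0],
], dtype=float)

# The expert's forecasts for those same cases (not the actual outcomes).
expert_forecasts = np.array([115.0, 122.0, 112.0, 160.0, 118.0])

# Fit the bootstrapping model: a linear rule inferred from the expert's judgments.
weights, *_ = np.linalg.lstsq(cues, expert_forecasts, rcond=None)
print("inferred rule weights:", weights)

# Apply the inferred rule consistently to a new case.
new_case = np.array([1, 130.0, 12.0])
print("bootstrapped forecast:", new_case @ weights)
```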