
Much has been learned in the past half century about producing useful forecasts. Those new to the area may be interested in answers to commonly asked questions. You can access the information quickly through either the Topics menu or the Key Words menu immediately below. Or you can go directly to the Answers to Frequently-Asked Questions.


Topics menu

The menu below expands to show the relevant questions: click on a topic [Note: your cursor will not change its appearance], then click on a question to see the answer.

  1. Forecasting, the field
    1. What does the field of forecasting encompass?
    2. How does forecasting relate to planning?
    3. Where does knowledge about forecasting come from?
  2. Types of forecasting problems
    1. Can you give me examples of different types of forecasting problems?
    2. How should you structure a forecasting problem?
    3. Is it important to use up-to-date data?
    4. Is it important to use much data?
    5. Should I collect as much data of different kinds as possible?
    6. Do I need to know about what causes changes in the thing I'm forecasting?
    7. I know the model's forecast is wrong. Should I adjust it?
    8. How can I incorporate expert knowledge about the situation, especially knowledge about causality, in my forecast?
    9. How can I forecast if I don't have much quantitative (numerical) data?
  3. Common sense and forecasting
    1. Isn't common sense enough? That is, wouldn't it be difficult to improve upon good judgment?
    2. What methods are commonly used for forecasting?
    3. How can I learn about forecasting methods?
    4. Is software available that can help people to implement forecasting methods?
    5. How can I find the meanings of terms forecasters use?
  4. Choosing the best method
    1. What method is best for my situation?
    2. If evidence is unclear about which method is best, how should I decide among them?
    3. Which well-accepted methods should be used to provide benchmark forecasts?
    4. Is combining (averaging) forecasts a good idea?
  5. Assessing strengths and weaknesses of procedures
    1. How can I tell whether my organization would benefit from a formal approach to forecasting?
  6. Accuracy of forecasts
    1. Aren’t forecasts wrong more often than they are right?
    2. How should the accuracy of forecasts be compared?
    3. How can I estimate the uncertainty of, or confidence in, forecasts?
  7. Examining alternative policies
    1. What do you mean by "policies"?
    2. What methods are commonly used to predict the effects of different policies?
    3. What do I need to know about the situation to make useful predictions?
    4. What conditions apply to making useful policy predictions?
  8. New products
    1. What can be done when there is no history?
    2. Are growth curves and diffusion models useful for forecasting new products?
  9. Behavior in conflicts, such as negotiations and wars
    1. Can useful predictions be made in such complex situations as negotiations or war?
    2. What about when extraordinary people are involved in the conflict?
  10. Effect of changing technology
    1. Of what value are forecasts that try to predict the discovery and impact of future technologies?
  11. Stocks and commodities
    1. Can changes in the stock market be accurately forecast? (Or, if you forecasters are so smart, why aren’t you rich?)
    2. People pay for stock market forecasts. Are you saying that they are irrational?
  12. Gaining acceptance
    1. I’ve adhered to forecasting principles; now how do I get management to use my forecasts?
    2. How can I best respond to criticism of my forecasts?
    3. Won’t our forecasts affect people’s decisions?
    4. To what extent can and should extreme but unusual events be taken into account?
  13. Keeping up to date
    1. How can a practitioner best keep up-to-date with new developments?
  14. Reading to learn more
    1. What other reading can I do to help me learn more?
  15. Help on forecasting
    1. Who can do forecasting?
    2. What organizations and publications are devoted to the subject of forecasting?
    3. How can I find someone to assist me?
  16. References on forecasting

Key Words Menu

accuracy - F
alternative policies - G
benchmarks - D4, F1, L2
causal variables - B5, G3
combining - D3, D5
common sense - C
conflict with others - I
data - B3, B7
definitions - C5
econometrics - B5, G2
errors - F
expert witness - O1
experts - B6, B7, C1, H1, I1
extrapolation - B5
forecasting (description) - A1
forecasting (examples) - B1
help - O
judgment - B7, C1, C2
markets - K
methods - C2, C3, D
new products - H
Nobel Prizes - O1
organizations - O2
persuasion (sell forecasts) - L
planning - A2
research - A3
scenarios - L1
software - C1
statistical methods - B5, B6, C2
technology - J


Answers to frequently-asked questions (FAQ)

Click on any highlighted word for a pop-up definition or on a highlighted reference to a publication for more information (Note: your browser must be set to allow pop-up windows or the definitions are not available to you).

A. Forecasting, the field

What is the field of forecasting?

The field of forecasting is concerned with approaches to determining what the future holds. It is also concerned with the proper presentation and use of forecasts. The terms “forecast,” “prediction,” “projection,” and “prognosis” are typically used interchangeably. Forecasts may be conditional. That is, if policy A is adopted then X will occur. Often forecasts are made for future values of a time-series; for example, the number of babies that will be born in a year, or the likely demand for compact cars. Alternatively, forecasts can be of one-off events such as the outcome of a union-management dispute or the performance of a new recruit. Forecasts can also be of distributions such as the locations of terrorist attacks or the occurrence of heart attacks among different age cohorts. The field of forecasting includes the study and application of judgment as well as of quantitative or statistical methods.

How does forecasting relate to planning?

Forecasting is concerned with what the future will look like, while planning is concerned with what it should look like. One would usually start by planning. The planning process produces a plan that is, along with information about the situation, an input to the forecasting process. If the organization does not like the forecasts generated by the forecasting process, it can generate other plans until a plan is found that leads to forecasts of acceptable outcomes. Of course, many organizations take a shortcut and merely change the forecast. (This is analogous to a family deciding to change the weather forecast so they can go on a picnic). For more on the roles of planning and forecasting, see “Strategic Planning and Forecasting Fundamentals” (PDF).

Where does knowledge about forecasting come from?

Research on forecasting has produced many changes in recommended practice, especially since the 1960s. We refer to recommended or best practice as principles. Most principles were derived from empirical comparisons of alternative forecasting methods. The most influential paper describing the findings of research of this kind is the M-competition paper (Makridakis et al. 1982). The M-competition was followed by other competitions, the most recent being the M3-Competition (Ord, Hibon, and Makridakis 2000).

Emphasizing empirical findings may appear to be obviously desirable. However, the approach is not always adopted, as empirical evidence sometimes conflicts with common beliefs about how to forecast. For example, the advice to base forecasts on regression models that fit historical time-series data has had a detrimental effect on accuracy. Sometimes the research findings have been upsetting to academics, such as the discovery that relatively simple models are more accurate than complex ones in many situations (Hogarth, 2010).

This site and a book, Principles of Forecasting, are outputs of the Forecasting Principles project. The project is an attempt to summarize findings from all prior research in a form that can be readily used by researchers, educators, students, and practitioners.

Back to top

B. Types of forecasting problems

Can you give me examples of different types of forecasting problems?

Sure. Forecasting problems can be posed as questions. Here are some examples. How many babies will be born in Pittsburgh, PA in each of the next five years? Will the incumbent president be elected for a second term? (See Political Forecasting.) Will a 3.5% pay offer avert the threatened strike? (See Conflict Forecasting.) How much inventory should we aim to hold at the end of this month for each of 532 items? What will be the growth rate of the economy over the next three years? Taking account of technical matters and concern among some communities, how long will it take to complete a planned pipeline? In which areas should policing efforts be concentrated in order to have the greatest effect on property crime? (See Crime Forecasting.) Which will be the most prevalent diseases in the U.K. ten years from now?

How should you structure a forecasting problem?

Forecasting is concerned with how to collect and process information. Decisions about how to structure a forecasting problem can be important. For example, when should one decompose a problem and address each component separately? Forecasting includes such prosaic matters as obtaining relevant up-to-date data, checking for errors in the data, and making adjustments for inflation, working days, and seasonality. Forecast error sometimes depends more on how information is used than on getting ever more accurate information. The question of what information is needed and how it is best used is determined by the selection of forecasting methods.

Is it important to use up-to-date data?

Yes. This common wisdom is supported by research, although it is often violated in practice.

Is it important to use much data?

It is important to use data that spans a long time period or a wide range of similar situations. Doing so will reduce the risk that you will mistake short-term variations for fundamental trends or local anomalies for general findings.

Should I collect as much data of different kinds as possible?

Surprisingly, studies suggest that judgmental forecasters can become so overwhelmed with information that forecast accuracy is reduced (Armstrong 1985, pp. 100-102). Thus, there are conditions under which it can be valuable to discard relevant information; for example, if one has only limited time or resources to properly take account of all relevant information (Goldstein & Gigerenzer 2009). However, if you use formal methods to forecast, you should collect relevant data of different kinds.

Do I need to know about what causes changes in the thing I'm forecasting?

Not always. Much research has been done on how to forecast using only historical data on the variable that is to be forecast. For example, airline call-center traffic could be forecast using extrapolation methods. As shown in the Methodology Tree, extrapolation methods are useful in many situations.

I know the model’s forecast is wrong. Should I adjust it?

People often think that they know better and revise forecasts from quantitative methods, usually reducing accuracy as a result. However, structured judgmental adjustment can be useful if (1) recent events are not fully reflected in the data, (2) experts possess good domain knowledge about future changes, or (3) it is not possible to include key variables in the model. In general, minor revisions should be avoided. If the conditions are met, use written instructions for the task, solicit written adjustments, request adjustments from a group of experts, ask for adjustments to be made before the experts see the forecast from a given method, record reasons for the revisions, and examine prior forecast errors. For details, see Goodwin (2005).

How can I incorporate expert knowledge about the situation, especially knowledge about causality, in my forecast?

People often have useful knowledge about a particular problem, which is referred to as domain knowledge. One approach to making effective use of domain knowledge is to provide graphical decision support for judgmental forecasting (Edmundson, 1990). Rule-based forecasting is another approach that combines expert domain knowledge with statistical techniques for extrapolating time series. When using rule-based forecasting, most series features can be identified automatically, but domain knowledge is needed to identify some features, particularly causal forces acting on trends (Collopy and Armstrong, 1992).

If data are available on variables that are known to affect the situation of interest, causal models are possible. Theory, prior research, and expert domain knowledge provide information about relationships between the variable to be forecast and causal variables. Since causal models can relate planning and decision-making to forecasts, they are useful if one wants to create forecasts that are conditional upon different states of the environment. More important, causal models can be used to forecast the effects of different policies.

Regression analysis is suitable if there are only a few relevant variables and there are many reliable observations that include causal variables changing independently of each other. Regression analysis involves estimating causal model coefficients from historical data. Models consist of one or more regression equations used to represent the relationships between a dependent variable and explanatory variables. Important principles for developing regression or econometric models are to (1) use prior knowledge and theory, not statistical fit, for selecting variables and for specifying the directions of effects, (2) use simple models, and (3) discard variables if the estimated relationship conflicts with theory or prior evidence.
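
To make the mechanics concrete, here is a minimal sketch in Python. The data and the variable names (price, advertising, sales) are invented for illustration; the point is only to show theory-based variable selection and sign-checking, not to recommend a specification.

    # A minimal regression-based causal model on made-up annual data.
    # Variables were chosen from prior knowledge, not from statistical fit.
    import numpy as np

    price       = np.array([1.00, 1.05, 1.10, 1.08, 1.15, 1.20, 1.18, 1.25])
    advertising = np.array([10, 12, 11, 14, 13, 15, 16, 18], dtype=float)
    sales       = np.array([100, 98, 96, 101, 95, 93, 97, 92], dtype=float)

    # Simple model: sales = b0 + b1*price + b2*advertising (+ error).
    X = np.column_stack([np.ones_like(price), price, advertising])
    (b0, b1, b2), *_ = np.linalg.lstsq(X, sales, rcond=None)

    # Check the estimated signs against theory: a higher price should reduce
    # sales (b1 < 0) and more advertising should raise them (b2 > 0); if an
    # estimate conflicts with such prior knowledge, drop or constrain it.
    print(b0, b1, b2)

    # Conditional forecast for a proposed policy: price 1.30, advertising 20.
    print(b0 + b1 * 1.30 + b2 * 20)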

The index method can be used if there are many variables and much domain knowledge about how the variables affect an outcome. Many forecasting problems involve few observations and many relevant variables, and such problems are more realistically modeled using the index method. Index scores are calculated by adding the values of the explanatory variables, which may be assessed subjectively as, for example, zero or one. If there is good prior domain knowledge, explanatory variables may be weighted relative to their importance. Index scores can be used as forecasts of the relative likelihood of an event. They can also be used to predict numerical outcomes, for example, by regressing index scores against historical data on a quantitative dependent variable.
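
As an illustration of the arithmetic only, here is a minimal sketch assuming a hypothetical problem of forecasting which of two job candidates is more likely to perform well; the variables and scores are invented for the example.

    # Index method sketch: each variable judged relevant from domain knowledge
    # is scored 0 or 1; the index score is the (here unweighted) sum.
    candidates = {
        "A": {"relevant_experience": 1, "strong_references": 1,
              "specialist_training": 0, "past_performance": 1},
        "B": {"relevant_experience": 1, "strong_references": 0,
              "specialist_training": 0, "past_performance": 0},
    }

    for name, scores in candidates.items():
        print(name, sum(scores.values()))  # A scores 3, B scores 1

    # The higher score is the forecast of the relatively more likely outcome.
    # With good prior knowledge, variables could be weighted by importance, or
    # scores regressed against historical outcomes to predict a numerical value.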

Segmentation is useful when a heterogeneous whole can be divided into homogeneous parts that respond in different ways to changes in causal variables, and that can be forecast more accurately than the whole. For example, in the airline industry, price has different effects on business and personal travelers. Appropriate forecasting methods can be used to forecast individual segments. For example, separate regression models can be estimated for each segment. Armstrong (1985, p. 287) reported on three comparative studies on segmentation. Segments were forecast either by extrapolation or regression analysis. Segmentation improved accuracy in all three studies.
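
The sketch below illustrates the idea with invented figures: two traveler segments are assumed to respond to a fare increase with different (hypothetical) price elasticities, each segment is forecast separately, and the segment forecasts are summed.

    # Segmentation sketch: forecast homogeneous segments separately, then sum.
    segments = {
        # segment: (current demand, assumed price elasticity) -- made-up values
        "business": (1000.0, -0.3),
        "personal": (3000.0, -1.2),
    }
    fare_increase = 0.10  # forecast the effect of a 10% fare increase

    total = 0.0
    for name, (demand, elasticity) in segments.items():
        forecast = demand * (1 + elasticity * fare_increase)
        total += forecast
        print(f"{name}: {forecast:.0f}")

    print(f"total: {total:.0f}")  # compare with modeling the whole market at once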

How can I forecast if I don’t have much quantitative (numerical) data?

It is often the case that one would like a forecast but there is little or no quantitative data. All is not lost! If you have a look at the left hand (judgmental) branch of the Methodology Tree, you will see a reassuring variety of forecasting methods that do not depend upon quantitative data.

Back to top

C. Common sense and forecasting

Isn’t common sense enough? That is, wouldn’t it be difficult to improve upon good judgment?

One reason for avoiding judgmental forecasts is that, in many cases, they are more expensive than quantitative methods. If it is necessary to make inventory control forecasts every week for each of 50,000 items, judgment cannot be used. Another reason for avoiding judgmental forecasts is that they are usually less accurate than formal methods. Research has shown that judgmental forecasts are subject to many biases, such as optimism and overconfidence. Nigel Harvey (2001) described how to overcome many of these biases.

If you need convincing that credible experts often make abysmal forecasts, see Cerf and Navasky (1998). For example, John von Neumann in 1956 said “A few decades hence, energy may be free - just like un-metered air.” Tetlock (2005) analyzed more than 82,000 expert predictions of political and economic events. He found that experts were little more accurate than simple rules like ‘predict no change’ or ‘predict most recent rate of change’.

What methods are commonly used for forecasting?

As shown in the Methodology Tree, forecasting methods can be classified into those that are based primarily on judgmental sources of information and those that use statistical data. There is overlap between some judgmental and statistical approaches.

How can I learn about forecasting methods?

Many books have been published about forecasting. For a listing of those published since 1990, along with reviews, see Text/Trade Books. One of the more popular is Makridakis, Wheelwright, and Hyndman (1998); now in its third edition, it describes how to use a variety of methods. The International Symposium on Forecasting brings together practitioners, academics, and software exhibitors in June or July of each year. The purpose of the Principles of Forecasting book is to summarize knowledge about forecasting methods. And, of course, there is this site.

Is software available that can help people to implement forecasting methods?

There are many good special-purpose forecasting programs. For descriptions, reviews, and surveys, go to Software. Some programs help the user to conduct validations of ex ante forecasts by making it easy to use successive updating and by providing a variety of error measures. The Delphi software allows users to conduct multiple round surveys among anonymous experts. For an assessment of software, see Use of Principles.

How can I find the meanings of terms forecasters use?

Forecasting methods and principles have been developed in many different fields, such as statistics, economics, psychology, finance, marketing, and meteorology. The primary concern of researchers in each field is to communicate with other academics in their field. The Forecasting Dictionary has been developed to aid communication among groups.

Back to top

D. Choosing the best method

What method is best for my situation?

There are many methods that can be used to forecast. Which are relevant to your situation depends upon your objectives and the conditions you face (such as what types of data are available). To find a method using a framework based on prior research, use the Selection Tree. Often, there is no single best method. In fact, it is best to use different methods and combine their forecasts.

If evidence is unclear about which method is best, how should I decide among them?

In general, do not try to identify the best method. Combine forecasts if it is suitable to use several methods. You should select methods that seem relevant, make forecasts with each, and then average the forecasts. This advice is based on findings from 30 comparative studies as described in Combining Forecasts. This procedure always improved forecasts and the typical error reduction was 12%. As shown by Graefe et al. (2010), you will most likely achieve even higher gains in accuracy if you combine forecasts that use different methods and draw upon different data.
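
As a simple illustration of the mechanics only (the methods named and the numbers below are placeholders, not outputs of the studies cited), the combined forecast is just the average of the individual forecasts:

    # Combining sketch: obtain forecasts from several relevant methods,
    # then take a simple (equally weighted) average.
    forecasts = {
        "extrapolation":    105.0,   # placeholder outputs from whatever
        "regression_model": 112.0,   # methods suit the problem and the data
        "expert_survey":     99.0,
    }

    combined = sum(forecasts.values()) / len(forecasts)
    print(combined)  # 105.33..., the combined forecast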

If you still want to know which method is best, one approach you could take is the same as that of forecasting researchers: conduct your own tournament or competition among alternative forecasting methods. A number of forecasting software packages are useful for conducting tournaments. To do this successfully you need to meet three conditions: 1) compare methods on the basis of ex ante performance (the accuracy of the forecasts, not how well they fit the historical data); 2) compare forecasts with those from well-accepted methods; and 3) use an adequate sample of forecasts. An example of this procedure can be found in Collopy, Adya, and Armstrong (1994). See details in Armstrong (2001a).

Which well-accepted methods should be used to provide benchmark forecasts?

The simplest forecasting method for time series is the random walk. It assumes that the future value of a time series will be equal to the current value. In other words, one does not have useful information about the future changes in the series: it is equally likely to go up or down. The random walk provides relatively accurate forecasts in many circumstances, and should serve as one basis for comparing performance. Forecasts from simple exponential smoothing methods are also used as benchmarks in series where long-term trends are well established (as in long-term extrapolations of stock markets). For cross-sectional data, one can use the group average (or “base rate”) as a forecast.
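
For concreteness, here is a minimal sketch of the two simplest benchmarks described above; the numbers are made up.

    import statistics

    # Time series: the random walk (naive) benchmark repeats the latest value.
    history = [120, 125, 123, 130, 128]
    print(history[-1])  # 128 is the benchmark forecast for future periods

    # Cross-sectional data: the group average ("base rate") is the benchmark
    # forecast for any individual case.
    past_outcomes = [0.8, 1.1, 0.9, 1.3, 1.0]
    print(statistics.mean(past_outcomes))  # 1.02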

When available, markets offer good benchmarks. Market prices are based on large numbers of people who put money on their forecasts. So, for example, studies since the 1930s have shown that, without inside knowledge, it has been impossible to improve upon current prices in forecasting the stock market. Prediction markets are conducted for the primary purpose of aggregating information; they use the price system of a market to incorporate human judgment and translate it into a numerical estimate (Wolfers and Zitzewitz 2004). Numerous forecasts (e.g., on who will be the next president, how much money a movie will make in its opening weekend, or whether the government will pass a bailout or attack a certain country) from many public prediction markets are easily obtainable at virtually no cost and thus offer good benchmarks. However, the benefits from implementing prediction markets within an organization might be limited. A review of the empirical research available to date found that prediction markets were little more accurate than alternative forecasting methods and that they are difficult to implement (Graefe & Armstrong 2010).

Is combining (averaging) forecasts a good idea?

It is still common to hear the recommendation that one should search for the one correct model of reality to accurately forecast what will happen, and that averaging forecasts from different methods yields only average performance. These and other reasons why people do not combine forecasts are described in Graefe et al. (2010).

As intellectually appealing as these arguments might seem, a large body of research confirms that combining forecasts from different methods and from independent experts improves accuracy. One perfect model is seldom applicable in the management and social sciences. Combining helps forecast accuracy by evening out biases and by including diverse information and models, each capturing different aspects of reality (Armstrong 2001b; Graefe et al. 2010).

Back to top

E. Assessing strengths and weaknesses of procedures

How can I tell whether my organization would benefit from a formal approach to forecasting?

You could conduct an audit of your organization’s procedures. It should take you about two hours to do the audit, depending on the complexity of the problem and on your expertise with forecasting methods.

Back to top

F. Accuracy of forecasts

Aren’t forecasts wrong more often than they are right?

This is a trick question. Some things are inherently difficult to forecast and, when forecasting numerical quantities, forecasters can seldom be exactly right. To be useful, a method must provide forecasts that are more accurate than chance. This condition can often be met, but one should not assume that it will be. A good forecasting procedure is one that is better than other reasonable alternatives. Benchmark forecast errors are available for corporate earnings, new products, sales, and employment.

How should the accuracy of forecasts be compared?

Forecast accuracy is compared by measuring errors. In general, the error measure used should be the one that most closely relates to the decision being made. Ideally, it should allow you to compare the benefits from improved accuracy with the costs for obtaining the improvement. Unfortunately, this is seldom possible to assess, so you might simply use the method or methods that provide the most accurate forecasts. Some commonly used error measures are Mean Absolute Deviation (MAD), Mean Square Error (MSE), R2 (or R-squared), Mean Absolute Percentage Error (MAPE), Median Absolute Percentage Error (MdAPE), and Median Relative Absolute Error (MdRAE).

The selection of a measure depends upon the purpose of the analysis. For example, when making comparisons of accuracy across a set of time series, it is important to control for scale, the relative difficulty of forecasting each series, and the number of forecasts being examined. For more detailed information, see the set of papers published in the International Journal of Forecasting, 8 (1992), 69-111.

A word of caution: two popular measures should be avoided. The first of these, R2 (which assesses the pattern of the forecasts relative to that of the actual data), is not particularly useful to forecasters and its use does more harm than good when using time-series data. The second, Mean Square Error, should not be used because it is unreliable; in addition, it is difficult to explain the results to decision makers (Armstrong 2001a).
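
The sketch below shows how the recommended percentage-based measures can be computed on a small set of made-up forecasts and actuals (MSE and R2 are omitted, per the caution above); the random-walk benchmark forecasts are also invented.

    import statistics

    actuals    = [100.0, 110.0,  95.0, 120.0]
    forecasts  = [ 90.0, 115.0, 100.0, 108.0]
    benchmarks = [ 95.0, 100.0, 110.0,  95.0]  # e.g., random walk forecasts

    # Absolute percentage errors for MAPE and MdAPE.
    ape = [abs(f - a) / abs(a) * 100 for f, a in zip(forecasts, actuals)]
    print(statistics.mean(ape))    # MAPE
    print(statistics.median(ape))  # MdAPE

    # Relative absolute errors compare each error with the benchmark's error
    # on the same observation (undefined if the benchmark is exactly right).
    rae = [abs(f - a) / abs(b - a) for f, a, b in zip(forecasts, actuals, benchmarks)]
    print(statistics.median(rae))  # MdRAE; below 1 means better than the benchmark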

How can I estimate the uncertainty of, or confidence in, forecasts?

In many situations it is useful to be, say, “95% confident” that the actual value will be between X and Y, or that the actual outcome will be Z. Unfortunately, uncertainty over the forecast horizon typically cannot be well estimated from how closely forecasts from a method fit the historical data. For advice on estimating uncertainty in time series forecasts, see Chatfield (2001). In general, the best that can be done is to simulate the forecasting situation as closely as possible. Thus, to determine how well one can forecast two years into the future, examine a sample of two-year-ahead ex ante forecasts. “Ex ante” means that you are looking as if “from before” and you do not use knowledge about the situation after the starting point for forecasting.
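
Here is a minimal sketch of that approach, using a short made-up series and the naive (random walk) method purely for illustration; in practice one would use the actual forecasting method and a much larger sample of ex ante errors.

    # Estimate uncertainty from the distribution of two-year-ahead ex ante errors.
    history = [100, 104, 101, 99, 105, 103, 98, 104, 107, 102]

    errors = []
    for origin in range(len(history) - 2):
        forecast = history[origin]        # naive two-year-ahead forecast
        actual = history[origin + 2]
        errors.append(actual - forecast)

    errors.sort()
    # With many such errors, the 2.5th and 97.5th percentiles of this
    # distribution would bound an approximate 95% prediction interval;
    # with this tiny sample we just report the observed range.
    latest = history[-1]
    print(f"point forecast: {latest}, "
          f"rough interval: [{latest + errors[0]}, {latest + errors[-1]}]")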

Back to top

G. Examining alternative policies

What do you mean by "policies"?

It is common to hear debate about, for example, which health policy a government should or will adopt. A government policy when adopted might take the form of law or regulations or instructions to government employees. We use the term “policy” broadly to include, for example, the prices a company charges for its products, the arrangement of employees’ work space, the type of information a board provides to shareholders, the extent to which pesticide residues are monitored, the setting of the overnight cash rate, etc.

You may wish to forecast either which would be the best policy alternative to adopt, or the effect of external policy changes on your organization.

What methods are commonly used to predict the effects of different policies?

Judgmental bootstrapping, expert systems, conjoint analysis, and causal methods (see Allen & Fildes 2001 for the latter) are the primary methods.

What do I need to know about the situation to make useful predictions?

In brief, you need to know about the relevant causal relationships. Theory is helpful. For example, in many situations it is useful to know the economic theory that an increase in the price of a thing will tend to lead to a decrease in the quantity demanded, and vice versa. Experts will tend to know about evidence from prior research, such as which causal (explanatory) variables are important, and the direction (and magnitude) of relationships.

What conditions apply to making useful policy predictions?

Useful policy predictions require that (1) there is a strong causal relationship between the policy and the thing you are interested in forecasting; (2) the relationship can be estimated; (3) the policy is likely to change substantially over the forecast horizon, or you wish to predict the effect if it did change; and (4) you decide what the policy will be, you can predict what it will be, or you wish to develop contingency plans in case of a change.

Back to top

H. New products

What can be done when there is no history?

In situations where there is no history, one has to use judgmental methods. These include expert opinions, and the intentions or expectations of customers. In general, structured approaches provide forecasts that are more accurate than unstructured ones.

One such structured method is the Delphi method (Rowe and Wright 1999). Delphi involves asking from 5 to 20 independent, heterogeneous, and unbiased experts to make independent forecasts and to provide justification. An administrator provides anonymous feedback in the form of statistics and justifications, and the process is repeated until the individual experts’ forecasts change little between rounds. The Delphi forecast is the median of the experts’ final forecasts.

If there is data on analogous situations, use this information in a structured way (Green and Armstrong 2007).

The way that a forecasting problem is structured can have a big impact on forecasts. Decomposition of the problem is useful under certain conditions (MacGregor 2001).

Are growth curves and diffusion models useful for forecasting new products?

It seems reasonable to expect that diffusion models would be relevant for predicting demand for many types of products, and hundreds of studies have been published on the topic. However, little validation research has been done, so it is difficult to say whether they do, in fact, help. See Meade and Islam’s (2001) review.

Back to top

I. Behavior in conflicts, such as negotiations and wars

Can useful predictions be made in such complex situations as negotiations or war?

This is an empirical question. The answer is “no” if unaided expert judgment, whether by an individual or a group, is used. Fortunately, there are two methods that do provide useful forecasts about behavior in conflict situations: structured analogies and simulated interaction (an adaptation of role playing). Evidence on the former method is given in Green and Armstrong (2007) and evidence on the latter is provided in Green (2002) and Green (2005).

What about when extraordinary people are involved in the conflict?

The belief that people’s decisions are a reflection of their personality rather than a common response to the situation they are in is widely held and has been termed the “fundamental attribution error”. Again the question is an empirical one, and the conflicts that have been used in research, which involved many extraordinary people, were forecast well by structured analogies and by simulated interaction. Descriptions of the real, but in some cases disguised, conflicts used in the research are available at conflictforecasting.com.

Back to top

J. Effect of changing technology

Of what value are forecasts that try to predict the discovery and impact of future technologies?

Forecasting the future of technology is a dangerous enterprise. Schnaars (1989) examined hundreds of technology forecasts. He found that there is a myopia, even among experts, that causes them to focus upon the future in terms of present conditions. Cerf and Navasky (1998) gave interesting examples of errors in expert judgments about the future of technology. Perhaps the most famous is the 1899 call by the US Commissioner of Patents to abolish the Patent Office on the grounds that there was nothing left to invent.

Back to top

K. Stocks and commodities

Can changes in the stock market be accurately forecast? (Or, if you forecasters are so smart, why aren’t you rich?)

Stock market prices, like prices in any market, represent the combined forecasts of a large number of unbiased experts when the experts have access to the available information. Studies since the early-1930s suggest that we are unlikely to find procedures to improve upon the forecasts of markets. However, we can predict that there will be a never-ending stream of claims by some people that they or their models can do better than the market. We need to remember that with so many experts making predictions, some will get it right simply by luck. With this fact in mind we should avoid being driven to blind faith that mathematics can reveal opportunities or that seers really do exist.

People pay for stock market forecasts. Are you saying that they are irrational?

There are different ways to look at this question. One is that information about stocks is diffused through these forecasts. Another is that people like to avoid the responsibility for forecasting, so they turn the job over to a medicine man.

Back to top

L. Gaining acceptance

I’ve adhered to forecasting principles; now how do I get management to use my forecasts?

The rational approach, “tell and sell”, does not seem to work well when the forecaster brings bad news that implies change is necessary. If possible, obtain prior agreement from forecast users on the methods you will use and get the client’s commitment to accept forecasts from the agreed-upon process. One approach that has been successful is the use of scenarios. For example, G. T. Chesney wrote a vivid description of the defeat of the British by the Germans in the 1872 Battle of Dorking for Blackwood’s Magazine. The scenario was published in 1871, creating both a sensation and policy changes. Note that “scenarios” does not mean “alternatives”, as the term is sometimes used in pop management and spreadsheets: scenarios are detailed written stories about what “happened” in the future. Principles for writing scenarios are summarized by Gregory and Duran (2001).

How can I best respond to criticism of my forecasts?

Following good forecasting practice does not guarantee accurate forecasts on every occasion. One approach you could take to answering critics is to compare the accuracy of your forecasts to a suitable benchmark. Unfortunately, benchmarks are not readily available for all types of forecasting. If there is no benchmark relevant to your forecasts, you will need to show that you followed best forecasting practice. To do this, you can conduct an audit of the forecasting process you used and, if you did adhere to the relevant principles, you will get a good report that you can show critics.

Won’t our forecasts affect people’s decisions?

Sometimes forecasts affect the thing being forecast. For example, a publicly announced prediction of shortages may cause people to stockpile, thereby ensuring a shortage. Alternatively a forecast of reduced sales in the September quarter may lead a manufacturer to run a promotional campaign to increase sales. In situations like these, you need to rely on evidence from academic research to determine whether your forecasting process is a good one. To find out whether this is so, you can conduct an audit.

To what extent can and should extreme but unusual events be taken into account?

An earthquake in California, a terrorist attack on a tourist resort, or the unexpected death of a political leader might lead to enormous forecast errors. Base rates (historical frequencies) are likely to provide adequate forecasts of such events. For example, one could consult actuarial life tables to assess the likelihood that a 57-year-old male political leader who does not smoke will die in the next three years. If the impact of the possible event on the forecast is high but the base rate is indeed low, you may want to consider sticking with your forecast but paying up to the expected cost of the event (probability * financial damage) in insurance premiums. Alternatively, it might be cheaper to accommodate unlikely eventualities in your organization’s plans.

Back to top

M. Keeping up to date

How can a practitioner best keep up-to-date with new developments?

If you are like most practitioners, you probably do not read the research literature. No need to feel guilty about that. A recent study found that only about 3% of the papers published on forecasting are useful (Armstrong & Pagell 2003). These papers, virtually all by academics, are difficult to find. Our estimate is that, on average, about one useful paper is published each month. Unfortunately, it might take many months to find it among all the other papers. Once found, such papers are often difficult to read, and they frequently omit key information such as the conditions under which the proposed method works. As a result, reading a stand-alone article may not be very useful.

The Forecasting Principles project was designed to relieve you of the responsibility for doing all this reading and interpretation. It finds and interprets research findings, puts them in the form of principles (given such a situation, do the following), and makes them freely available on this website (forecastingprinciples.com). Principles of Forecasting describes the principles and summarizes the evidence supporting them. So the short answer is that, once you are up to speed, check in on this web site to find out about new developments.

Back to top

N. Reading to learn more

What other reading can I do to help me learn more?

Reviews of books on forecasting are available on this site, as is a list of newer books awaiting review.

Back to top

O. Help on forecasting

Who can do forecasting?

Anyone is free to practice forecasting for most products and in most countries. This has not always been true. Societies have been suspicious of forecasters. In A.D. 357, the Roman Emperor Constantius made a law forbidding anyone from consulting a soothsayer, mathematician, or forecaster. He proclaimed: “…may curiosity to foretell the future be silenced forever."

It is sensible for a person practicing forecasting to have been trained in the most appropriate methods for the problems they face. Expert witnesses who forecast should expect to be examined on their familiarity with methods. One measure of witness expertise is whether they have published in the area in which they claim expertise. In a recent U.S. Supreme Court ruling, while publication was not accepted as a necessary condition for being an expert witness, it was regarded as an important qualification.

The development of well-validated forecasting methods has improved the status of forecasting expertise. Nobel Prizes for Economics have gone to economists, including Engle, Granger, Klein, Leontief, Modigliani, Prescott, Samuelson, and Tinbergen, who have contributed to forecasting methodology.

What organizations and publications are devoted to the subject of forecasting?

Some organizations provide forecasts. Because research on forecasting comes from many disciplines, since 1980 efforts have been made to unify the field. There is an academic institute (International Institute of Forecasters), two academic journals (the Journal of Forecasting and the International Journal of Forecasting), and a journal for practitioners (Foresight – The International Journal of Applied Forecasting).

How can I find someone to assist me?

Nearly every major business school has someone interested in forecasting. The International Institute of Forecasters (IIF) has a membership list and a list of consultants.

Back to top

P. Additional references on forecasting

Armstrong, J. Scott (2001), Principles of Forecasting: A Handbook for Researchers and Practitioners. Boston: Kluwer Academic Publishers. (Available on amazon.com)

Cerf, Christopher & Victor Navasky (1998), The Experts Speak, 2nd Edition. New York: Villard. (Available on amazon.com)

Edmundson, Robert H. (1990), "Decomposition: A strategy for judgmental forecasting," Journal of Forecasting, 9, 305-314.

Kahneman, Daniel, Paul Slovic & Amos Tversky (1982), Judgment Under Uncertainty. Cambridge: Cambridge University Press. (Available on amazon.com) - Review

Makridakis, Spyros et al. (1982), "The accuracy of extrapolation (time series) methods: Results of a forecasting competition," Journal of Forecasting, 1, 111-153. Commentary on this study was published in the Journal of Forecasting, 2, 259-311.

Ord, Keith, M. Hibon & S. Makridakis (2000), "The M3-Competition," International Journal of Forecasting, 16, 433-537.

Schnaars, Steven P. (1989), Megamistakes. New York: The Free Press. (Available on amazon.com)

Surowiecki, James (2004), The Wisdom of Crowds. New York: Doubleday. (Available on amazon.com)

Back to top

Updated in June 2010 by

J. Scott Armstrong, Marketing Department, The Wharton School, University of Pennsylvania

Kesten C. Green, International Graduate School of Business and Ehrenberg Bass Institute for Marketing Science, University of South Australia, Adelaide, South Australia

Andreas Graefe, Institute for Technology Assessment and Systems Analysis at the Karlsruhe Institute of Technology, Germany

Acknowledgements

Our thanks to Fred Collopy, Department of Information Systems, Weatherhead School of Management, Case Western Reserve University, who originally proposed the development of a FAQ and who helped with the first edition roughly a decade ago.

Over the years, this has been one of the most frequently read sections of the site. It has been updated many times. Each time we are surprised how much has been learned since the previous update. Forecasting continues to benefit from experimental and quasi-experimental research.

Back to top