Exam results: scores ranged from 26% to 74%, with an average of 52%. Three students scored over 70%, three scored below 32%, and the rest were evenly spread across the range.
The classes were interactive and fun. The student who scored 26% had gone out of her way early in the course to say how interested she was and how hard she was working.
Self-administered Forecasting Exams
The self-administered exams serve two purposes. First, you can use them to guide your own learning. Second, if you are teaching a course that addresses any of these areas, you can use the exams to help students learn the relevant material and to grade them on how much they learned. These two uses are discussed here:
Self-directed learning program
The self-administered tests allow you to conduct your own learning program. The preparation should put you in a good position to learn about important evidence-based findings related to forecasting. Many of these findings are not intuitively obvious.
One way to prepare for the self-administered exams is to first study the recommended preparation materials for a given topic. Then find a learning partner, and each of you complete the test.
Your partner then grades your exam and provides feedback on what percentage of the material you have mastered. This lets you see whether you understand the material well enough to explain it to another person.
An alternative approach is to take the exam before reading the preparation materials, grade your exam, and then read the materials. This approach is frustrating, but it motivates people to relieve the frustration by studying the relevant parts of the readings.
Still another approach is to read the questions and then try to memorize the answers. This is the low-frustration approach. On the negative side, this type of learning will not stick with you for very long.
Exams to be used in courses; or, "Steal this exam!"
Instructors can assign the self-administered exams to students as learning tasks. Interestingly, they can then use the exact same questions on an end-of-course exam. How can this be? Isn't it akin to stealing the exam?
In 2010, Scott Armstrong prepared a battery of 130 open-ended questions for a course on forecasting. The material related to the questions was discussed in the lectures and in related readings, and students were urged to apply the findings in their projects. The students received the questions well in advance of the final exam and were advised to work with a learning partner. The exam consisted only of questions from this battery. Now get this: the answers were also provided.
The questions all relate to evidence-based findings and go beyond everyday knowledge, so someone who had not studied the material would score around zero. So what do you think were the average grade and the range of scores in this class of 11 students?
The exam results are reported at the top of this page.
Questions:
_____________________________________________________________
Judgmental Bootstrapping
Extrapolation Methods
Intentions and Expectations
Combining
Evaluating Forecasting Methods
April 3, 2010
 You would like to select forecasting software. How might you assess commercial software programs for forecasting, other than reading reviews and asking for references? List in order of importance.
a) request full disclosure of the methods
b) examine to what extent the packages adhere to the forecasting principles
c) request results and full disclosure about previous testing of the methods
d) test for reliability by using the same methods from different packages to forecast a given set of data
 What error measure would you use for:
a) selecting the most accurate method?
Median Relative Absolute Error (MdRAE)
b) identifying the series that involve the most important errors?
Mean Absolute Error (MAE), considered along with the cost of errors, and perhaps the Mean Error, including signs (to assess bias)
c) estimating prediction intervals?
For series with constant elasticities (common for economic series), assume the errors are symmetric in logs and estimate prediction intervals using a log-log model. Then take antilogs to express the prediction intervals in real units.
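The log-then-antilog step can be sketched as follows. This is only an illustration; the forecast value and the log-scale standard error are hypothetical, and the 1.96 multiplier assumes roughly normal log-scale errors.

```python
import math

# Illustrative: a point forecast and a symmetric error bound on the log scale.
log_forecast = math.log(120.0)   # hypothetical point forecast of 120 units
log_se = 0.10                    # hypothetical std. error of log-scale forecast errors
z = 1.96                         # ~95% interval, assuming symmetric (normal) log errors

lower = math.exp(log_forecast - z * log_se)
upper = math.exp(log_forecast + z * log_se)
# After taking antilogs, the interval is asymmetric in real units, as expected
# for series that grow in percentage terms.
print(round(lower, 1), round(upper, 1))
```

Note that the antilogged interval is symmetric geometrically (the point forecast is the geometric mean of the bounds) but asymmetric in real units.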
 When should you use R-square (or r) as a measure of predictive power?
• Never for time series.
• For cross-sectional data, r may be appropriate as a rough measure of predictability, especially for a holdout sample.
 When should you use statistical significance? And why?
Never. There are many better ways to estimate uncertainty, and no experimental evidence has been provided to show that statistical significance leads to better decisions.
 When are ex post (conditional) tests useful?
When testing whether the effects of certain policies were accurately predicted.
 When should you use Root Mean Square Error (RMSE) in forecasting?
Seldom, if ever: experimental studies show that it is unreliable, and it is very difficult to see how it relates to decision-making.
 A forecasting software firm claims that their methods give short-term sales forecasts that are accurate within 3% of the true value. Would these be accurate forecasts?
If a firm made such a claim, I would eliminate it immediately. The key question is how well other methods do in the same situation. In addition, one must specify what was being forecast and what the time horizons were. You would need assurance that the forecasts were ex ante, that they were replicated, that they were conducted by an independent third party, and that any potential sources of bias were revealed. In some cases one might turn to benchmark errors, but few of these have been published (see the Practitioners' Page of forprin.com), and it is difficult to match your problem to the benchmarks.
 You have developed a model to forecast the batting averages of a set of baseball players, estimated on a sample of 90 players. The client tells you that he has heard that measures of fit are not indicative of true accuracy. He asks you to provide out-of-sample forecasts. How would you do that? How many out-of-sample forecasts would you use? Do you know the name of this procedure?
You can get 90 out-of-sample forecasts by excluding one observation, developing the model with the remaining 89 observations, and then predicting the holdout observation. Then replace that observation, remove another as the holdout, and continue until each observation has served as the holdout. This is known as the jackknife procedure.
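The procedure can be sketched in a few lines. The "model" below is deliberately trivial (predicting each holdout from the mean of the other 89 players), and the data are made up; any fitting procedure could be substituted at the marked line.

```python
# Sketch of the jackknife (leave-one-out) procedure on hypothetical data.
players = [0.250 + 0.001 * i for i in range(90)]  # hypothetical batting averages

errors = []
for i, actual in enumerate(players):
    training = players[:i] + players[i + 1:]      # hold out one observation
    forecast = sum(training) / len(training)      # <- fit your model on the other 89 here
    errors.append(abs(forecast - actual))         # ex ante error for the holdout

mae = sum(errors) / len(errors)
print(len(errors), round(mae, 4))                 # 90 out-of-sample forecasts
```

Each observation serves once as the holdout, so a sample of 90 yields exactly 90 out-of-sample forecasts.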
For further study, see "Evaluating Forecasting Methods" in J. S. Armstrong (2001), Principles of Forecasting.
 When should you exclude historical data from a time series extrapolation?
Only when you have strong evidence of substantial changes from the current situation. For example, the way in which the data were collected may have been altered substantially, or there may have been substantial changes in definitions. You can substitute some type of average for the excluded observations.
 When is it inappropriate to use seasonal factors?
• If you cannot develop a good causal rationale for the seasonal fluctuations (e.g., as with stock market data).
• If you lack sufficient data (especially important when the causal forces are weak); you would typically need many years of data to get useful seasonal factors.
 How would you estimate seasonal factors when you have a few years of volatile data and where you have some expectation that the behavior is seasonal (such as grass seed or snow shovels)?
I would use damped seasonal factors. (See the Miller-Williams freeware at forecastingprinciples.com.)
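The basic idea of damping seasonal factors can be illustrated by shrinking raw multiplicative indices toward 1.0. This is only a generic sketch with made-up indices and an arbitrary damping weight; the Miller-Williams procedure chooses the amount of damping from the data.

```python
# Generic illustration: shrink raw seasonal indices toward 1.0 (no seasonality).
raw_factors = [1.40, 0.70, 0.95, 0.95]  # hypothetical quarterly indices from volatile data
damping = 0.5                            # 0 = ignore seasonality, 1 = use raw factors

damped = [1.0 + damping * (f - 1.0) for f in raw_factors]
print([round(f, 3) for f in damped])     # indices pulled toward 1.0
```

The more volatile (less reliable) the data, the heavier the damping, so the extreme indices are pulled furthest toward 1.0.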
 Extrapolation errors might occur because you do not have a good estimate of the current level, as with sales for an item. How would you reduce errors due to a poor estimate of the level?
Use alternative ways to estimate the level (e.g., regression, exponential smoothing, naïve, judgmental), and then combine them using equal weights unless you have strong evidence favoring one method.
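Equal-weight combining is just an average of the alternative estimates. The individual figures below are hypothetical stand-ins for the outputs of the methods mentioned above.

```python
# Combining alternative estimates of the current level with equal weights.
estimates = {
    "regression": 102.0,             # hypothetical estimates from each method
    "exponential_smoothing": 98.0,
    "naive": 100.0,
    "judgmental": 104.0,
}
combined_level = sum(estimates.values()) / len(estimates)
print(combined_level)  # 101.0
```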
 What procedures should you use to extrapolate a trend when there is uncertainty about the trend estimate?
• Use a damped trend (or combine the trend forecast with a naïve, no-change forecast).
• Find other time series subject to similar causal forces and obtain their trends expressed in percentage terms. Combine these with the percentage trend in the series of interest.
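Trend damping can be sketched as follows: the estimated per-period trend is shrunk more and more as the horizon grows, so the forecast path flattens toward a no-change forecast. The level, trend, and damping parameter below are all hypothetical.

```python
# Sketch of a damped-trend forecast. phi is the damping parameter:
# phi = 1 gives an undamped linear trend; smaller phi flattens the path
# toward a no-change forecast.
level, trend, phi = 100.0, 5.0, 0.8

forecasts = []
damped_sum = 0.0
for h in range(1, 6):
    damped_sum += phi ** h                 # phi + phi^2 + ... + phi^h
    forecasts.append(level + damped_sum * trend)
print([round(f, 2) for f in forecasts])    # [104.0, 107.2, 109.76, 111.81, 113.45]
```

With phi = 0.8 the forecasts never exceed level + trend * phi / (1 - phi) = 120, whereas an undamped trend would grow without bound.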
 When are nonlinear methods useful for extrapolation?
Use them only when the expected behavior is known to follow a nonlinear function. For example, logged data are typically used to represent economic data that grow in percentage terms.
 When is it appropriate to use cycles for annual data?
When well-defined events take place at known times and have a strong impact on the series (e.g., the summer Olympics occur once every four years).
 How should you estimate uncertainty when using extrapolation models?
Use empirically estimated error measures. Simulate the actual forecasting situation: use the data to make ex ante forecasts, and then compare the forecasts with the actual values. Do this for each forecast horizon by successive updating. Thus, you could make one- to five-year-ahead forecasts starting in year t, calculate forecasts and errors for t+1 through t+5, then update the database by including t+1, and make forecasts again. You can use these empirically estimated uncertainty levels to address such questions as what the error is for four-year-ahead forecasts.
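The successive-updating loop can be sketched as below. The series is hypothetical, and a naïve (no-change) forecaster stands in for whatever extrapolation method you are evaluating; errors are pooled by horizon to give an empirical error measure for each.

```python
# Sketch of successive updating ("rolling-origin" evaluation): make ex ante
# forecasts from each origin, compare them with the actuals, and pool the
# errors by forecast horizon.
series = [100, 103, 101, 106, 108, 111, 109, 114, 116, 119]  # hypothetical annual data
max_h = 3
errors_by_horizon = {h: [] for h in range(1, max_h + 1)}

for origin in range(4, len(series) - max_h + 1):  # keep some history before forecasting
    forecast = series[origin - 1]                 # no-change forecast made in year t
    for h in range(1, max_h + 1):
        actual = series[origin - 1 + h]
        errors_by_horizon[h].append(abs(forecast - actual))

for h, errs in errors_by_horizon.items():
    print(h, round(sum(errs) / len(errs), 2))     # empirical MAE for each horizon
```

As expected, the empirical errors grow with the forecast horizon, and these per-horizon figures are what you would use to set prediction intervals.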
 What is a contrary series and how should it be extrapolated?
A contrary series is one in which the historical trend is opposite in direction to the expectations of domain experts. The causal forces should be identified prior to examining the data. Do not forecast trends from the data; a no-change forecast is often sufficient.
 Under what conditions are extrapolation methods useful?
• Many forecasts are needed, so cost is a factor.
• No substantial changes are expected in the trend.
• The historical trend is long.
• The historical data are reliable and valid.
 A client decided to use an exponential smoothing program that he found through a Google search. He asked you to explain alpha and beta.
Alpha is the weight placed on the most recent observation in a time series. If alpha were 0.4, this would put 40% weight on the most recent observation and the remaining 60% on the prior average. Beta plays the same role for estimates of the trend. Thus, for a volatile series, you would use a lower alpha and a lower beta.
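The roles of alpha and beta can be shown in a minimal sketch of exponential smoothing with a trend (Holt's method). The data and parameter values are hypothetical, and real packages add refinements (better initialization, damping, optimization of the parameters).

```python
# Minimal sketch of exponential smoothing with a trend (Holt's method).
def holt(series, alpha, beta):
    level, trend = series[0], series[1] - series[0]   # crude initialization
    for y in series[2:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)    # alpha weights the new observation
        trend = beta * (level - prev_level) + (1 - beta) * trend  # beta weights the new trend
    return level + trend                              # one-step-ahead forecast

data = [10, 12, 13, 15, 14, 17, 18]                   # hypothetical series
print(round(holt(data, alpha=0.4, beta=0.2), 2))
```

With alpha = 0.4, each update puts 40% weight on the latest observation and 60% on the prior smoothed value; lowering alpha and beta makes the forecast react less to the noise in a volatile series.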