In a cheekily titled article, "Forecast: Next Year Will Arrive in 2010-ish," Carl Bialik asks experts, including Foresight editor Len Tashman and ForPrin.com directors Scott Armstrong and Kesten Green, why forecasts for 2009 have been so wrong. His article is subtitled "Recession Clouded Crystal Balls in Many Industries, Prompting Predictors to Regroup, Change Strategies and Exercise Caution," signaling his disappointment. In their essay "How to Forecast for Recessions and Recoveries" (on ManyWorlds; original), Armstrong and Green suggest using a simple rule from Collopy and Armstrong's Rule-based Forecasting to avoid embarrassment: when a time series is identified as "contrary," do not extrapolate a trend.
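The contrary-series rule can be sketched as a simple guard in code. The following is a minimal illustration only, not the Rule-based Forecasting implementation: when the direction of the recent trend conflicts with the direction that domain knowledge (the "causal forces") leads one to expect, fall back to a naive no-change forecast rather than extrapolating the trend.

```python
def forecast_next(series, expected_direction):
    """Illustrative sketch of the contrary-series rule.

    series: list of observations, oldest first.
    expected_direction: +1 or -1, the direction domain knowledge
    (the "causal forces") says the series should move.
    This is a toy example, not the Rule-based Forecasting code.
    """
    trend = series[-1] - series[0]
    observed_direction = 1 if trend > 0 else -1 if trend < 0 else 0
    if observed_direction != 0 and observed_direction != expected_direction:
        # Contrary series: do not extrapolate the trend.
        return series[-1]  # naive (no-change) forecast
    # Otherwise extrapolate the average per-period change.
    step = trend / (len(series) - 1)
    return series[-1] + step
```

For example, a rising series whose causal forces point downward would be forecast flat, while one whose trend and causal forces agree would be extrapolated.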
Can you help us improve the site by making a donation toward the development of the site's free Delphi forecasting software?
You may have noticed the new "Donate" tab in the banner at the top of the ForPrin.com site. The Forecasting Principles site is provided as a public service for forecasters, researchers, teachers, students, and the consumers of forecasts. To provide this service, including the Special Interest Groups, we depend on sponsorship and advertising. Sponsorship was initially provided by The Wharton School. In more recent years, the International Institute of Forecasters has been and continues to be a generous sponsor and the site's key supporter.
The site's current revenue is not, however, sufficient to cover the cost of improving, extending, and developing forecasting support software tools such as the Delphi Software. For example, users of the Delphi software have asked for the ability to modify questions, to change reporting, and to obtain more of the data the tool collects in different formats.
Please check out the Donate page. Even a $10 donation would help us to keep improving the site for you.
The 2009 INFORMS Data Mining Contest is the second installment of a data mining contest that started last year in conjunction with the INFORMS conference. The contest again involves predictive problems for health care quality. The two tasks for this year's contest are: 1) modeling of a guideline for transferring patients with a severe medical condition from a community hospital setting to a tertiary hospital provider, and 2) assessment of the severity/risk of death of a patient's condition.
Predictions must be submitted by September 25. You may want to consider giving a talk at the Contest session at the INFORMS conference. Workshop acceptance will be based on successful submissions to the Contest. Contestants are encouraged to prepare a paper describing methods and insights to be considered for publication in a Special Issue in a data mining journal (to be determined). See the Contest site for more information.
A new working paper by Scott Armstrong and Andreas Graefe describes findings on the use of an index-method model based on 49 biographical variables to predict the outcomes of 28 U.S. presidential elections. It correctly predicted 25. Out-of-sample forecasts were more accurate than forecasts from 12 benchmark models. The authors welcome peer review.
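As a rough illustration of the index method, each candidate receives one point per favorable biographical variable, and the candidate with the higher index score is predicted to win. The variable names below are made-up placeholders; the paper's 49 variables and their coding differ.

```python
def index_score(candidate):
    """Count favorable cues; each variable contributes 0 or 1.
    The variables passed in are hypothetical placeholders, not
    the 49 biographical variables used by Armstrong and Graefe."""
    return sum(1 for value in candidate.values() if value)

def predict_winner(candidate_a, candidate_b):
    """Predict the candidate with the higher index score."""
    score_a, score_b = index_score(candidate_a), index_score(candidate_b)
    if score_a == score_b:
        return None  # the index alone makes no prediction on a tie
    return "A" if score_a > score_b else "B"
```

The appeal of the method is its simplicity: unit weights on many variables avoid overfitting, which is consistent with the out-of-sample accuracy the authors report.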
Wright and MacRae used a large-scale meta-analysis to identify bias and variability in the forecasts from purchase-intention scales. They found that converting purchase intentions to linear probability scales or proportions resulted in unbiased forecasts. The same was true of 11-point probability scales, which also had lower dispersion of forecast errors. This result gives strong support to Principle 11.4, as scale-point adjustments were not required to obtain unbiased forecasts. It also supports Principle 8.4, as the use of the longer 11-point scale reduced forecast error, and Principle 8.7, as there was much greater variability in forecast errors for studies with small samples. The meta-analysis was restricted to existing products and services, and did not investigate accuracy for new products.
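For illustration, a linear conversion of an 11-point purchase-probability (Juster-type) scale maps each response from 0-10 onto a probability from 0.0-1.0 and averages the results. This sketch assumes that simple linear mapping, which is one reading of the finding that no scale-point adjustments were needed; it is not Wright and MacRae's exact procedure.

```python
def forecast_purchase_rate(responses, scale_max=10):
    """Linear conversion of intention-scale responses to a
    predicted purchase proportion.

    responses: integers on a 0..scale_max scale, e.g. an
    11-point Juster-type probability scale (scale_max=10).
    A sketch of an unadjusted linear conversion, not the
    procedure used in the meta-analysis.
    """
    probs = [r / scale_max for r in responses]
    return sum(probs) / len(probs)
```

A sample of respondents answering 0, 5, and 10 on the 11-point scale would thus yield a predicted purchase rate of 0.5, with no bias adjustment applied.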