Papers Related to RBF

  • Monica Adya, Fred Collopy, J. Scott Armstrong, and Miles Kennedy (2001), "Automatic Identification of Time Series Features for Rule-Based Forecasting", International Journal of Forecasting, 17, 143-157. Full Text - Automatic procedures, which are less expensive and more reliable than judgmental procedures, produced rule-based forecasts with little loss in forecast accuracy.
  • Adya, M., 2000 "Corrections to Rule-based Forecasting: Findings from a Replication", International Journal of Forecasting, 16, 1, 125-128. Full Text
  • Adya, M., J. Armstrong, F. Collopy, & M. Kennedy (2000), "An Application of Rule-based Forecasting for a Situation Lacking Domain Knowledge," International Journal of Forecasting, 16, 477-484. Full Text – This paper was part of the M3-competition. A simplified version of RBF performed well without using domain knowledge.
  • Balhadjali, M., M. Luska, & D. Matzner, (2004), “A Test of a Minimalist Rule-Based Forecasting System,” Working paper series - retrieved March 13, 2008, from Full Text
  • Fred Collopy and J. Scott Armstrong (1992), "Rule-Based Forecasting: Development and Validation of an Expert Systems Approach to Combining Time Series Extrapolations," Management Science, 38 (10), 1394-1414. - Full Text – Uses prior knowledge about forecasting methods and domain knowledge to formulate rules for time series forecasting.
  • Fred Collopy and J. Scott Armstrong (1989), "Toward Computer-Aided Forecasting Systems: Gathering, Coding, and Validating the Knowledge," in George R. Widmeyer (ed.), DSS-89 Transactions: Ninth International Conference on Decision Support Systems, Institute of Management Science, pp. 103-119 – Describes how to develop rules for forecasting.
  • Gardner, Jr., E.S. (1999), "Rule-Based Forecasting Vs. Damped-Trend Exponential Smoothing," Management Science, 45, (8, August), 1169-1176.
  • Makridakis, S., & Hibon, M. (2000), "The M3-Competition: Results, Conclusions, and Implications," International Journal of Forecasting, 16, 451-476. - Full Text

Papers Related to Causal Forces

  • J. S. Armstrong, Fred Collopy, and J. Thomas Yokum, "Decomposition by Causal Forces: A Procedure for Forecasting Complex Time Series," forthcoming in International Journal of Forecasting. – Full Text
  • J. Scott Armstrong and Fred Collopy (2001), "Identification of Asymmetric Prediction Intervals through Causal Forces," Journal of Forecasting, 20, 273-283. Full Text – When forecast errors are large, as is common in annual forecasting, errors are asymmetrical in percentage terms. Log transformations can correct for this asymmetry, although asymmetry in the logs occurs for "contrary" time series.
  • J. Scott Armstrong and Fred Collopy (1993), "Causal Forces: Structuring Knowledge for Time-series Extrapolation," Journal of Forecasting, 12, 103-115. Full Text – Domain knowledge, expressed as expectations about trends in time series, led to improved accuracy in time-series forecasts.

Studies that use Production Rules for Forecasting

  • Li, X., Ang, C.L., & Gray, R. (1999), "An Intelligent Business Forecaster for Strategic Business Planning," Journal of Forecasting, 18(3), 181-204.
  • Miller, P.L., Frawley, S.J., Sayward, F.G., Yasnoff, W.A., Duncan, L., and Fleming, D.W. (1997), "Combining Tabular, Rule-based, and Procedural Knowledge in Computer-Based Guidelines for Childhood Immunization," Computers and Biomedical Research, 30, 211-231. Full Text
  • Nikolopoulos, K. & Assimakopoulos, V. (2003), "Theta Intelligent Forecasting Information System," Industrial Management & Data Systems, 103(9), 711-726.
  • Rahman, S. (1990), "Formulation and Analysis of a Rule-based Short-term Load Forecasting Algorithm," Proceedings of the IEEE, 78(5), 805-816.
  • Rahman, S. & Baba, M. (1989), "Software Design and Evaluation of a Microcomputer-based Automated Forecasting System," IEEE Transactions on Power Systems, 4(2), 782-788.
  • Tavanidou, E., Nikolopoulos, K., Metaxiotis, K., and Assimakopoulos, V. (2003), "An Innovative e-Forecasting Web Application," International Journal of Software Engineering and Knowledge Engineering, 13(2), 215-236.

Studies that use Data and Time Series Features to Select Forecasting Methods

  • Prudencio, R. & Ludermir, T. (2004), "Using Machine Learning Techniques to Combine Forecasting Methods," in AI2004: Advances in Artificial Intelligence, 3339/2004, 1122-1127.
  • Prudencio, R., Ludermir, T., & de Carvalho, F. (2004), "A Model Symbolic Classifier for Selecting Time Series Models," Pattern Recognition Letters, 25(8), 911-921.
  • Venkatachalam, A.R. & Sohl, J.E. (1999), "An Intelligent Model Selection and Forecasting System," Journal of Forecasting, 18(3), 167-180.

Other Relevant Research on RBF

  • Armstrong, J. (2006), "Findings from Evidence-based Forecasting: Methods for Reducing Forecast Error," International Journal of Forecasting, 22, 583-598. - Full Text
  • Bunn, D., L. Menezes, & J. Taylor (2000), "Review of Guidelines for the Use of Combined Forecasts," European Journal of Operational Research, 120, 190-204. - Full Text
  • Webb, G. (2003), "A Rule Based Forecast of Computer Hard Drive Costs," Issues in Information Systems, 4, 337-343. Full Text
  • Cohen, E., & Z. Schwartz (2004), "Hotel Revenue-Management Forecasting: Evidence of Expert-Judgment Bias," Cornell Hotel & Restaurant Administration Quarterly, February. - Full Text
  • Shah, C. (1997), "Model Selection in Univariate Time Series Forecasting Using Discriminant Analysis," International Journal of Forecasting, 13(4), 489-500.
  • Morwitz, V.G. and D.C. Schmittlein, (1998) "Testing New Direct Marketing Offerings: The Interplay of Management Judgment and Statistical Models", Management Science, 44 (5, May), 610-628.
  • Tashman, L. (2000), "Out-of-sample Tests of Forecasting Accuracy: An Analysis and Review," International Journal of Forecasting, 16(4), 437-450.
  • Vokurka, R.J., B.E. Flores, and S.L. Pearce (1996), "Automatic Feature Identification and Graphical Support in Rule-based Forecasting: A Comparison," International Journal of Forecasting, 12, 495-512.
  • Armstrong, J.S. (1999), "Forecasting for Environmental Decision Making," in Dale, V.H. & English, M.E. (eds.), Tools to Aid Environmental Decision Making, New York: Springer-Verlag, pp 192-225. - Full Text
  • Armstrong, J.S. (1989), "Combining Forecasts: The End of the Beginning or the Beginning of the End," International Journal of Forecasting, 5, 585-599. - Full Text
  • Armstrong, J.S. & Brodie, R.J. (1999), "Forecasting for marketing," in Hooley, G.J. & Hussey, M.K. (eds), Quantitative Methods in Marketing, 2nd Ed., London: International Thompson Business Press, pp. 92-119. - Full Text
  • Sanders, N. & Ritzman, L.P. (2004), "Integrating Judgmental and Quantitative Forecasts: Methodologies for Pooling Marketing and Operations Information," International Journal of Operations & Production Management, 24(5), 514-529. - Full Text
  • Tashman, L.J. & Kruk, J.M. (1996), "The use of protocols to select exponential smoothing procedures: A reconsideration of forecasting competitions," International Journal of Forecasting , 12(2), 235-253.

Global Health and RBF

  • Green, K. C. & J. S. Armstrong (2007), “Global Warming: Forecasts by Scientists Versus Scientific Forecasts,” Energy and Environment, 18, 997-1021. Full Text
  • Sekhri, N. (2006), “Forecasting for Global Health: New Money, New Products & New Markets,” Retrieved March 13, 2008, from Full Text

Demography and RBF

  • Bijak, J., (2006), “Forecasting International Migration: Selected Theories, Models and Methods,” Retrieved March 13, 2008, from Full Text

If any of your published research is relevant to this section, please e-mail the paper to us.

Subsequent to the development of the automated feature-detection routines, coding RBF features now takes under a minute. The only feature that is still coded manually is causal forces. This has given us a great opportunity to participate in competitions such as the M3-Competition. This competition, which required participants to forecast both short-period (e.g., monthly, quarterly) and long-period (annual) time series, presented two challenges. Probably the most critical was forecasting short-period data: RBF had been developed, validated, and tested on annual data, so our rules would require recalibration to work with short-period data, and we would also need to build a seasonality component. A second challenge was coding 3,003 series on causal forces. Three key strategies were used:

  • We assumed that the causal forces for all 3,003 time series were unknown.
  • We included a simple seasonality component in RBF.
  • We calibrated some key rules in order to forecast the short period data.

Despite these constraints, particularly the causal-forces assumption, RBF was one of the leading forecasting methods for the annual series and performed very well on the longer horizons for the short-period series. Results of this competition are available in a 2002 special issue of the International Journal of Forecasting on the M3-Competition.

RBF consists of 99 rules that were originally published in Collopy and Armstrong (1992). These rules combine forecasts using weights that vary according to features of time series. Four simple methods are combined in RBF:

  • Random walk, which emphasizes the short-range perspective and assumes there is no trend in the data.
  • Linear regression, which represents the long range and provides a basic trend estimate.
  • Holt's exponential smoothing, which represents the short range and provides an estimate of the recent trend.
  • Brown's exponential smoothing, which provides short-range estimates.
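To make the four components concrete, here is a minimal sketch in Python. The simplified implementations and the smoothing parameters are illustrative assumptions, not the procedures used in the published system:

```python
# Sketch of the four component extrapolations combined by RBF.
# Simplified illustrations only; parameter values are assumptions.

def random_walk(series, horizon):
    """No-trend forecast: repeat the last observed value."""
    return [series[-1]] * horizon

def linear_regression(series, horizon):
    """Long-range basic trend: ordinary least squares on a time index."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series)) / \
            sum((t - t_mean) ** 2 for t in range(n))
    intercept = y_mean - slope * t_mean
    return [intercept + slope * (n - 1 + h) for h in range(1, horizon + 1)]

def holt(series, horizon, alpha=0.3, beta=0.1):
    """Holt's linear exponential smoothing: level plus recent-trend estimate."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

def brown(series, horizon, alpha=0.3):
    """Brown's double exponential smoothing: short-range trend estimate."""
    s1 = s2 = series[0]
    for y in series[1:]:
        s1 = alpha * y + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return [level + h * trend for h in range(1, horizon + 1)]
```

On a perfectly linear series, the regression and Holt's methods extend the trend exactly, while the random walk stays flat; real series fall between these extremes, which is why RBF weights the four forecasts rather than picking one.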

Since the factors affecting the short and long term often differ, RBF rules develop weights for the short-term and long-term forecasts separately and blend them at the end of the run. The rules also include a damping component. Rule 41 from the short-range trend model is presented below:

RULE 41: IF the direction of the basic trend and the direction of the recent trend are not the same, OR the trends agree with one another but differ from the causal forces, THEN add 15% to the weight on the random walk and subtract it from the weights on Holt's and Brown's.

{Explanation: Dissonance calls for conservatism in the trend estimate}

RBF rules are coded as simple production rules in IF... THEN... format, with the rule conditions being series features and the actions being weight assignments to the four methods listed above.
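In code, a rule of this form could be sketched as follows. The function names, the starting weights, and the equal split of the 15% between Holt's and Brown's are assumptions for illustration (the rule text does not specify the split):

```python
# Illustrative sketch of an RBF-style production rule; not the published code.
# Weights apply to the four component methods of the short-range model.

def rule_41(weights, basic_dir, recent_dir, causal_dir):
    """IF the basic and recent trend directions disagree, OR they agree but
    differ from the causal-force direction, THEN shift 15% of weight to the
    random walk, taken (here, equally -- an assumption) from Holt's and Brown's."""
    trends_conflict = basic_dir != recent_dir
    conflict_with_forces = (basic_dir == recent_dir) and (basic_dir != causal_dir)
    if trends_conflict or conflict_with_forces:
        weights["random_walk"] += 0.15
        weights["holt"] -= 0.075
        weights["brown"] -= 0.075
    return weights

def combine(forecasts, weights):
    """Weighted combination of the component forecasts at each horizon."""
    horizons = len(next(iter(forecasts.values())))
    total = sum(weights.values())
    return [sum(weights[m] * forecasts[m][h] for m in forecasts) / total
            for h in range(horizons)]
```

Starting from equal weights of 0.25, a series whose basic trend points up while its recent trend points down would end with 0.40 on the random walk and 0.175 each on Holt's and Brown's, reflecting the conservatism the rule's explanation calls for.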

Corrections to Rule-based Forecasting

Through continued efforts, a revised and corrected set of rules is now available. The corrections to these rules and their impact on performance of RBF are available in the following paper:

Adya, M. (2000). Corrections to rule-based forecasting: findings from a replication. International Journal of Forecasting, 16, 125-128.

The original RBF, as presented in Collopy and Armstrong (1992), relied on 18 features of time series. Since then, the scope of RBF has expanded to encompass 28 features. The table below presents these features and their descriptions.

| Feature Categories | RBF Features | Description |
| --- | --- | --- |
| Causal forces | | The net directional effect of the principal factors acting on the series. Growth exerts an upward force. Decay exerts a downward force. Supporting forces push in the direction of the historical trend. Opposing forces work against the trend. Regressing forces work towards a mean. When uncertain, forces should be coded as unknown. |
| Functional form | Multiplicative, Additive | Expected pattern of the trend of the series. |
| Cycles expected | Cycles expected | Regular movement of the series about the basic trend. |
| Forecast horizon | Forecast horizon | Horizon for which forecasts are being made. |
| Subject to events | Subject to events | |
| Start-up series | Start-up series | Series provides data for a start-up. |
| Related to other series | Related to other series | |
| Types of data | Positive values | Only positive values in the time series. |
| | Bounded values | Bounded values, such as percentages and asymptotes. |
| | Missing observations | No missing observations in the series. |
| Level | | Level is not biased by any other events. |
| Trend | Direction of basic trend | Direction of the trend after fitting a linear regression to past data. |
| | Direction of recent trend | Direction of the trend that results from fitting Holt's to past data. |
| | Significant basic trend (t > 2) | The t-statistic for the linear regression is greater than 2. |
| Length of series | Number of observations | Number of observations in the series. |
| | Time interval (e.g., annual) | Time interval between the observations. |
| | Seasonality | Whether seasonality is present in the series. |
| Uncertainty | Coeff. of variation about trend > 0.2 | Standard deviation divided by the mean for the trend-adjusted data. |
| | Basic and recent trends differ | Basic and recent trends are not in the same direction. |
| Instability | Irrelevant early data | Early portion of the series results from a substantially different underlying process. |
| | Suspicious pattern | Series shows a substantial change in its recent pattern. |
| | Unstable recent trend | Series shows marked changes in the recent trend pattern. |
| | Outliers present | Isolated observation near a 2-standard-deviation band around the linear regression. |
| | Recent run not long | The last six period-to-period movements are not in the same direction. |
| | Near a previous extreme | A last observation that is more than 90% of the highest, or less than 110% of the lowest, observation. |
| | Changing basic trend | Underlying trend that is changing over the long run. |
| | Level discontinuities | Changes in the level of the series (steps). |
| | Last observation unusual | Last observation deviates substantially from previous data. |

In the original paper by Collopy and Armstrong (1992), eight features were identified by analytical procedures coded in the expert system, while the rest relied on the forecaster's judgment and knowledge of the domain. The judgmental identification of these features, however, was time consuming and inconsistent, often taking over five minutes per series. In recent research, we have automated the feature-identification process using simple statistical approaches. Consequently, the time committed to feature identification has dropped to under one minute per series without a consequential decline in forecast accuracy.
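A simple statistical detector of this kind might look like the following sketch. The thresholds (t > 2, six period-to-period movements) follow the feature definitions in the table above, while the function name and the output format are illustrative assumptions:

```python
# Sketch of automated identification of three RBF features; assumed,
# simplified heuristics rather than the published detection routines.

def basic_trend_features(series):
    """Fits OLS on a time index to detect the direction and significance
    of the basic trend, and checks the direction of the last six movements."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    sxx = sum((t - t_mean) ** 2 for t in range(n))
    slope = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series)) / sxx
    intercept = y_mean - slope * t_mean
    residuals = [y - (intercept + slope * t) for t, y in enumerate(series)]
    se = (sum(r * r for r in residuals) / (n - 2) / sxx) ** 0.5
    t_stat = slope / se if se > 0 else float("inf")
    # Last six period-to-period movements (requires at least 7 observations).
    diffs = [series[i + 1] - series[i] for i in range(n - 7, n - 1)]
    return {
        "direction_of_basic_trend": "up" if slope > 0 else "down",
        "significant_basic_trend": abs(t_stat) > 2,
        "recent_run_not_long": not (all(d > 0 for d in diffs)
                                    or all(d < 0 for d in diffs)),
    }
```

Detectors like these replace a judgment call with a reproducible computation, which is the source of the consistency gains noted above.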

Through extensive sensitivity analyses, we have found that causal forces are one of the key features of RBF. This feature represents the cumulative directional effect of the factors that influence the trends in a time series. As illustrated in the table above, causal forces can be classified in four ways based on their relation to historical trend. Several papers illustrate the use of causal forces in combining time series forecasts.