
The M Competitions started almost 40 years ago and have attained global coverage and a respected reputation both in the academic world and among practitioners. It all started in 1979, when I published a paper with Michel Hibon in the Journal of the Royal Statistical Society. It was followed by the first M Competition, whose results were published in 1982 and covered 111 series; the M2, published in 1993, which aimed at incorporating judgmental inputs into the statistical forecasts; and the M3, published in 2000, which based its findings and recommendations on 3,003 series. Each competition added to our knowledge about forecasting and provided concrete evidence of how firms can benefit from a more scientific use of forecasting. The latest M Competition, which ended five months ago, expanded this coverage and scope to 100,000 series, with 49 individual participants and teams from around the world predicting the series and providing estimates of uncertainty.

The findings of the M4 Competition provide a wealth of theoretical and practical information for improving the accuracy of forecasts and for assessing uncertainty more precisely, while also showing that new forecasting methodologies can be used successfully, including a new hybrid approach that significantly improved both accuracy and the estimation of uncertainty. The 100,000 series used in the competition cover six application domains (macro, micro, demographic, industry, financial and other) and six time frequencies (yearly, quarterly, monthly, weekly, daily and hourly), allowing comparisons that suggest how to select the most accurate method for each subcategory of application domain and time frequency. Organizations can benefit by following the conclusions of the competition, which show empirically which methods are most appropriate for their specific forecasting requirements.
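The practical recipe behind that claim can be sketched in a few lines: score each candidate method on a hold-out portion of your own series, separately for each application domain and time frequency, and keep the method with the lowest error. The sketch below uses the symmetric MAPE (sMAPE) as the accuracy measure; the series, forecasts and method names are invented placeholders, not M4 data or code.

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric MAPE in %, one of the main accuracy measures of the M Competitions."""
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    return np.mean(200.0 * np.abs(f - a) / (np.abs(a) + np.abs(f)))

# Hypothetical hold-out evaluation for one subcategory (e.g. monthly micro series):
# the last four observations are held out and compared against forecasts from
# three candidate methods. All numbers below are invented for illustration.
holdout = [120, 118, 125, 131]
candidate_forecasts = {
    "Naive 2": [119, 119, 119, 119],
    "Damped":  [121, 122, 123, 124],
    "Comb":    [120, 121, 124, 127],
}

scores = {name: smape(holdout, fcst) for name, fcst in candidate_forecasts.items()}
best = min(scores, key=scores.get)
print({name: round(s, 2) for name, s in scores.items()})
print(f"Most accurate candidate for this subcategory: {best}")
```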

A paper describing the initial findings of the M4 Competition can be downloaded free of charge from ScienceDirect: https://www.sciencedirect.com/science/article/pii/S0169207018300785

A special issue of the International Journal of Forecasting covering all aspects of the M4 Competition is under preparation and will be ready for the next International Symposium on Forecasting, to be held in Thessaloniki, Greece in June next year. Below is a table from the detailed paper describing the M4 and its findings, currently under review for inclusion in the special issue. It shows the accuracy achieved across the four M Competitions and some interesting statistics demonstrating the consistency of the results over a period that spans nearly four decades.

Table: The accuracy of standard forecasting methods across the four M Competitions, plus the best method in each competition

 

| Methods (MAPE/sMAPE) | M1: 1982 | M2: 1995 | M3: 2000 | M4: 2018 |
|---|---|---|---|---|
| Naïve 2 | 17.8 | 13.3 | 15.5 | 13.6 |
| Single | 16.8 | 11.9 | 14.3 | 13.1 |
| Damped | NA | 12.8 | 13.7 | 12.7 |
| Comb | NA | 11.7 | 13.5 | 12.6 |
| ARIMA | 18.0 | 16.0 | 14.0 | 12.7 |
| Best method | Parzen (ARARMA) | Comb | Theta | Hybrid |
| Accuracy of the best method | 15.4 | 11.0 | 13.0 | 11.4 |
| % improvement of the best method over Naïve 2 | 13.5 | 17.3 | 15.9 | 16.2 |
| % improvement of the best method over Comb | NA | 0.0 | 3.8 | 9.4 |
| Number of series | 111 | 23 | 3,003 | 100,000 |
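As a quick sanity check of the table, the snippet below recomputes the "% improvement of the best method over Naïve 2" row from the rounded accuracies shown above. Because the published percentages were presumably derived from unrounded accuracies, the figures agree only to within rounding (M3, for example, comes out at 16.1% rather than 15.9%).

```python
# Rounded accuracies taken from the table above.
naive2 = {"M1: 1982": 17.8, "M2: 1995": 13.3, "M3: 2000": 15.5, "M4: 2018": 13.6}
best   = {"M1: 1982": 15.4, "M2: 1995": 11.0, "M3: 2000": 13.0, "M4: 2018": 11.4}

for competition in naive2:
    improvement = 100.0 * (naive2[competition] - best[competition]) / naive2[competition]
    print(f"{competition}: best method improves on Naive 2 by {improvement:.1f}%")
```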

To disseminate the findings of the M4 Competition to as wide an audience as possible, Spyros Makridakis is organizing a conference to elaborate on the findings, discuss their practical implications, and explain how they can be applied by business and other organizations in their efforts to improve forecasting accuracy and correctly assess future uncertainty.

The M4 Conference program includes distinguished speakers from major software and technology companies (Google, Microsoft, Amazon, Uber and SAS) as well as well-known academics from top universities. It features presentations of the three most accurate methods of the M4 Competition by their developers, including the hybrid approach developed by Slawek Smyl of Uber that achieved the top spot; they will also discuss how their methods can be implemented, as their code is available on GitHub. The conference will cover all critical areas of forecasting, including combining methods and introducing judgmental adjustments, with special emphasis on comparisons between Machine Learning and statistical forecasting methods and on how to assess uncertainty as precisely as possible.

The rich conference program includes keynote addresses by Nassim Nicholas Taleb (the author of The Black Swan and Skin in the Game), who will discuss uncertainty in forecasting, and by Spyros Makridakis, who will present the major findings of the M4 Competition and explain how organizations can benefit from them. There are also two invited speakers: Professor Scott Armstrong of Wharton, who will talk about "Data Models Versus Knowledge Models in Forecasting", and Andrea Pasqua of Uber, who will talk about forecasting at Uber. In addition to the distinguished speakers, there will be panel discussions covering major forecasting issues.

The conference will be held at Tribeca Rooftop, an elegant venue with an excellent view of Manhattan. For information about the conference and registration, visit www.mcompetitions.ac.cy