Method and Author                               Forecast

Statistical models (APSA conference, 9/4/2010)
  Abramowitz                                    47
  Bafumi, Erikson, and Wlezien                  51
  Campbell                                      52
  Cuzán*                                        30
  Jacobson                                      43
  Lewis-Beck and Tien                           22
  Within-method mean                            41

Polls aided by judgment
  Cook (9/2/2010)                               35
  Rothenberg (9/6/2010)**                       40
  Sabato (9/2/2010)                             47
  Within-method mean                            41

Prediction market
  Intrade.com***                                43

Mean across methods                             42

* The forecast with the midterm elections model. The forecast with the
all elections model is 27 seats.

** Rothenberg told Polly that this is a projection, not a forecast.
Also, Polly took the median of the range of seats projected.

*** The contracts are specified in five-seat increments. The forecast
was obtained by taking the median value within the increment with a
probability greater than 50%.
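The procedure described in the Intrade.com footnote can be sketched in a few lines. The contract bins and probabilities below are made up for illustration; only the rule (pick the five-seat increment trading above 50%, then take its median seat value) comes from the footnote.

```python
# Hypothetical five-seat contract bins with market probabilities
# (illustrative numbers, not actual Intrade prices).
bins = {(36, 40): 0.20, (41, 45): 0.55, (46, 50): 0.25}

# Take the increment with a probability greater than 50% and use the
# median seat value within it as the forecast.
(lo, hi), prob = next(b for b in bins.items() if b[1] > 0.5)
forecast = (lo + hi) // 2   # median of 41..45 -> 43
```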




Forecasting elections from the single most important issue: 
Andreas Graefe and Scott Armstrong developed a model for forecasting U.S. presidential elections. It provides fast advice on which issues candidates should focus on in their campaigns, using information about how voters perceive the candidates’ ability to handle the single most important issue facing the country. It predicted the winner of the past ten elections with an accuracy of 97%, based on an examination of the forecasts on each of the last 100 days prior to each election. A working paper version of the paper, which has been accepted for publication in the Journal of Behavioral Decision Making, is available here.

Alfred G. Cuzán and Charles M. Bundrick test the idea that equal weights yield better or no worse predictions than optimal weights in two presidential election forecasting models. See "Predicting Presidential Elections with Equally-Weighted Regressors in Fair's Equation and the Fiscal Model," Political Analysis, 17, 2009, 333-340. For an "advance access" PDF file, click here.

POLLYPRIZE 2008

IIF will make a $1,000 award to the author(s) of the model that best predicts the outcome of the 2008 American presidential election. Read more ...

POLLYPRIZE 2006

Carl E. Klarner and Stan Buchanan won the Pollyprize competition. You can access the Panel of Judges report here, their paper (published in Foresight) here, and their data set here.

Fair, Ray C. (2004), "Predicting Electoral College Victory Probabilities from State Probability Data" (PDF)

Fiscal Model Update

by Alfred G. Cuzán and Charles M. Bundrick, 8/01/04.
Ray Fair's 7-31-04 update forecasts that GROWTH in the third quarter of this year will be 2.7 percent and that the number of "good news" quarters through the 15th quarter of President Bush's term will be 2. Entering these new estimates into the fiscal model (see Cuzán and Bundrick, 2004) yields a forecast of 51.1 percent for President Bush. This would be the closest victory margin of a sitting president since 1888, when President Cleveland edged out Harrison in the popular vote, only to lose in the Electoral College. Given the model's standard error (1.9) and prediction interval (±5), however, one can only conclude from the updated forecast that the election is too close to call. It could go either way.
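The "too close to call" reading follows directly from interval arithmetic on the figures just given; a minimal sketch:

```python
forecast = 51.1      # fiscal-model point forecast for Bush's two-party share (%)
half_width = 5.0     # reported prediction interval of +/-5 points
low, high = forecast - half_width, forecast + half_width

# The interval straddles 50%, so either candidate could plausibly win.
too_close_to_call = low < 50.0 < high
```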

On Ray Fair's 4-29-04 Forecast

by Alfred G. Cuzán

On April 29, Ray Fair posted an update of his 2004 forecast for Bush's share of the two-party vote. This forecast, an outlier in Polly's Table, is even more optimistic about the President's reelection prospects than the one issued in February. Since then, there has been an up-tick in both the expected GROWTH and INFLATION rates through the third quarter of the year. The net effect is to raise the forecast for President Bush's share of the two-party vote, from 58.58 percent to between 58.74 and 60.4, or to an average of 60 percent (after rounding). The reason for the spread is that two quarters (2003:4 and 2004:1) were only one-tenth of a point below what is considered a GOODNEWS quarter. If both are placed in the GOODNEWS category, this would add another 0.84 percent to the point prediction. Fair sums up his latest forecast thus, "The main message that the equation has been making from the beginning is thus not changed, namely that President Bush is predicted to win by a sizable margin." Actually, this is an understatement: Fair is forecasting nothing less than a landslide victory for President Bush, a reelection margin exceeded only by FDR in 1936, LBJ in 1964, and Richard Nixon in 1972.
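The arithmetic behind the quoted spread can be reconstructed if the 0.84-point GOODNEWS increment is read as applying to each of the two borderline quarters, which is consistent with the 58.74-to-60.4 range reported above; this reading is an assumption, not a statement from Fair:

```python
base = 58.74          # forecast with neither borderline quarter as GOODNEWS
per_quarter = 0.84    # added to the point prediction per reclassified quarter

high = base + 2 * per_quarter    # both quarters counted as GOODNEWS
average = (base + high) / 2      # midpoint of the reported spread
```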

Fair's GROWTH and GOODNEWS variables are incorporated into the fiscal model used by Cuzán and Bundrick in making their forecast. Accordingly, the forecast for President Bush's share of the two-party vote obtained with this model has also gone up, from 52.2 to between 52.4 and 53.2 percent, or an average of 53 percent (after rounding). This remains the closest victory margin for a sitting president since Truman beat Dewey in 1948. The principal reason for the disparity between Fair's forecast and that obtained with the fiscal model is that, unlike the former, the latter takes into account the effect of fiscal policy. Unlike most presidents seeking reelection, President Bush has implemented a fiscally expansionary policy. In the fiscal model, this is equivalent to raising the "price" or "fee" which Washington charges the voters for the federal bundle of goods and services, a policy which, on average, costs the incumbents about five percentage points of the two-party vote.

On Ray Fair's 2-05-04 Forecast

by Alfred G. Cuzán

Forecasting aficionados are familiar with Ray Fair's presidential election model. Estimated over all elections held since 1916, the model consists of seven variables: three economic (two measures of per capita GDP growth and one of inflation) and four political (incumbency, terms in office, party, and war). With this model Fair is able to predict the winner of the presidential election about 80 percent of the time, on average coming within 2.4 percentage points of the incumbents' share of the two-party vote. As of February 5, 2004, Fair forecast that in November President Bush would win just under 59 percent, about what Reagan received in 1984. That is, something like a landslide reelection victory.
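The structure just described can be written schematically. The variable labels below mirror the descriptions in the text rather than Fair's exact published notation, and the coefficients are deliberately left unspecified rather than quoting estimates:

```latex
% Schematic form of Fair's vote equation (coefficients unspecified).
% V = incumbent party's share of the two-party presidential vote.
V = \beta_0
  + \beta_1\,\text{GROWTH} + \beta_2\,\text{GOODNEWS} + \beta_3\,\text{INFLATION}
  + \beta_4\,\text{INCUMBENCY} + \beta_5\,\text{DURATION} + \beta_6\,\text{PARTY}
  + \beta_7\,\text{WAR} + \varepsilon
```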

However, as Scott Armstrong notes (in an otherwise positive review of Fair's Predicting Presidential Elections and Other Things), Fair's model lacks a policy variable. Thus, it may be of interest to the forecasting community that a more compact model, one that combines several of Fair's variables with a measure of fiscal policy, performs even better than Fair's. Fiscal policy is measured by a binary variable, FISCAL, which takes the value of 1 when policy is expansionary and -1 when it is cutback. Generally, fiscal policy is expansionary if the ratio of federal outlays to GDP grows at the same or a faster pace in the current term than it did in the previous term, and cutback if the ratio falls or its growth rate decelerates. In 27 of the 33 presidential elections held since 1872, or better than 80 percent of the time, incumbents who pursued a cutback policy scored a victory in the two-party vote for president, and those who implemented an expansionary policy met with defeat. (Why this is so is beyond the scope of this note; the interested reader is directed to the paper cited below and to the references included therein. Suffice it to say here that an expansionary fiscal policy is interpreted as an increase in the "price" or "fee" which the federal government charges the economy for its goods and services, with predictable effects on the behavior of voters-cum-consumers.) Also, a model that combines FISCAL with three of Fair's metrics (two of growth and one of duration in office), plus the party of the incumbents, produced more accurate predictions than Fair's. Controlling for those four variables, a switch in fiscal policy from cutback to expansionary costs the incumbent about five percentage points of the two-party vote.
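The verbal coding rule for FISCAL can be sketched as a small function. `fiscal` below is a hypothetical helper written from the rule as stated in the text, not the authors' actual code, and their handling of edge cases may differ:

```python
def fiscal(curr_growth: float, prev_growth: float) -> int:
    """Code fiscal policy from the growth of the outlays/GDP ratio.

    curr_growth / prev_growth: change in the federal-outlays-to-GDP
    ratio over the current / previous presidential term.
    Hypothetical reading of the verbal rule in the text.
    """
    # Cutback (-1): the ratio falls, or its growth rate decelerates.
    if curr_growth < 0 or curr_growth < prev_growth:
        return -1
    # Expansionary (+1): the ratio grows at the same or a faster pace.
    return 1
```

On this coding, a term in which the ratio grew by 1.5 points following a cutback term flips FISCAL to +1, which is the switch driving the close 2004 forecast discussed below.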

Which brings us to a forecast for this year's election. Under Bush, federal outlays as a percent of GDP have grown by about 1.5 percentage points. This represents a switch in fiscal policy from the two consecutive cutback Clinton terms. Consequently, the fiscal model forecasts a close contest this year, with Bush edging the Democratic nominee 52 to 48 percent, the smallest victory margin of an incumbent president since Truman beat Dewey in 1948. (See Alfred G. Cuzán and Charles M. Bundrick, "Fiscal Effects on Presidential Elections: A Forecast for 2004.")

Forecasting the 2005 Parliamentary Elections in the United Kingdom

(03/24/05)

The British Journal of Politics and International Relations is about to publish a collection of articles on election forecasting. Five articles apply different models to the forthcoming British election and a sixth casts a skeptical eye on the entire enterprise. Courtesy of that journal, the abstracts are reproduced below.

Election Forecasting: Principles and Practice

Michael S. Lewis-Beck

To forecast an election means to declare the outcome before it happens. Scientific approaches to election forecasting include polls, political stock markets and statistical models. I review these approaches, with an emphasis on the last, since it offers more lead time. Consideration is given to the history and politics of statistical forecasting models of elections. Rules for evaluating such models are offered. Examples of actual models come from the United States, France and the United Kingdom, where this work is rather new. Compared to other approaches, statistical modeling seems a promising method for forecasting elections.

Forecasting Seats from Votes in British General Elections

Paul F. Whiteley

This article develops a forecasting model of seat shares in the House of Commons applied to general election outcomes. The model utilizes past information about party seat shares, together with data from the polls gathered prior to the election, to forecast the number of seats won by the parties. Once it has been estimated, the model will be used to make a forecast of the outcome of a possible general election in May 2005. The article starts by focusing on research into translating votes into seats, or the cube rule and its modifications. It then goes on to develop the forecasting model, which is based on electoral and poll data from 1945 to 2001.

Popularity Function Forecasts for the 2005 UK General Election

David Sanders

The article provides a set of contingent forecasts for the forthcoming UK general election. The forecasts are based on a popularity function derived from monthly time series data covering the period 1997-2004. On the most likely assumptions, the forecasts produce a clear Labour victory in the early summer of 2005, with the Liberal Democrats increasing their vote share by roughly four percentage points.

A Political Economy Forecast for the 2005 British General Election

Éric Bélanger, Michael S. Lewis-Beck and Richard Nadeau

Recently, we proposed an original statistical model for forecasting general elections in the United Kingdom, based on the observation of a few key indicators of the political and economic system. That vote function model was tested against the results of the 2001 general election. Here we evaluate the results of that test, and offer an appropriately revised model for the forecasting of the upcoming 2005 general election. According to our forecast, a Labour victory appears the most likely outcome.

Forecasting the 2005 General Election: A Neural Network Approach

Roman Borisyuk, Galina Borisyuk, Colin Rallings and Michael Thrasher

Although neural networks are increasingly used in a variety of disciplines there are few applications in political science. Approaches to electoral forecasting traditionally employ some form of linear regression modeling. By contrast, neural networks offer the opportunity to consider also the non-linear aspects of the process, promising a better performance, efficacy and flexibility. The initial development of this approach preceded the 2001 general election and models correctly predicted a Labour victory. The original data used for training and testing the network were based on the responses of two experts to a set of questions covering each general election held since 1835 up to 1997. To bring the model up to date, 2001 election data were added to the training set and two separate neural networks were trained using the views of our original two experts. To generate a forecast for the forthcoming general election, answers to the same questions about the performance of parties during the current parliament, obtained from a further 35 expert respondents, were offered to the neural networks. Both models, with slightly different probabilities, forecast another Labour victory. Modeling electoral forecasts using neural networks is at an early stage of development but the method is to be adapted to forecast party shares in local council elections. The greater frequency of such elections will offer better opportunities for training and testing the neural networks.

Election Forecasting: A Skeptical View

Cees van der Eijk

This brief note contains some doubts about the predominant kind of statistical election forecasting that is discussed in this issue. It is not meant to be a full-scale critique of the approach, the methods and the models that have been reported in the literature. Rather, it is intended to be an attempt to explicate some of the recurring feeling of disenchantment that can be experienced every time we come across these forecasts. . . . [T]his note briefly discusses the theoretical core of statistical forecasting models, and argues that its theoretical foundations are unsatisfactory. It then discusses the implausibility of the functional specification of the core specifications of forecasting models. It then concludes with some comments on the theoretical scope of the forecasting tradition. . . .