In early 2007, the Intergovernmental Panel on Climate Change released long-term forecasts of dramatic, human-caused global warming that it predicted would cause serious harm to many people. In the first public policy forecasting audit to appear in this SIG, Scott Armstrong and Kesten Green present the findings of their audit of the IPCC forecasts of global average temperature. They found that the IPCC forecasts have no validity and conclude that there is no more reason to expect global warming over the next 90 years than there is to expect global cooling. It would therefore be foolish and extremely costly to base public policy on the IPCC forecasts.

Scott Armstrong presented the Armstrong and Green paper at a keynote session of the International Symposium on Forecasting on Wednesday, June 27. Following in the tradition of Julian Simon's bet, Scott Armstrong announced a $20,000 Global Warming Challenge (each side to post $10,000, with the proceeds to go to a charitable cause nominated by the winner), betting that he will be able to forecast climate change more accurately than any current fully disclosed climate model.

News and the opportunity to share your views about the Global Warming Challenge are available at theclimatebet.com blog. The blog has had more than 30,000 hits since it was established to cover the Challenge.

The Armstrong and Green paper is available in full text. The authors used the Forecasting Audit Software available on the Forecasting Principles site to evaluate the IPCC forecasting procedures.

Climate scientist Jos de Laat of the Royal Dutch Meteorological Institute wrote of the paper:

“I very much agree with your statement that 'the forecasts in the report ... present the opinions of scientists transformed by mathematics and obscured by complex writing', I don't think that many climate scientists are willing to admit this… I was quite surprised, even a little bit disturbed, to learn that there exists a research field devoted to the science of prediction. I have a formal education in climate science (University degree, BS in physics, MS in Meteorology and Oceanography, PhD in climate science), so I've been around for some time now, yet I don't recall anyone ever mentioning your research area.”

Forecaster Kjell Stordahl, in a note for the Oracle, observed that the IPCC’s physical science report presents temperature data in a selective fashion and that the GCM models the IPCC relies on for forecasting exclude important variables such as solar activity. Regarding the validity of the IPCC forecasts, he wrote:

“I have added the real temperature data 2000-2007 in black as a horizontal line in the figure. We know for sure that [concentrations of CO2 in the atmosphere] have increased compared with the 2000 level. Hence [the “constant concentrations”] scenario is not relevant in comparisons. However, we see that the real temperature 2000-2007 is lower than IPCC’s temperature forecasts and the confidence/uncertainty limits even for this scenario.”

Peer review and independent audits are welcome and will be posted if the site guidelines are followed. In addition, all peer reviews must include the authors' names, positions, email addresses, and any relationships that might be construed as sources of bias.

David Henderson’s Critique of the IPCC Process and Description of Bias Among Policymakers

Professor David Henderson is a former Head of Economics and Statistics at the OECD. He investigated the IPCC process and concluded that it is neither objective nor authoritative but is “biased towards alarm”.

Henderson, D. (2007) “Governments and climate change issues: The case for rethinking.” World Economics, 7(4), 183-228. Available in full text.

Henderson has also written an op-ed discussing the issue: Henderson, D. (2007). Misplaced trust. Wall Street Journal (Europe), 11 October, 13 (bottom half). Available in full text.

Henderson has extended his analysis to describe the bias towards alarm over climate change among policymakers and their advisors.

Henderson, D. (2007). New light or fixed presumptions? The OECD, the IMF and the treatment of climate change issues. World Economics, 8, 203-221. Available in full text.

On 22 April 2009, David Henderson gave a talk at the Oxford Business & Environment Conference 2009, "Beyond Kyoto: Green Innovation and Enterprise for the 21st Century", in which he presented his dissenting analysis in a paper titled "Climate Change Issues: A Dissenting Voice".

Green and Armstrong Call for Scientific Forecasts of Sea Levels

Dire consequences have been predicted to arise from warming of the Earth over the coming decades of the 21st Century. Enormous rises in sea level are among the more dramatic forecasts. Current sea-level forecasts are based on experts’ judgments of what will happen, and those judgments are in turn based on experts’ predictions of global warming; no reference is made to scientific forecasts. As Green and Armstrong (2007) showed, experts’ forecasts have no validity in situations characterized by high complexity, high uncertainty, and poor feedback. Numerous other scientists have also criticized this approach.

To date we are unaware of any forecasts of sea levels that adhere to proper (scientific) forecasting methodology and our quick search on Google Scholar came up short. If such forecasts are available, please provide citations and support as to their validity. As a first step, it would be useful to summarize studies that extrapolate long-term trends; this summary could provide a benchmark for comparison with other studies.

We will provide free access to them at publicpolicyforecasting.com and request commentary at theclimatebet.com. Media outlets should be clear when they are reporting on scientific work and when they are reporting on the opinions held by some scientists. Without scientific support for their forecasting methods, the concerns of scientists should not be used as a basis for public policy.

Kesten Green and Scott Armstrong

24 September, 2007.

Scott Armstrong to Speak At American Enterprise Institute on Strengths and Weaknesses of Climate Models

Tuesday, February 26, 2008 9:00 AM - 12:15 PM at Wohlstetter Conference Center, Twelfth Floor, AEI, 1150 Seventeenth Street, N.W., Washington, D.C. 20036

The spectrum of environmental policy challenges—from climate change to nuclear waste storage to coastal shoreline erosion—depends on sophisticated forecasting and modeling techniques. How sound and reliable are our environmental models? What are the inherent limits of environmental science when attempting to forecast the future under different policy regimes? Are there ways to improve environmental forecasting for policymaking purposes?

Daniel Botkin, a research professor in the Department of Ecology, Evolution and Marine Biology at the University of California, Santa Barbara, and Orrin Pilkey, the James B. Duke Professor Emeritus of Geology at Duke University, will discuss past performance in environmental modeling. J. Scott Armstrong, a forecasting expert and professor of marketing at the Wharton School, and Jim Manzi, CEO of Applied Predictive Technologies, will discuss the strengths and weaknesses of climate models. Stephen F. Hayward, AEI’s F. K. Weyerhaeuser Fellow, and Kenneth P. Green, a resident scholar at AEI, will moderate. Registration information is available at the AEI site.

Monckton’s “Apocalypse? No!” speech provides an audit of the inputs to the dire predictions in An Inconvenient Truth

It is important to ensure the data and judgments that are the inputs to a forecasting process are adequate, valid, and unbiased. Christopher Monckton, in his speech at the Cambridge Union in 2007, provided a point-by-point audit of the inputs to the predictions made by Al Gore in An Inconvenient Truth. The speech, with question and answer session, is available on DVD from the Science and Public Policy Institute site.

Kesten Green to speak on “Scientific Forecasting of Climate Change” at The 2008 International Conference on Climate Change

The conference has the goals of bringing together leading scientists, economists, and policy experts with skeptical views on climate change; sponsoring presentations and papers that make genuine contributions to the global debate over climate change; sharing the results of the conference with policymakers, civic and business leaders, and the interested public; and setting the groundwork for conferences and publications that support sound science on the issue of climate change. The conference is sponsored by The Heartland Institute and is being held at the Marriott New York Marquis Times Square Hotel from March 2 to 4, 2008. Visit the conference site.

Kesten Green submits NZ Climate Change Bill is not based on science; would cause harm

In response to forecasts of dangerous global warming, the New Zealand Government is proposing to introduce a greenhouse gas emissions trading scheme and to restrict electricity generation using coal and gas. Kesten Green delivered an oral submission to the Select Committee considering the Climate Change (Emissions Trading and Renewable Preference) Bill. He told the Select Committee that forecasts of dangerous manmade global warming were not scientific and that the Bill was neither necessary nor desirable.

NZ Government holds hearings on Emissions Trading Scheme: Bob Carter questions cost/benefit forecasts; Kesten Green submits the Scheme is based on spurious forecasts of dangerous warming

The New Zealand government's Emissions Trading Scheme Review Committee has been hearing submissions on the Scheme, which was created under legislation passed by the previous government toward the end of its term in office. The Scheme, for trading carbon dioxide emissions rights, is based on the false premises that there exist valid forecasts of dangerous warming resulting from human emissions of carbon dioxide, that such warming would on balance be bad, and that policies such as New Zealand's Scheme would deliver a net benefit. On 4 May 2009, Kesten Green submitted that the best available forecast of global mean temperatures was that they would not change. In his oral submission, he appealed to the Committee not to implement the Scheme. Professor Bob Carter earlier submitted that the Scheme could not be justified on the basis of current knowledge of likely costs and benefits.

Auditing the information used by climate forecasters: Steve McIntyre’s Climate Audit website

There are 19 forecasting principles that provide guidance on identifying, collecting, and preparing data to be used for forecasting. These principles include 3.3 Avoid biased data sources, 3.4 Use diverse sources of data, 4.1 Use unbiased and systematic procedures to collect data, 4.2 Ensure that information is reliable and that measurement error is low, 4.3 Ensure that the information is valid, 4.4 Obtain all of the important data, 4.6 Obtain the most recent data, 5.1 Clean the data, and 5.4 Adjust for unsystematic past events. While some of these principles may appear to be common sense, they are often violated in practice, with the consequence that forecasts are poor or even invalid. The Climate Audit site reports the findings of the often painstaking detective work required to determine whether the data used by climate scientists are consistent with these principles.

Complex models of climate at odds with forecasting principles predict temperatures will rocket… or plummet

When the situation is complex and there is uncertainty about causal relationships, forecasting principle 6.6 dictates that forecasters should “Use few variables and simple relationships”. The opposite approach was used in the Intergovernmental Panel on Climate Change models, and there have been calls (1, 2) for even more money to enable modelers to create models that are even more complex. Patrick Frank, in an article in Skeptic (2008, 14:1) titled “A climate of belief”, showed that a very simple model, with the major non-water-vapor greenhouse gases (CO2, methane, and N2O) as the only causal variable and using the IPCC assumptions about the direct and indirect effects of changes in atmospheric GHG concentrations, makes predictions of global average temperatures that are closer to an “ensemble average” of forecasts from the state-of-the-art climate models used by the IPCC than are those of any of the individual complex models. In other words, putting aside whether the models are valid or the forecasts are accurate, there is no need for complex models in order to make those forecasts.

Frank’s simple model illustrates part of the purpose of principle 6.6, namely to aid understanding and reduce forecasting costs. We are not sure what the cost of the complex modeling efforts was relative to the simple one but, given the number of people and the computer time involved in the complex models, a ratio of 1 million to 1 is a conservative guess. Frank’s model is simple enough for anyone to understand. That is a good thing, because the modeler’s assumptions are clear and can be tested and disputed, and the disputation can be understood by others. This makes it easier to reject a false model and thereby to advance scientific understanding. Thus the use of simple models reduces mistakes, another purpose of the principle.

The primary purpose of many of the forecasting principles is, naturally enough, to improve accuracy; principle 6.6 is no exception. Frank demonstrates that the IPCC grossly under-reports the cumulative uncertainty of the model forecasts. A figure in Frank’s article shows that, when proper allowance is made for uncertainty about the effects of clouds and greenhouse gases, the nominal bounds of errors in the complex IPCC models’ forecasts of temperature change by 2100 are plus or minus 120°C. As a consequence, the IPCC projections contain no useful information. It would be foolish indeed to base public policy on forecasts from such models.
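
The arithmetic of such compounding uncertainty can be illustrated with a short Python sketch. The sensitivity, forcing, and uncertainty values below are placeholder assumptions for illustration only; they are not Frank's numbers, and the sketch does not reproduce his analysis.

    # Minimal sketch of how per-year forcing uncertainty compounds over a
    # century in a one-variable linear projection. All values are assumed.
    import math

    SENS = 0.8       # degC per W/m^2; assumed climate sensitivity
    DF_YEAR = 0.04   # assumed annual increment in GHG forcing, W/m^2
    U_YEAR = 4.0     # assumed per-year uncertainty in cloud forcing, W/m^2

    projection = 0.0
    systematic = 0.0  # error bound if the per-year error is a consistent bias
    variance = 0.0    # error accumulation if the per-year error is random

    for year in range(100):
        projection += SENS * DF_YEAR
        systematic += SENS * U_YEAR
        variance += (SENS * U_YEAR) ** 2

    print(f"central projection over 100 years: {projection:.1f} C")
    print(f"error bound, systematic case: +/- {systematic:.0f} C")
    print(f"error bound, random case:     +/- {math.sqrt(variance):.0f} C")

Either way, the accumulated uncertainty quickly dwarfs the projected change itself, which is the point of Frank's far more careful analysis.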

Patrick Frank’s article is available from the Skeptic site.

Global warming, trends and the random walk: choosing an appropriate benchmark

A contrary view as to the appropriateness of the 'no change' benchmark proposed by Green and Armstrong (2007) in their paper "Global warming: Forecasts by scientists versus scientific forecasts" was put by us (Fildes and Kourentzes) at the International Symposium on Forecasting in Nice. While we agree with Green and Armstrong’s core argument as to the need to apply rigorous standards of forecast evaluation to the IPCC forecasts of temperature change, our presentation focused on the choice of an appropriate benchmark. We considered the IPCC's approach to validation in the light of some key forecasting principles and then applied those principles to produce forecasts of global temperature over the next 10 and 20 years. We used standard methods of forecast evaluation, including a rolling origin; the forecasting methods included exponential smoothing and Holt's linear trend. The forecasts, evaluated ex ante over the last 50 years and based on methods quite different from the large-scale modeling efforts of the IPCC, support the broad picture of warming. The Green and Armstrong conclusion that cooling is equally likely is unsustainable. Yes, it is getting hotter - or at least that is our best benchmark forecast.

Robert Fildes and Nikos Kourentzes, Lancaster Centre for Forecasting
16 August 2008.
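
For readers unfamiliar with rolling-origin evaluation, the following Python sketch shows the basic procedure of comparing a no-change benchmark against Holt's linear trend. The series is synthetic and the smoothing parameters arbitrary; the sketch illustrates the evaluation method only, not the Lancaster data or results.

    # Rolling-origin comparison: no-change benchmark vs. Holt's linear trend.
    import numpy as np

    def holt_forecast(y, alpha=0.3, beta=0.1, h=10):
        """Holt's linear trend method: return the h-step-ahead forecast."""
        level, trend = y[0], y[1] - y[0]
        for t in range(1, len(y)):
            prev_level = level
            level = alpha * y[t] + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return level + h * trend

    rng = np.random.default_rng(0)
    y = np.cumsum(rng.normal(0.005, 0.1, 150))  # synthetic stand-in series

    h = 10
    errors_nochange, errors_holt = [], []
    for origin in range(50, len(y) - h):  # roll the forecast origin forward
        actual = y[origin + h]
        errors_nochange.append(abs(actual - y[origin]))  # no-change forecast
        errors_holt.append(abs(actual - holt_forecast(y[:origin + 1], h=h)))

    print("MAE, no-change benchmark:", np.mean(errors_nochange))
    print("MAE, Holt's linear trend:", np.mean(errors_holt))

Because every forecast is made only from data available at its origin, the comparison is ex ante in the sense used above.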

Response

We do not agree with Fildes and Kourentzes's suggestion that an extrapolation of the smoothed trend in late-20th Century temperatures is a superior benchmark to the ‘no change’ forecast. A benchmark forecast is an expectation that is reasonable given the situation. It is evident from the data that average global temperature can drift up or down for years, but will then reverse direction. There is insufficient knowledge about climate to say why such temperature trends have occurred and reversed in the past and when they will occur in the future. Many years of research on forecasting have shown that it is not reasonable to extrapolate trends in such a situation. While it is not surprising that trend extrapolation methods outperform a "no change" forecast when they are applied to a period with an evident trend, they are not a good guide to what will happen next in a situation characterized by high uncertainty and complexity. Indeed, the best available data on global average temperature (obtained from satellites) show a declining trend since 1998.

Kesten Green and Scott Armstrong
5 September 2008

Independent summary for policymakers of the IPCC Fourth Assessment Report

Out of concern that neither the IPCC science authors nor the authors of the IPCC's summary for policymakers were free from bias or immune to political imperatives, the Fraser Institute commissioned an Independent Summary for Policymakers. It is important in forecasting to obtain unbiased judgments and to ensure that forecasting is free from politics, and so we make the ISPM report available here.

Climate Change Reconsidered: The report of the Nongovernmental International Panel on Climate Change

This report of more than 700 pages on the science of climate change and its effects, by lead authors Craig Idso and Fred Singer, was published in June 2009. The first chapter, "Global Climate Models and Their Limitations", discusses the problems of forecasting climate. Other chapters are also relevant to forecasting as they examine the available data, the evidence on relationships between proposed causal variables and climate, and the relationships between climate changes and the health and well-being of people and other species. The report is available here.

Paper presented at the ISF 2009 in Hong Kong on "Forecasting for climate policy: CO2, global temperatures, and alarms"

Scott Armstrong presented a paper co-authored with Kesten Green, Andreas Graefe, and Willie Soon at the International Symposium on Forecasting in June that examined some of the lessons for climate policy from evidence-based forecasting. The authors described the lack of scientific long-term forecasts of global temperatures, of the impacts of temperature changes, and of the effects of policies. The paper explained the need for simple methods and conservative forecasts in the face of uncertainty and complexity and pointed out that simple no-change benchmark forecasts are sufficiently accurate for policy decisions. In contrast, simple causal models with CO2 as the policy variable are not credible. Prediction markets for temperatures three and ten years ahead agree that the no-change forecast is a more likely outcome than the IPCC's 0.03C-per-annum forecast. Finally, similar (analogous) alarms in the past, identified by the authors and others, turned out to be false alarms. The slides for the talk are available as a PowerPoint file and as a PDF file.

Letter to Environmental Protection Agency regarding regulation of greenhouse gases

Thirty-five scientists signed a letter addressed to the EPA Administrator, the Honorable Lisa P. Jackson, on October 7, 2009, expressing concern that proposed rulemaking on the regulation of greenhouse gases would be based on unscientific forecasting procedures. The letter asked:

  1. Is the Earth’s climate changing in an unusual or anomalous fashion?
  2. Does the science permit rejection of the hypothesis that CO2 is only a minor player in the Earth’s climate system?
  3. Can climate models that assume CO2 is a key determinant of climate change provide forecasts of future conditions that are adequate for policy analysis?
  4. Can we reject the hypothesis that the primary drivers of the Earth’s climate system will continue to be natural (non-anthropogenic) forces and internal climate variability?

With respect to question 3, the letter's authors further ask:

  • Do global climate models properly handle “feedbacks” in the Earth’s climate system?
  • Do global climate models perform well in simulating the climate and compare well when forecasting the impact of increased levels of CO2?
  • Have modelers followed the well-documented and validated rules set forth by academic forecasting professionals?
  • Did these models forecast the recent decline in temperatures?

“Evidence in the literature would strongly suggest that many respected scientists would answer ‘no’ to each of these four questions, which may well eliminate any possible rationale for regulating CO2.”


A copy of the letter, including the signatories, is available here.

Benchmark forecast for global mean temperatures: No change for 100 years

In their new paper in a special section of the International Journal of Forecasting on decision making and planning under low levels of predictability (edited by Spyros Makridakis and Nassim Taleb), Kesten Green, Scott Armstrong, and Willie Soon asked whether it is possible to make useful forecasts for policy makers about changes in global climate up to 100 years ahead. Using evidence-based forecasting principles, they chose the "no-change" forecast as their benchmark and found that its mean absolute errors were only 0.18C for 20 years ahead and 0.24C for 50 years ahead; even if perfectly accurate forecasts of global mean temperature were possible, they would therefore be of little more use to policy makers than the benchmark.

They nevertheless demonstrated the use of benchmarking by assessing the relative performance of the Intergovernmental Panel on Climate Change's medium projection of +0.03C per year against historical data from 1851 to 1975. The errors from the IPCC projections were more than seven times greater than the errors from the simple benchmark that assumed no change in temperatures.
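
The benchmarking procedure can be illustrated with a short sketch. The temperature series below is a synthetic stand-in, not the historical series used in the paper, and the code is an illustration of the method rather than the authors' procedure.

    # Compare a no-change benchmark with a +0.03 C/year projection at a
    # fixed horizon, using a synthetic stand-in for the anomaly series.
    import numpy as np

    rng = np.random.default_rng(1)
    temps = np.cumsum(rng.normal(0.0, 0.1, 160))  # hypothetical anomalies

    H = 20  # forecast horizon, years
    err_nochange, err_trend = [], []
    for origin in range(len(temps) - H):
        actual = temps[origin + H]
        err_nochange.append(abs(actual - temps[origin]))            # no change
        err_trend.append(abs(actual - (temps[origin] + 0.03 * H)))  # +0.03C/yr

    print("MAE, no-change benchmark: ", np.mean(err_nochange))
    print("MAE, +0.03C/yr projection:", np.mean(err_trend))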

Their paper* is available here.

*Green, K. C., Armstrong, J. S., & Soon, W. (2009). Validity of climate change forecasting for public policy decision making. International Journal of Forecasting, 25, 826-832.

History shows the manmade global warming alarm to be false – and suggests that harmful policies will persist.

Using a forecasting method that they have developed, Dr. J. Scott Armstrong from the Wharton School and Dr. Kesten C. Green from the International Graduate School of Business at the University of South Australia conclude that alarm over “dangerous manmade global warming” is the latest example of a common social phenomenon involving alarming but unscientific forecasts that prove to be wrong. This is a preliminary finding from the “global warming analogies forecasting project.” The researchers stressed that the findings are preliminary because they are still collecting and coding information on similar situations from the past.

Armstrong and Green used a method known as “structured analogies.” For the global warming analogies forecasting project, the method first involved conducting a wide and objective search for situations similar to the alarm over forecasts of dangerous manmade global warming. For each analogous situation the forecasting procedures used by the alarmists and the actual outcomes of the situations were coded. The structured analogies procedures had previously been shown to provide excellent forecasts compared to those from commonly used alternative procedures.

To date, 71 situations have been proposed and 26 of them were found to meet all criteria of similarity. Of the latter, none were based on forecasts from scientific procedures. Instead they were based on dramatic speculation of one sort or another.

Typically, the alarmists recommend government action, and governments usually respond: the alarmists recommended action in 25 of the 26 analogous situations, and governments took action in 23 of them.

We asked: How many of the 26 analogous alarming forecasts were accurate?

The answer is “none”.

In how many of the 23 analogies were the government solutions shown to be helpful?

None. In fact, in 20 situations there was substantial long-term harm from the government solutions.
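
As an illustration of the tallying step in a structured-analogies analysis, the sketch below codes a few analogy records and counts the outcomes. The records are hypothetical placeholders, not the authors' coded situations.

    # Hedged sketch of tallying coded analogies; records are placeholders.
    from dataclasses import dataclass

    @dataclass
    class Analogy:
        name: str
        scientific_forecast: bool  # was the alarming forecast scientific?
        government_acted: bool
        forecast_accurate: bool
        action_helpful: bool

    analogies = [
        Analogy("hypothetical alarm A", False, True, False, False),
        Analogy("hypothetical alarm B", False, True, False, False),
        Analogy("hypothetical alarm C", False, False, False, False),
    ]

    print("situations meeting criteria: ", len(analogies))
    print("based on scientific forecasts:", sum(a.scientific_forecast for a in analogies))
    print("governments took action:      ", sum(a.government_acted for a in analogies))
    print("accurate alarming forecasts:  ", sum(a.forecast_accurate for a in analogies))
    print("helpful government actions:   ",
          sum(a.action_helpful for a in analogies if a.government_acted))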

The authors are hopeful that the continuing evidence on the anti-scientific procedures used by people involved in the manmade global warming alarmist movement, such as has been exposed by ClimateGate, will help to reduce the damage from the alarm in the long run. However, the analogies offered little hope on that score. Most of the previous alarms, such as over DDT and electromagnetic fields, continued to cause substantial harm many years after they had been shown to be false.

Julian Simon and others had suggested that such a pattern exists for forecasts of doom, but we were surprised at the strength of our findings. In retrospect, the findings seem less surprising. Extreme events are difficult to forecast, especially in complex and uncertain situations. So the application of unscientific forecasting procedures supported by politics would be unlikely to produce useful forecasts.

The authors stress that this is an early progress report. They hope to stimulate global warming alarmists to propose analogies that support their forecasts. They also suggest that all important public policy forecasts would benefit from use of the structured analogies method.

For the latest summary of this research, go to their paper, “Effects of the Global Warming Alarm.”

For further information, contact Scott Armstrong or Kesten Green.

Climate scientists question whether policy makers have properly forecast all the costs and benefits of policies to reduce human carbon dioxide emissions


A paper by Harvard Astrophysicist Willie Soon and Delaware State Climatologist David Legates calls into question three key legs of climate change policy:

  1. The extent to which it is possible for people to control global and regional climate.
  2. Whether all costs and benefits of a changing climate have been properly forecast and weighed up.
  3. Whether carbon trading schemes will lead to any reduction in the level of atmospheric carbon dioxide.

A copy of their paper is available here.

Comments on the United States Department of State’s U.S. Climate Action Report 2010, 5th ed.


Scott Armstrong, Kesten Green, and Willie Soon made a submission titled "Global Warming Alarm Based on Faulty Forecasting Procedures" on the State Department's report:

Our research findings challenge the basic assumptions of the State Department’s Fifth U.S. Climate Action Report (CAR 2010). The alarming forecasts of dangerous manmade global warming are not the product of proper scientific evidence-based forecasting methods. Furthermore, there have been no validation studies to support a belief that the forecasting procedures used were nevertheless appropriate for the situation. As a consequence, alarming forecasts of global warming are merely the opinions of some scientists and, for a situation as complicated and poorly understood as global climate, such opinions are unlikely to be as accurate as forecasts that global temperatures will remain much the same as they have been over recent years. Using proper forecasting procedures we predict that the global warming alarm will prove false and that government actions in response to the alarm will be shown to have been harmful.

Whether climate will change over the 21st Century, by how much, in what direction, to what effect, and what if anything people could and should do about any changes are all forecasting problems. Given that policy makers currently do not have access to scientific forecasts for any of these, the policies that have been proposed with the avowed purpose of reducing dangerous manmade global warming—such as are described in CAR 2010 Chapters 4, 5, 6, and 7—are likely to cause serious and unnecessary harm.

In this comment on CAR 2010, we summarize findings from our research on forecasting climate. Most of our findings have been published in the peer-reviewed literature and all have been presented at scientific meetings. They are easily accessible on the Internet and we provide links to them.

A copy of their submission is available here.

Checking climate forecasts

Mark Lawson, a senior journalist at the Australian Financial Review and author of the book A guide to climate change lunacy - bad forecasting, terrible solutions, has written:

The four major reports produced by the Intergovernmental Panel on Climate Change since 1990 have repeatedly forecast a dire temperature future for the earth. But despite nearly 20 years of consuming these forecasts, government officials and politicians have not thought to ask for an independent check of whether any of the IPCC’s forecasts to date have been borne out. In other words, have any of its dire forecasts been proved right by events?

Instead of taking this surprisingly simple step which, at its basic level, requires common sense and some graph work, officials, politicians and the public have taken the assurance of the IPCC that the forecasts have been peer-reviewed. That is, other scientists have checked the forecasts and approved them. But peer review simply has no relevance to forecasts. All it does is assure us, the consumers of these forecasts, that they have been made according to the scientific orthodoxy of the time. We still have no idea whether the orthodoxy is right, or whether the forecasting system itself is of any use.

We can make our own, commonsense evaluation, and that evaluation is far from favourable to the IPCC... [more]

Evaluation of Australian CO2 emission reduction targets

Roger A. Pielke, Jr. of the University of Colorado has conducted an evaluation of CO2 emission reduction targets and timetables proposed for Australia. His analysis, to be published in the journal Environmental Science & Policy, concludes that achieving the goals would require "dozens of new nuclear power stations or thousands of new solar thermal plants within the next decade." The paper is available here, and links to analyses of U.K., Japanese, and other proposals are available from Pielke's blog.

"Let's Deal in Science and Facts" - A letter to the Wall Street Journal

Bjorn Lomborg ("Can Anything Serious Happen in Cancun?", op-ed, Nov. 12) claims that government spending on global warming policies is wasted, but he assumes that global warming caused by carbon dioxide is a fact. It is not. We base this statement not on the opinions of 31,000 American scientists who signed a public statement rejecting this warming hypothesis (the "Oregon Petition"), but rather because the forecasts of global warming were derived from faulty procedures.

We published a peer-reviewed paper showing that the forecasting procedures used by the U.N.'s Intergovernmental Panel on Climate Change violated 72 of 89 relevant principles (e.g., "provide full disclosure of methods"). The IPCC has been unable to explain why it violated such principles. In response, we developed a model that follows the principles. Because the climate is complex and poorly understood, our model predicts that global average temperatures will not change.

In testing the models on global temperature data since 1850, we found that the long-range (91-to-100-years ahead) forecast errors from the IPCC's projection were 12 times larger than the errors from our simple model.

Mr. Lomborg concludes there are better ways for governments to spend the funds devoted to global warming. We suggest this money should instead be returned to taxpayers.

J. Scott Armstrong, Kesten C. Green, and Willie Soon.

See the letter on the WSJ site here.

"The EPA Permitorium" - Wall Street Journal looks at EPA regulation without science

In a November 22, 2010 article, the Wall Street Journal decried the fact that the Environmental Protection Agency "has proposed or finalized 29 major regulations and 172 major policy rules" under the Obama administration without evidence of proper care. We suggest that all of the forecast costs and benefits of these regulations should be assessed against scientific forecasting principles. The Forecasting Audit Software provides a convenient way for people to do this. The Wall Street Journal article is available here.

"False prophecies beget faulty policies" - Willie Soon and Madhav Khandekar

The annual climate summit opened in Cancun, Mexico this week. A few days earlier, while releasing a new report, Indian Minister of Environment and Forests Jairam Ramesh emphasized: “It is imperative” that India has “sound, evidence-based assessments on the impacts of climate change.”

Not surprisingly, the report, “Climate Change and India: A sectoral and regional analysis for the 2030s,” claims India will soon be able to forecast the timing and intensity of future monsoons that are so critical to its agricultural base.

Could 250 of India’s top scientists be wrong when they say their computers will soon be able to predict summer monsoon rainfall during the 2030s, based on projected carbon dioxide trends? Do “scenarios” generated by climate models really constitute “sound, evidence-based assessments”? Are attempts to predict monsoons and other climate events any more valid for ten or twenty years in the future than for a century away?

We do not believe it is yet possible to forecast future monsoons, despite more than two centuries of scientific research, or the claims and efforts of these excellent scientists. The Indian summer monsoonal rainfall remains notoriously unpredictable, because it is determined by the interaction of numerous changing and competing factors, including: ocean currents and temperatures, sea surface temperature and wind conditions in the vast Indian and Western Pacific Ocean, phases of the El Nino Southern Oscillation in the equatorial Pacific, the Eurasian and Himalayan winter snow covers, solar energy output, and even wind direction and speed in the equatorial stratosphere some 30-50 kilometers (19-31 miles) aloft.

Relying on computer climate models has one well-known side effect: Garbage in, gospel out. Current gospel certainly says CO2 rules the climate, but any role played by carbon dioxide in monsoon activity is almost certainly dwarfed by these other, major influences. Computer climate models have simply failed to confirm current climate observations, or project future climatic changes and impacts. More...

"Wind power: questionable benefits, concealed impacts" - Paul Driessen

Energy, shale gas, hydraulic fracturing and wind power are all much in the news – especially as events in the Middle East continue to unfold, and countries realize new drilling technologies have unlocked previously undreamed natural gas riches in formerly inaccessible formations deep underground. This natural gas is a true game changer.

One important aspect of the ongoing debate over energy and economic benefits versus environmental and groundwater risks has not been addressed before, however. It is the comparison between still-undetermined and speculative impacts of hydraulic fracturing on water supplies versus the impacts of “environment-friendly” wind turbines on a wide variety of ecological values.

The article presents the first analysis of these risks that I am aware of. It also argues that EPA Administrator Lisa Jackson’s proposal to apply a “life-cycle” or “cradle-to-grave” assessment to shale gas “fracking” technologies would be equally valuable, and indeed more so, for evaluating wind power costs and benefits. The article is available here.


We will gladly publish careful forecasting audits on these pages.

Armstrong and Green on climate models in NIPCC's Climate Change Reconsidered II

A summary of the Green and Armstrong critique of the use of complex mathematical models for forecasting long-term climate change is published in the Nongovernmental International Panel on Climate Change's Climate Change Reconsidered II: Physical Science (2013). Links to their section of the book and to the whole book are available from the reference below:

Armstrong, J. S., & Green, K. C. (2013). Global climate models and their limitations: Model simulation and forecasting - Methods and principles. pp. 14-17 in Idso, C. D., Carter, R. M., & Singer, S. F. (Eds.), Climate Change Reconsidered II: Physical Science. Chicago, IL: The Heartland Institute.

Occasional Forecasting Principle

7.3 Be conservative in situations of high uncertainty or instability

Description:

To the extent that uncertainties and instabilities occur in the data or in expectations about the future, reduce changes in time-series forecasts.

Purpose:

To improve forecast accuracy.

Conditions:

This applies when the data contain much measurement error, high variability about the trend line has occurred or is expected, instabilities have occurred or are expected, or the forecast goes outside the range of the historical data.

Discussion:

Forecasts of dramatic global warming violate this principle. There is much uncertainty about the measurement of global temperature due to, for example, the coverage of weather stations and heat island effects in modern data, and the reliability of proxy temperature measures such as those obtained from ice cores and tree rings. Alternative measures have been proposed, but are not widely used in forecasting. Temperatures vary greatly over periods of hours and days as well as over years, decades, and centuries. Instabilities occur due to unpredictable events including, for example, volcanic eruptions and poorly understood phenomena such as El Nino weather patterns. Uncertainty over and instability in global temperatures mean that forecasts that depart dramatically from recent and longer-term trends, such as those of dramatic global warming, cannot be justified.
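
One simple way to act on the principle in a time-series forecast is to damp the trend as the horizon lengthens. The Python sketch below is a minimal illustration with assumed values throughout; heavier damping moves the long-horizon forecast toward the no-change benchmark.

    # Minimal sketch of principle 7.3: damp the trend as the horizon grows
    # rather than extrapolating it in full. All numbers are assumed.
    def conservative_forecast(last_value, annual_trend, horizon, damping=0.9):
        """Apply a smaller fraction of the trend in each successive year."""
        forecast, step = last_value, annual_trend
        for _ in range(horizon):
            forecast += step
            step *= damping  # greater uncertainty -> choose stronger damping
        return forecast

    # With damping of 0.9, a 0.02 C/yr trend adds about 0.2 C over 100
    # years rather than 2 C; damping of 0 applies only the first year's
    # trend, which is close to the no-change forecast.
    print(conservative_forecast(14.5, 0.02, 100, damping=0.9))
    print(conservative_forecast(14.5, 0.02, 100, damping=0.0))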

The Public Policy Forecasting special interest group (publicpolicyforecasting.com) has been established to provide a platform for the rational analysis of governments' policies.

Forecasting is more important for the public sector than for the private sector because public policy involves coercion, can result in large changes, and is not guided by prices. The injunction to "first, do no harm" is therefore appropriate for public policy decision making. Scientific forecasting can help decision makers to choose the best policies.

The publicpolicyforecasting.com pages will include evidence-based assessments of the forecasting procedures behind major policy initiatives in areas such as gun control, capital punishment, climate change, immigration, public construction projects, and minimum wage laws. The primary means of analysis will be forecasting audits.

Audits are sought from those who prepare forecasts relevant to policy changes. In addition, audits by people other than those involved with the forecasts are invited.

Audits completed to date

The Forecasting Problem

To determine the best policies to implement now to deal with the social or physical environment of the future, a policy maker should obtain forecasts and prediction intervals for each of the following:

  1. What will the physical or social environment of interest be like in the future in the absence of any policy change?
  2. If reliable forecasts of changes in that environment can be obtained and the forecasts are for substantive changes, then it would be necessary to forecast the effects of the changes on the health of living things and on the health and wealth of humans.
  3. If reliable forecasts of the effects of changed future environment on the health of living things and on the health and wealth of humans can be obtained and the forecasts are for substantial harmful effects, then it would be necessary to forecast the costs and benefits of alternative policy proposals. For a proper assessment, costs and benefits must be comprehensive.
  4. If reliable forecasts of the costs and benefits of alternative policy proposals can be obtained and at least one proposal is predicted to lead to net benefits, then it would be necessary to forecast whether the policy changes can be implemented successfully. (A sketch of this gated sequence follows the list.)
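
A minimal sketch of this gated sequence, with hypothetical stand-in functions for each forecasting stage:

    # Each stage must yield a reliable forecast that supports action before
    # the next question is worth asking. The stage functions are hypothetical.
    def policy_decision(forecast_environment, forecast_effects,
                        forecast_net_benefit, forecast_implementation):
        """Each stage is a callable returning (reliable, supports_action)."""
        stages = (forecast_environment, forecast_effects,
                  forecast_net_benefit, forecast_implementation)
        for stage in stages:
            reliable, supports_action = stage()
            if not (reliable and supports_action):
                return "no policy change"  # default when any stage fails
        return "implement the policy"

    # Example: a reliable forecast of no substantive change stops at stage 1.
    print(policy_decision(lambda: (True, False), lambda: (True, True),
                          lambda: (True, True), lambda: (True, True)))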

Guidelines

The guidelines for the SIG are the same as those that apply to the forecastingprinciples.com site as a whole. In addition, all posted peer reviews must include the authors' names, positions, email addresses, and any relationships that might be construed as potential sources of bias.