Climate models produce projections, not probabilities

By Gavin Schmidt, November 26, 2007

"I'm just a model whose intentions are good, Oh Lord, please don't let me be misunderstood," Nina Simone may as well have sung. Models are fundamentally necessary in all sciences. They exist to synthesize knowledge, to quantify the effects of counteracting forces, and most importantly, to make predictions–the evaluation of which is at the heart of the scientific method. If those predictions prove accurate, the models–and more importantly, their underlying assumptions–become more credible.

In an observational science such as the study of climate (or cosmology or evolution), we don't get to do controlled laboratory experiments. We only have one test subject, the planet Earth, and we are able to run only one uncontrolled experiment at a time. It would be unfortunate indeed if this experiment had to run its course before we could say anything about the likely results!

Climate models are amalgams of fundamental physics, approximations to well-known equations, and empirical estimates (known as parameterizations) of processes that either can't be resolved (because they happen on too small a physical scale) or that are only poorly constrained from data. Twenty or so climate groups around the world are now developing and running these models. Each group makes different assumptions about what physics to include and how to formulate their parameterizations. However, their models are all limited by similar computational constraints and have developed in the same modeling tradition. Thus while they are different, they are not independent in any strict statistical sense.
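To make the notion of a parameterization concrete, here is a minimal Python sketch of a zero-dimensional energy balance model; it is not any group's actual code, and all constants are illustrative Budyko-style values. Instead of solving radiative transfer explicitly, the outgoing longwave flux is represented by an empirical linear fit to observations, which is exactly the kind of shortcut a parameterization takes.

```python
# Toy zero-dimensional energy balance model (an illustration, not a real
# climate model). Outgoing longwave radiation is *parameterized* as a
# linear function of temperature, OLR = A + B*T (an empirical
# Budyko-style fit), rather than computed from radiative transfer.
SOLAR = 342.0        # global-mean incoming solar flux (W/m^2)
ALBEDO = 0.3         # planetary albedo
A, B = 203.3, 2.09   # empirical OLR coefficients (W/m^2 and W/m^2 per degC)
HEAT_CAP = 2.9e8     # effective heat capacity (J/m^2 per degC), ~70 m of ocean
YEAR = 3.15e7        # seconds per year

def step(temp_c, dt_seconds, forcing=0.0):
    """Advance the global-mean temperature by one explicit Euler step."""
    absorbed = SOLAR * (1.0 - ALBEDO) + forcing
    emitted = A + B * temp_c             # the parameterization in action
    return temp_c + dt_seconds * (absorbed - emitted) / HEAT_CAP

t = 10.0
for _ in range(200):                     # spin up to equilibrium
    t = step(t, YEAR)
print(f"unforced equilibrium: {t:.1f} C")
for _ in range(200):                     # add ~4 W/m^2, roughly doubled CO2
    t = step(t, YEAR, forcing=4.0)
print(f"forced equilibrium:   {t:.1f} C")
```

In a full general circulation model the same trick is applied to convection, cloud microphysics, and turbulence, each with adjustable coefficients that differ from group to group.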

Collections of the data from the different groups, called multi-model ensembles, have some interesting properties. Most notably, the average of all the models is frequently closer to the observations than any individual model. But does this mean that the average of all the model projections into the future is in fact the best projection? And does the variability in the model projections truly measure the uncertainty? These are unanswerable questions.
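Why an average of imperfect models can beat any single one is easy to demonstrate with a toy Monte Carlo sketch, under the idealized (and, as noted above, overly generous) assumption that each model equals the truth plus an independent error:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUTH = 0.0                  # the (unknown) true value of some climate field
N_MODELS, N_TRIALS = 20, 10_000

# Idealized assumption: model = truth + independent, unbiased error.
models = TRUTH + rng.normal(scale=1.0, size=(N_TRIALS, N_MODELS))

rmse_individual = np.sqrt(np.mean((models - TRUTH) ** 2))
rmse_ensemble = np.sqrt(np.mean((models.mean(axis=1) - TRUTH) ** 2))

print(f"typical single-model RMSE: {rmse_individual:.3f}")   # ~1.0
print(f"multi-model-mean RMSE:     {rmse_ensemble:.3f}")     # ~1/sqrt(20)
```

The roughly 1/√N improvement depends entirely on the independence assumption; because real models share a common lineage and similar constraints, the actual benefit is smaller.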

It may help to reframe the questions in the following way: Does agreement on a particular phenomenon across multiple models with various underlying assumptions affect your opinion on whether or not it is a robust projection? The answer, almost certainly, is yes. Such agreement implies that differences in the model inputs, including approach (e.g., a spectral versus a grid-point model), parameterizations (e.g., different estimates of how moist convective plumes interact with their environment), and computer hardware did not materially affect the outcome, which is thus a reasonable reflection of the underlying physics.

Does such agreement "prove" that a given projection will indeed come to pass? No. There are two main reasons for that. One is related to the systematic errors that are known to exist in models. A good example is the consensus of chemistry models that projected a slow decline in stratospheric ozone levels in the 1980s, but did not predict the emergence of the Antarctic ozone hole because they all lacked the equations that describe the chemistry that occurs on the surface of ice crystals in cold polar vortex conditions–an "unknown unknown" of the time. Secondly, the assumed changes in forcings in the future may not transpire. For instance, concentrations of carbon dioxide are predominantly a function of economics, technology, and population growth, and are much harder to predict than climate more than a few years out.

Model agreements (or spreads) are therefore not equivalent to probability statements. Since we cannot hope to span the full range of possible models (including all possible parameterizations), nor to assess the uncertainty arising from physics about which we so far have no knowledge, no ensemble range can ever serve as a surrogate for a full probability density function of future climate.
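The futility is easy to illustrate with a hypothetical ensemble in which every model shares a systematic bias, the ensemble-wide "unknown unknown"; all numbers below are invented for the purpose.

```python
import numpy as np

rng = np.random.default_rng(1)
TRUTH = 3.0          # true (but unknown) value of, say, climate sensitivity
SHARED_BIAS = -1.0   # a systematic error common to every model
N_MODELS = 20

# Each model = truth + shared bias + a small independent difference.
models = TRUTH + SHARED_BIAS + rng.normal(scale=0.3, size=N_MODELS)

spread = models.std()                    # what the ensemble can "see"
true_error = abs(models.mean() - TRUTH)  # what it cannot

print(f"ensemble spread:       {spread:.2f}")      # small, looks confident
print(f"actual ensemble error: {true_error:.2f}")  # dominated by the bias
```

No statistic computed from the ensemble alone can reveal SHARED_BIAS, which is precisely why the spread cannot be read off as a probability distribution.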

So how should one interpret future projections from climate models? I suggest a more heuristic approach. If models agree that something (global warming and subtropical drying, for instance) is relatively robust, then it is a reasonable working hypothesis that this is a true consequence of our current understanding. If the models fail to agree (as they do in projecting the frequency of El Niño events), then little confidence can be placed in those projections. Additionally, if there is good theoretical and observational backing for the robust projections, then I think it is worth acting under the assumption that they are likely to occur.

Yet demands from policy makers for scientific-looking probability distributions for regional climate changes are mounting, and while there are a number of ways to provide them, all, in my opinion, are equally unverifiable. Therefore, while it is seductive to attempt to corner our ignorance with the seeming certainty of 95-percent confidence intervals, the comfort it gives is likely to be an illusion. Climate modeling might be better seen as a Baedeker for the future, giving some insight into what might be found, rather than a precise itinerary.
