The uncertainty in climate modeling

Simulating the global ecosystem is complex, potentially involving an almost limitless number of variables that describe and relate nature's chemical, physical, and biological processes. The resulting range of possible climate scenarios has led to public confusion about the validity of climate prediction and, more urgently, to delays in appropriate action.

Round 1

Climate models produce projections, not probabilities

"I'm just a model whose intentions are good, Oh Lord, please don't let me be misunderstood," Nina Simone may as well have sung. Models are fundamentally necessary in all sciences. They exist to synthesize knowledge, to quantify the effects of counteracting forces, and most importantly, to make predictions–the evaluation of which is at the heart of the scientific method. If those predictions prove accurate, the models–and more importantly, their underlying assumptions–become more credible.

In an observational science such as the study of climate (or cosmology or evolution), we don't get to do controlled laboratory experiments. We only have one test subject, the planet Earth, and we are able to run only one uncontrolled experiment at a time. It would be unfortunate indeed if this experiment had to run its course before we could say anything about the likely results!

Climate models are amalgams of fundamental physics, approximations to well-known equations, and empirical estimates (known as parameterizations) of processes that either can't be resolved (because they happen on too small a physical scale) or that are only poorly constrained from data. Twenty or so climate groups around the world are now developing and running these models. Each group makes different assumptions about what physics to include and how to formulate their parameterizations. However, their models are all limited by similar computational constraints and have developed in the same modeling tradition. Thus while they are different, they are not independent in any strict statistical sense.

Collections of the data from the different groups, called multi-model ensembles, have some interesting properties. Most notably the average of all the models is frequently closer to the observations than any individual model. But does this mean that the average of all the model projections into the future is in fact the best projection? And does the variability in the model projections truly measure the uncertainty? These are unanswerable questions.
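As a rough illustration of that first property, consider the following sketch in Python (with invented numbers, not output from any real model): twenty hypothetical "models" share the same truth but carry their own biases and noise, and the error of their average tends to be smaller than the error of even the best individual model, simply because the unshared parts of the errors cancel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a "true" observed field at 50 grid points, and 20 model
# simulations that each carry a systematic bias plus unsystematic noise.
obs = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))
biases = rng.normal(0.0, 0.3, size=20)
noise = rng.normal(0.0, 0.3, size=(20, 50))
models = obs + biases[:, None] + noise          # shape: (20 models, 50 points)

def rmse(sim, ref):
    """Root-mean-square error of a simulated field against a reference field."""
    return float(np.sqrt(np.mean((sim - ref) ** 2)))

individual_errors = [rmse(m, obs) for m in models]
ensemble_mean = models.mean(axis=0)

print("best single model RMSE:", round(min(individual_errors), 3))
print("multi-model mean RMSE: ", round(rmse(ensemble_mean, obs), 3))
# The average usually beats even the best model here because the unshared parts
# of the errors cancel -- which says nothing about errors common to all the models.
```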

It may help to reframe the questions in the following way: Does agreement on a particular phenomenon across multiple models with various underlying assumptions affect your opinion on whether or not it is a robust projection? The answer, almost certainly, is yes. Such agreement implies that differences in the model inputs, including approach (e.g. a spectral or grid point model), parameterizations (e.g. different estimates for how moist convective plumes interact with their environment), and computer hardware did not materially affect the outcome, which is thus a reasonable reflection of the underlying physics.

Does such agreement "prove" that a given projection will indeed come to pass? No. There are two main reasons for that. One is related to the systematic errors that are known to exist in models. A good example is the consensus of chemistry models that projected a slow decline in stratospheric ozone levels in the 1980s, but did not predict the emergence of the Antarctic ozone hole because they all lacked the equations that describe the chemistry that occurs on the surface of ice crystals in cold polar vortex conditions–an "unknown unknown" of the time. Secondly, the assumed changes in forcings in the future may not transpire. For instance, concentrations of carbon dioxide are predominantly a function of economics, technology, and population growth, and are much harder to predict than climate more than a few years out.

Model agreements (or spreads) are therefore not equivalent to probability statements. Since we cannot hope to span the full range of possible models (including all possible parameterizations), or to assess the uncertainty of physics about which we so far have no knowledge, it is futile to expect that any ensemble range can ever serve as a surrogate for a full probability density function of future climate.

So how should one interpret future projections from climate models? I suggest a more heuristic approach. If models agree that something (global warming and subtropical drying for instance) is relatively robust, then it is a reasonable working hypothesis that this is a true consequence of our current understanding. If the models fail to agree (as they do in projecting the frequency of El Niño) then little confidence can be placed in their projections. Additionally, if there is good theoretical and observational backup for the robust projections, then I think it is worth acting under the assumption that they are likely to occur.

Yet demands from policy makers for scientific-looking probability distributions for regional climate changes are mounting, and while there are a number of ways to provide them, all, in my opinion, are equally unverifiable. Therefore, while it is seductive to attempt to corner our ignorance with the seeming certainty of 95-percent confidence intervals, the comfort it gives is likely to be an illusion. Climate modeling might be better seen as a Baedeker for the future, giving some insight into what might be found, rather than a precise itinerary.

Not all climate models are created equal

It's exhilarating to see the fruits of climate research achieve such prominence in the media, political debate, and concerns of industrial and municipal stakeholders. As scientists, though, it's incumbent upon us not to mislead the lay audience by blurring the line between methodological investigation and end products ready for consumption.

I should begin by disclosing that as a former project scientist at the National Center for Atmospheric Research, I was tasked with thinking about how to combine data from different climate models into probabilistic projections of regional climate change. This notwithstanding, I wholeheartedly agree with Gavin that these kinds of probabilistic projections aren't appropriate for risk analysis and decision making under uncertainty and won't be for a long time.

The ideal scenario in climate modeling is a situation where all the models are equally good: They each account for all the physical and chemical processes we deem necessary to describe Earth's climate; they all perform satisfactorily down to the finest resolved scale (currently, we have more confidence in the behavior of these models at the continental scale than at the local scale); and we have enough observed data (from climate records, ice-core samples, volcanic eruptions, and other natural phenomena) to prove that their simulations are consistent with what the real experiment (i.e., climate on Earth) is showing us. At that point, we could take the models' projections at face value, weigh the results of every model equally, and use their range to bracket our uncertainty, at least under a given emissions scenario. I'd also be out of a job. Luckily for me, we're far from that point.

As a statistician, I attempt to make sense of multiple models with a probabilistic treatment, which weighs models differently based on their different performances. Say I have data on average precipitation for the last 30 years in the Southwest United States, as well as simulations from 20 different climate models of current and future precipitation in the same region, and I want to know what the expected change in precipitation will be at the end of this century under a specific emissions scenario. I can try to account for the fact that the different models have shown different skill in simulating current precipitation. I can try to formalize the idea that model consensus is generally more reliable than individual predictions. Ideally, I can factor in the similar basic design of all the models and look at other experiments carried out with simpler models or perturbed parameterization to gauge the reliability of their agreement when compared with alternative modeling choices.

Although challenging, and, at present, only at the stage of methodological inquiry, there's value in interpreting what multiple models produce by trying to assign them probabilities. The alternative is that people will use the ensemble average as their estimate and the ensemble standard deviation as a measure of its uncertainty. These quantities are reassuring because they can be easily explained and understood. But when I compute a mean as my estimate and a standard deviation as its uncertainty, I'm assuming that each model is producing independent data, and I'm relying on the expectation that their errors will cancel each other out.
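To make the contrast concrete, here is a hedged sketch with invented numbers: the naive equal-weight summary alongside a simple skill-based weighting of the kind described above. The inverse-squared-error weighting is chosen purely for illustration; it is not the method actually used in any modeling or statistical center.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented example: 20 models' simulated present-day precipitation (mm/day) for a
# region, the observed value, and each model's projected end-of-century change (%).
present = rng.normal(2.0, 0.4, size=20)
observed = 2.1
change = rng.normal(-10.0, 6.0, size=20)

# Naive summary: every model treated as an equally good, statistically independent estimate.
equal_mean = change.mean()
equal_std = change.std(ddof=1)

# Skill-weighted summary: models that reproduce the observed present-day value more
# closely get more weight. Inverse-squared-error weights are one illustrative choice.
skill = 1.0 / ((present - observed) ** 2 + 1e-6)
weights = skill / skill.sum()
weighted_mean = np.sum(weights * change)
weighted_std = np.sqrt(np.sum(weights * (change - weighted_mean) ** 2))

print(f"equal weights:  {equal_mean:+.1f}% +/- {equal_std:.1f}%")
print(f"skill weighted: {weighted_mean:+.1f}% +/- {weighted_std:.1f}%")
# Neither line is a verified probability statement; both summaries inherit whatever
# errors the models have in common, which no amount of weighting can remove.
```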

More complicated is a probability distribution function, which characterizes the range of possible outcomes, assigns relatively higher or lower probabilities to subintervals, and may distribute the probability asymmetrically within the range. Such an analysis is also more informative, but should I go as far as offering the results to the water managers in Southern California so they can think about future needs and policies for water resource management?

I wouldn't offer my probabilistic projections as a definitive estimate of the area's expected temperature or precipitation; not in the same way the National Oceanic and Atmospheric Administration could, until recently, provide the public with "climate normals" based on a large sample of observed records of a supposedly stationary climate. It would be deceptive to present these estimates as having the same reliability that's needed to make structural decisions about designing a floodplain, bridge, or dam.

I think, though, that there's valuable information in a "conditional probability distribution function" if that's the best representation of the uncertainty given the data we have nowadays. The caveat is that the range of uncertainty is likely to be larger than what we see in models, which typically use conservative parameterizations. Additionally, we don't know which emissions scenario will occur over the next decades, further widening the potential realm of possible climate outcomes.

Meanwhile, in the real world, I would echo Gavin: Many decisions can be made by looking at qualitative agreement, general tendencies, and model consensus without the need for quantitative description of the uncertainties. Temperatures are warming; heat waves will intensify; sea level is rising; and arid regions will dry further. So planning for worst-case scenarios is only prudent.

Helping vulnerable populations access aid centers in the case of extreme heat events, dissuading construction on coastlines, conserving water resources, and developing drought-resistant crops are adaptation measures we should pursue regardless of the exact magnitude of the changes in store for us.

In planning for the future, make room for changing forecasts

I work at the model/reality interface and am happy to join you in discussing our relationship with our models. As in any relationship, it is important to distinguish planning what we will do with the models we actually know, from dreaming about what we could do if only we could meet a group of Claudia's "end of the rainbow" models. Seeking a perfect relationship that would bring decision-support bliss might hinder the development of our warts-and-all models, which, while improving, are in no way converging towards perfection. Our models provide our best vision of the future: Can we accept them for what they are and still develop a relationship with them that works?

Gavin notes that the average of many models is frequently "closer to the observations" than any individual model. While true, this doesn't suggest that such an average can support good decision making. Even when the model is perfect, averaging can be a bad idea.

Consider the parable of the three non-Floridian statisticians who cannot swim but desperately need to cross a river. Each has a forecast of the change in the river’s depth from one bank to the other; in each forecast, there's a different point at which the river is so deep that the statisticians will drown. According to the average of their forecasts, it’s safe to ford the river. Even though the average is closer to the actual depth profile of the river than any one of the forecasts, action based on that information is not safe–if they cross the river, they’ll drown. This suggests there is something fishy about the way we interpret "closer to the observations" to mean "best."
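To render the decision-relevant part of the parable numerically (with made-up depth profiles whose numbers carry no significance beyond illustration): each forecast contains a stretch deep enough to drown in, and so does the real river, yet the average of the forecasts stays below the drowning threshold everywhere.

```python
import numpy as np

drown_depth = 1.8                       # metres: deeper than this, a non-swimmer drowns
x = np.linspace(0.0, 30.0, 301)         # distance across the river, in metres

def depth_profile(channel_centre):
    """Invented depth profile: a shallow river with one deep, narrow channel."""
    return 0.8 + 1.6 * np.exp(-((x - channel_centre) ** 2) / 4.0)

truth = depth_profile(15.0)                                  # the actual river
forecasts = [depth_profile(c) for c in (10.0, 16.0, 22.0)]   # each misplaces the channel
average = np.mean(forecasts, axis=0)

print("deepest point, each forecast: ", [round(f.max(), 2) for f in forecasts])
print("deepest point, average:       ", round(average.max(), 2))
print("deepest point, actual river:  ", round(truth.max(), 2))
print("average says crossing is safe:", bool(average.max() < drown_depth))
print("the river says otherwise:     ", bool(truth.max() > drown_depth))
```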

I am curious where the idea came from that imperfect computer models, each running on 2008-era hardware, could yield relevant probability forecasts. The desire for decision-relevant probabilities is strong, in part because they would ease the lives of decision makers and politicians. If we had objective probabilities, we could adopt a predict-optimize-relax approach to decision making: We predict the probability of each possible course of events; we optimize our goals while taking into account our tolerance to risk; we act based on our models' vision of potential futures; and then relax, believing ourselves to be "rational agents." Taking such an approach may help us make it through the night, but it's unlikely to achieve our long-term economic or environmental goals, as we are holding our models to a standard they cannot meet.

There are many textbook examples of the predict-optimize-relax approach, and subfields of mathematics dedicated entirely to it, but there are few real-world examples. Given information from past automobile accidents, auto insurance companies can accurately model losses from future accidents using only the age and gender of a large group of drivers. But there are few, if any, examples of purely model-based probability forecasts supporting good decision-making in problems like climate, where we can't learn from our mistakes. Even the successes of everyday engineering such as bridge and aircraft construction owe much to the use of safety factors, which engineers have discovered and refined over time by watching similar projects fail repeatedly. Given only one planet, this option is not available to us.

Today's climate science can support an assess-hedge-monitor approach to risk management in which we assess our vulnerability along plausible pathways through the future, hedge and reinforce weaknesses wherever viable, but also monitor and maintain the flexibility to revise policy and infrastructure as we learn more.

Climate science will be of greater relevance to policy and decision making once it dispenses with the adolescent pickup lines of "climate-proofing" and "rational action" and focuses instead on achievable aims of real value. To provide a number (even a probability) where science has only qualitative insight is to risk the credibility of science.

Can we shift the focus of climate science from "what will happen" towards "where is our vision of what will happen weakest," "how can we best improve it," and "where are we most vulnerable"? Can we clarify what information from our current models we expect will hold as our models improve, and what things we expect to change? Given the urgency of the problem, climate science will always be developing methodology in real time. How can we best clarify that fact to the consumers of climate science so that they can view advances in the science as a good thing, not as a broken promise, probabilistic or otherwise? Can we embrace the inconvenient ignorance of science as we face the inconvenient truths of climate change?

Probabilities aren’t so scary

Policy makers and planners now accept the reality of man-made climate change and are turning their thoughts toward adaptation. Should they make their decisions now, or wait to see if better climate models can provide more precise information? The modeling community strives continuously to improve their models, but experience also teaches us not to hold our breath for quantum leaps in model performance, or in our understanding of how to constrain future projections with historical information. Modeling centers have worked incredibly hard on this during the past decade, yet the range of projections hasn't changed much.

There's no obvious reason to expect more than gradual progress, and besides, some planners will not have the luxury of waiting. For example, regional flood defenses and reservoir capacity may only have been designed to address risks consistent with the current climate. Developments in major infrastructure like this have to be planned long term and decided according to time frames which may be determined by other drivers of risk (e.g. urban development), as well as climate change. So what do Claudia Tebaldi's Southern California water managers do if their planning cycle dictates that they need to make a decision now?

Presumably, they act on the best information available. Some of that information, such as future emissions and land-use change, isn't determined by climate-system processes. But much of it is, including changes in the frequency and variance of precipitation, the proportion of dry days, and changes in river flow and runoff. For that part, some collaborating set of experts–in this case climate scientists, hydrologists, and statisticians–are best placed to provide the information.

Should those experts refuse the task (citing, for example, concern about their scientific reputations) and leave the water managers to assess the risks themselves, based on their less expert assessments of climate factors? No. The relevant experts should take responsibility for their part of the decision process, ideally offering a probability distribution, which describes the relative risk of different possible outcomes.

A common criticism of probability distributions is that because they can't be verified, they can't be trusted. Indeed, the first part of this statement is true, but the second doesn't follow because we can't think about probabilistic climate predictions in the same way we think about probabilistic weather forecasts. In principle, the latter can be tested over many historical forecast cycles and adjusted to be consistent with observed frequencies of past weather behavior (the "frequentist" interpretation of statistical forecasting). Climate probabilities, however, are essentially Bayesian: They represent the relative degree of belief in a family of possible outcomes, taking into account our understanding of physics, chemistry, observational evidence, and expert judgment.

The scientific credibility of a probability distribution function therefore depends not on having near-perfect, error-free models, but on how well the methodology used to sample the uncertainties and convert the model results into probabilities summarizes the risk of alternative outcomes. This is by no means an easy hurdle, but one I believe we are capable of overcoming.

Of course, the probabilities must be accompanied by sensitivity analyses that educate users not to interpret the probabilities to too high a degree of precision. But provided this is done, they can be an effective means of support for decisions that have to be made soon.

However, I agree with Lenny Smith that room should be made for changing forecasts in the future. Climate scientists and statisticians need to work closely with planners to assess, for example, the relative costs and benefits of committing to a long-term investment immediately, compared with an alternative approach of adapting in stages, given the hope that less uncertain information will emerge in the future.

Finally, it's important to note that planners already deal with uncertainty–the wide range of future emissions scenarios being a prime example. So even if a climate scientist did (wrongly) claim to be able to quantify the risk of exceeding some critical threshold to some excessive degree of precision, the planner would still be faced with a family of different answers, corresponding to different socioeconomic assumptions. For this reason, I find far-fetched the idea that a planner is going to rush off with a climate scientist's probability distribution and make an erroneous decision because they assumed they could trust some percentile of the distribution to its second decimal place.

This doesn't mean that we can relax. But given the importance of the problem, perhaps we can be less frightened of telling users what we believe–and perhaps we can credit them with the intelligence not to interpret our current beliefs as everlasting promises. The issues require careful communication, but hey, we can make space in our diaries for that.

Round 2

Climate prediction works well for some variables and poorly for others

When deciding what type of information to give users, climate scientists need to be discerning. For example, in the forthcoming U.K. Climate Impacts Program (UKCIP08) scenarios that Lenny refers to (and in which I'm involved), we plan to supply information on changes in a number of climate variables, at a range of space and time scales, for different periods during the twenty-first century. But what level of information can be given? Can we supply formal probability distributions for all variables that users request? Or, for some variables, are the results too dependent on debatable expert assumptions to support anything more than a "best guess" and an uncertainty range?

In practice, the answer is likely to differ from variable to variable, possibly even from season to season. Probability distributions for average future changes in surface temperature and precipitation, for instance, may be within reach, because the main processes expected to drive such changes are captured in the current generation of climate models. On the other hand, changes in, say, surface evaporation or wind speed are too dependent on basic physical assumptions, which vary between different models–the problem being that we currently lack the detailed observations and understanding needed to distinguish which of the different assumptions is correct. In the latter case, we might decline the challenge of probabilistic prediction. The important thing is to have quantitative methods of determining where to draw the line, rather than relying purely on expert judgement.

In UKCIP08, for example, we are handling this problem by combining results from two different types of ensemble data: One is a systematic sampling of the uncertainties in a single model, obtained by perturbing uncertain parameters that control how the model represents key climate processes; the other is a multi-model ensemble obtained by pooling results from alternative models developed at different international centers. We test the credibility of our results by several methods, including using members of the first ensemble to "predict" the results of the second. For some climate variables this works reasonably well, implying that probabilities can reasonably be estimated. For other variables we find much wider divergence between the two types of ensemble data, implying that probabilistic predictions aren't yet feasible.
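Schematically, and with invented numbers rather than the actual UKCIP08 procedure, this kind of cross-check amounts to asking how often the independently developed models fall inside the spread of the perturbed-parameter ensemble.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented projected changes (%) in one climate variable for one region:
ppe = rng.normal(-8.0, 4.0, size=280)   # perturbed-parameter ensemble from a single model
mme = rng.normal(-6.0, 7.0, size=12)    # multi-model ensemble from other centres

def coverage(ppe_values, mme_values, lo=5, hi=95):
    """Fraction of multi-model results that fall inside the PPE's central 90% range."""
    low, high = np.percentile(ppe_values, [lo, hi])
    return np.mean((mme_values >= low) & (mme_values <= high))

print(f"{coverage(ppe, mme):.0%} of the other centres' results lie inside the PPE 5-95% range")
# High coverage suggests the sampled parameter uncertainty roughly spans the structural
# differences between models for this variable; low coverage warns that a probabilistic
# interpretation of the spread isn't yet justified.
```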

The spatial scale of the predictions is often cited as a key factor in determining the credibility of climate predictions. But I don't think it's that simple. While uncertainty does increase at finer scales (e.g., 25-kilometer grid squares compared to 1,000-kilometer grid squares), the fact remains that much of the total uncertainty in regional changes actually arises from the larger scales. We know this from "downscaling" studies made by using high-resolution regional climate models to add local detail to larger-scale changes derived from coarser-resolution global models. The uncertainty in the downscaled detail isn't trivial, but tends to be smaller than the uncertainty attached to the global model predictions. It might well be that a probability distribution of wind speed at a relatively large scale is actually less credible than a probability distribution of temperature at some much finer scale. We can only judge credibility by assessing whether climate models capture the relevant processes on a case-by-case basis.

UKCIP08 will provide a statistical "weather generator," which will allow users to see what daily (or even hourly) sequences of weather could look like at specified locations, given changes in basic aspects of climate such as average temperature, frequency of dry days, and average precipitation on wet days. In order to provide such time series, the weather generator assumes that local climate variability remains unchanged in the future, largely because climate models aren't yet up to the job of providing reliable predictions of changes in variability. It follows that such time series should be regarded as possible realizations of how future weather might look, not formal predictions of how we expect future weather to look.
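As a bare-bones illustration of the idea (a toy Richardson-type generator with made-up parameter values, not the UKCIP08 weather generator itself): a two-state Markov chain decides wet versus dry days, a gamma distribution supplies wet-day amounts, and the climate-change signal enters only through the prescribed dry-day frequency and mean wet-day rainfall, while the day-to-day variability structure is held fixed.

```python
import numpy as np

rng = np.random.default_rng(3)

def generate_daily_precip(n_days, p_dry, mean_wet_mm, wet_persistence=0.7, gamma_shape=0.8):
    """Toy daily precipitation generator (Richardson-type sketch).

    p_dry           -- target long-run fraction of dry days (part of the climate signal)
    mean_wet_mm     -- mean rainfall on a wet day, in mm (part of the climate signal)
    wet_persistence -- P(wet today | wet yesterday); held fixed, so the variability
                       structure is assumed unchanged in the future
    gamma_shape     -- shape of the wet-day amount distribution; also held fixed
    """
    p_wet = 1.0 - p_dry
    # Choose P(wet | dry yesterday) so the two-state Markov chain has the target wet fraction.
    p_wet_after_dry = p_wet * (1.0 - wet_persistence) / (1.0 - p_wet)
    precip = np.zeros(n_days)
    wet = rng.random() < p_wet          # initial state drawn from the target frequency
    for day in range(n_days):
        wet = rng.random() < (wet_persistence if wet else p_wet_after_dry)
        if wet:
            # Gamma with mean equal to mean_wet_mm (scale chosen accordingly).
            precip[day] = rng.gamma(gamma_shape, mean_wet_mm / gamma_shape)
    return precip

# Present-day statistics versus a hypothetical future with more dry days but
# heavier rain on the days when it does rain.
today = generate_daily_precip(90, p_dry=0.60, mean_wet_mm=4.0)
future = generate_daily_precip(90, p_dry=0.70, mean_wet_mm=4.6)
print("90-day totals (mm):", round(today.sum(), 1), "->", round(future.sum(), 1))
```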

This doesn't mean that the results aren't useful: Some users will need to go with the best information they can get now. But it does mean that the potential benefits of waiting for better information from future generations of models need to be made very clear. Here, the contextual knowledge highlighted by Gavin, the collaboration advocated by Claudia, and the awareness of model limitations emphasized by Lenny all have an important role to play. The gap between what users want and what modellers can provide will depend strongly on what type of information is being asked for. Increased engagement between the two groups is needed to ensure that the data available is neither underestimated nor oversold.

Interpreting climate predictions should be collaborative

We all seem to agree that our state-of-the-art models aren't satisfactory representations of climate on Earth–at least not to the degree required to make decisions with them. We also agree that people are concerned with climate change and eager to incorporate information about future changes in their decision making, and we're conscious of the need to relate our research agenda and findings to real-world demands. Finally, there's consensus that we cannot look at climate forecasts–in particular, probabilistic forecasts–the same way we view weather predictions, and none of us would sell climate-model output, either at face value or after statistical analysis, as a reliable representation of the complete range of possible futures.

Beyond this common ground, we fall on different points of the spectrum between James's pragmatic approach, where he proposes giving decision makers information as our "best guess" about future outcomes nonetheless, and Lenny's highly skeptical position–namely, that there's no hope of approximating the real world in any useful sense. (Interestingly, Lenny turns the issue on its head and proposes we work at characterizing what we cannot say rather than what we can.) Gavin and I are somewhere in between. Gavin still finds qualitative value in a reasoned interpretation of model output, while I claim further that there's still value in quantifying uncertainty even if the results aren't distributed for public consumption.

The reader who doesn't dabble in climate modeling or statistics is probably asking herself, "What am I to make of all this?" To which I would say, "That's exactly what I want you to think!"

Let me explain: If I can say anything for sure, it's that I don't want anyone to take a precooked climate projection–whether a single model or a multi-model ensemble, probabilistic or not–and run with it. Any decision will be best served by looking at the available observational and modeled information and listening to the opinion of climate modelers and climatologists. The experts will be able to form an integrated evaluation based on changes already observed, the processes known to influence the regional climate of interest, and projections from those models that have demonstrated accuracy in describing that region's climate–all to a degree consistent with the kind of projection required. (For example, if we're interested in changes in large average quantities, we may be willing to set the bar lower for our models than if we're interested in changes in extremes. If we're looking at a flat, large region in the middle of a continent, we may have better luck than if we're looking at a coastal region with complex topography.)

After careful synthesis of what's available to assess specific regional climate change, we may go as far as presenting a probability distribution based on this information–if we think the statistical assumptions are supported by the data. Why not? But in all of this, there's no substitute for clear, two-way communication between suppliers and users of the information–both to guide and qualify.

Meanwhile, in the convenient isolation of our research centers, I hope we pursue the obvious–better models and ways to represent the data we gather from them in a statistical framework–while also designing experiments with our models that serve the purposes Lenny suggests. Rather than pushing exclusively for ever-more complex models with ever-higher resolutions, we should think of ways to explore model errors, dependencies, and sensitivities.

I'd even propose a totally selfless design that takes the point of view of a scientist 20 years from now who, endowed with 20 years of observational records, looks back and says, "I wish those 2008 simulations had tried to do this and that; I could assess them now and use the validation to learn what that modeled process is really worth." By doing so, we may get closer to a full characterization of the uncertainties that we know exist.

As for the unknown unknowns... There's no way around those. But isn't that an inescapable characteristic of our ever-evolving scientific enterprise–not to mention most significant real-life decisions?

Tacit knowledge gets lost in translation with climate modeling

Reading this discussion, it's safe to say that any policy maker would get a pretty similar idea of what climate models can tell them, no matter which of us he talked to. I have a few quibbles that might be good for an after-work discussion at the pub, but they are small grievances in the greater scheme of things.

If we are all pretty much agreed about the conditional utility and limitations of climate models, why do the media still interpret climate model outputs as exact predictions? Why are large-scale projections discussed as though they were local forecasts? Or worse, why the (occasional) blind dismissal of anything bearing the taint of "modeling"? This problem is significantly more complex than a Bayesian analysis of the 22 models contributing to the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report.

I would argue that this problem is not fundamental to climate models, but is a symptom of something more general: how scientific information gets propagated beyond the academy. What we have discussed here can be broadly described as tacit knowledge–the everyday background assumptions that most practicing climate modelers share but that rarely get written down. It doesn't get into the technical literature because it's often assumed that readers know it already. It's not included in popular science summaries because it's too technical. It gets discussed over coffee, or in the lab, or in seminars, but that is a very limited audience. Unless policy makers or journalists specifically ask climate modelers about it, it's the kind of information that can easily slip through the cracks.

Shorn of this context, model results have an aura of exactitude that can be misleading. Reporting those results without the appropriate caveats can then provoke a backlash from those who know better, lending the whole field an aura of unreliability.

So, what should be done? Exercises like this discussion are useful, and should be referenced in the future. But there's really no substitute for engaging more directly with the people that need to know. This may not be as much fun as debugging a new routine (OK, that isn't much fun either), and it takes different skills than those usually found in a modeling center, i.e. clearly communicating at the appropriate technical level and with an appreciation of listeners' needs. Not all modelers need to have these skills, but we do need enough spokespersons and envoys to properly represent all of us.

Distribution of modeling information is expanding dramatically–especially through efforts by the IPCC and the freely available online archives of model outputs. The number of users and interested parties is therefore increasing all the time. Consequently, modeling centers are spending greater amounts of time both packaging their output and explaining it.

In the same way that the public has become more adept at dealing with probabilistic weather forecasts, it will get more used to dealing with climate model outputs. For the time being, however, an increased amount of hand-holding will be necessary.

Climate modeling is still an abstraction of reality

While I agree strongly with Gavin on the need to improve the communication of science, I doubt we can blame the media for many problems climate science seems set to bring upon itself. My understanding is that this November, the U.K. Climate Impacts Program (UKCIP), supported by the Met Office, will offer hourly information on a 5-kilometer grid across Britain, suggesting we have decision-relevant probabilistic weather information stretching out to 2060 and beyond "which will be of use to any user who is interested in daily weather variables, thresholds and sequences, or extreme events." (See "What is UKCIP08?") Commercial entities aim to sell even more detailed information to the energy sector and others exposed to climate risk. Insofar as many decisions must be made now, the pivotal question is whether today's models are able to provide additional relevant information, and if so, on what scales?

Let me offer an analogy: Suppose a cup of coffee slips from my hand. I know enough physics to predict that it will fall to the floor. I'm also confident that if I reduce the distance it falls, it's less likely to shatter and create a mess. We know with this kind of certainty that the greenhouse effect has initiated global warming. Yet, I also understand that I don't know enough physics to predict how many shards will result if the cup shatters or where exactly the coffee will splatter. Nor can I usefully estimate a "critical height" below which the cup has an acceptably small chance of shattering. The fact that I know I don't know these things is valuable information for evidence-based decision support: It keeps me from making overconfident plans based on the "best available information." Instead, I act based upon what I do know–the cup is likely to shatter, and therefore, I plan to jump left.

But what if the physicist next to me offers to sell me a probability distribution from his state-of-the-art "falling fluid container" model? His implied probabilities describe whether or not my cup will shatter, what the shards will do if it does, and the trajectory of various drops of coffee after the cup lands (so we know which direction to move to avoid coffee splatter). High-resolution MPEG movies of the model are available for a small extra charge, and they look great!

Should I pay attention to him? Or better, how do I decide whether or not to pay attention to him? Is it "highly skeptical" for me to ask for scientific evidence that his model can inform the decision I have to make? I do ask, and as it turns out, he has simulated the behavior of falling ping-pong balls (where the model works rather well) and falling water balloons (where the model fails to reproduce observed motion even if the balloon doesn't break). Watching the MPEG movies, I realize that his model is unable to reproduce observed phenomena relevant to my decision: The model cup always breaks into three model shards, and the model splash pattern looks unlike any of the many splash patterns I have observed in the past. The point, of course, is that the quantitative probabilities implied by his model add no value for determining which way to move to avoid splashing coffee.

So, where do we draw the corresponding line for year 2008 climate models? And on which side of that line does high-resolution, multivariate weather information on extreme events of interest to the British energy industry fall?

The latest report from the Intergovernmental Panel on Climate Change notes shortcomings that call into question the decision-support relevance of climate-model output on the space and time scales advertised by UKCIP, and calculating implied probability distribution functions doesn't solve issues of model inadequacy: We often see that there are relevant phenomena our models simply cannot simulate, and thus we know that the probabilities our models imply are not decision-relevant, even though we don't yet know how to fix the models.

This isn't a question of being wrong in the second decimal place–implied probabilities from inadequate models can fundamentally mislead. I applaud the Met Office's groundbreaking attempts to develop new methodology for casting model output as decision-relevant; I also ask for scientific evidence that today's model output is realistic enough to be thought relevant in each particular application.

Do we believe that today's models can provide decision-relevant probabilities at a resolution of tens of square kilometers for the year 2060–or even 2020 for that matter? No. But that does not mean we believe there is no value in climate modeling. Since the climate is changing, we can no longer comfortably base our decisions on past observations. Therefore, we must incorporate insights from our models as the best guide for the future. But to accept a naive realist interpretation of model behaviors cast as a Bayesian probability distribution is, as mathematician and philosopher Alfred North Whitehead warned, to mistake an abstract concept for concrete reality.

Until we can establish a reasonable level of internal consistency and empirical adequacy, declining to interpret model-based probabilities as decision-relevant probabilities isn't high skepticism, but scientific common sense.

Round 3
