Policy makers and planners now accept the reality of man-made climate change and are turning their thoughts toward adaptation. Should they make their decisions now, or wait to see whether better climate models can provide more precise information? The modeling community strives continuously to improve its models, but experience also teaches us not to hold our breath for quantum leaps in model performance, or in our understanding of how to constrain future projections with historical information. Modeling centers have worked incredibly hard on this during the past decade, yet the range of projections hasn't changed much.
There's no obvious reason to expect more than gradual progress, and besides, some planners will not have the luxury of waiting. For example, regional flood defenses and reservoir capacity may only have been designed to address risks consistent with the current climate. Major infrastructure developments like these have to be planned for the long term and decided on time frames that may be set by other drivers of risk (e.g. urban development), as well as by climate change. So what do Claudia Tebaldi's Southern California water managers do if their planning cycle dictates that they need to make a decision now?
Presumably, they act on the best information available. Some of that information, such as future emissions and land-use change, isn't determined by climate-system processes. But much of it is, including changes in the frequency and variance of precipitation, the proportion of dry days, and changes in river flow and runoff. For that part, a collaborating set of experts – in this case climate scientists, hydrologists, and statisticians – is best placed to provide the information.
Should those experts refuse the task (citing, for example, concern about their scientific reputations) and leave the water managers to assess the risks themselves, based on their less expert assessments of climate factors? No. The relevant experts should take responsibility for their part of the decision process, ideally offering a probability distribution, which describes the relative risk of different possible outcomes.
A common criticism of probability distributions is that because they can't be verified, they can't be trusted. The first part of this statement is true, but the second doesn't follow, because probabilistic climate predictions cannot be thought of in the same way as probabilistic weather forecasts. In principle, the latter can be tested over many historical forecast cycles and adjusted to be consistent with observed frequencies of past weather behavior (the "frequentist" interpretation of statistical forecasting). Climate probabilities, however, are essentially Bayesian: They represent the relative degree of belief in a family of possible outcomes, taking into account our understanding of physics, chemistry, observational evidence, and expert judgment.
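To make the Bayesian reading concrete, here is a purely illustrative sketch of how a belief distribution over a climate quantity could be formed. Every number is invented: the quantity (a regional precipitation change), the expert prior, and the observational constraint are hypothetical stand-ins, not any methodology from the text.

```python
import numpy as np

# Illustrative only: a Bayesian belief distribution over a hypothetical
# regional precipitation change (% per decade). All numbers are invented.

x = np.linspace(-10, 10, 2001)          # candidate outcomes (% change)
dx = x[1] - x[0]

# Prior: expert judgment, deliberately broad and weakly informative.
prior = np.exp(-0.5 * (x / 5.0) ** 2)
prior /= prior.sum() * dx

# Likelihood: how well each candidate outcome fits an assumed noisy
# observational trend estimate of -2% with a standard error of 3%.
likelihood = np.exp(-0.5 * ((x - (-2.0)) / 3.0) ** 2)

# Posterior: a relative degree of belief, not a verified frequency.
posterior = prior * likelihood
posterior /= posterior.sum() * dx

# Probability of exceeding a planning-relevant threshold, e.g. a
# drying of more than 4% per decade.
p_dry = posterior[x < -4.0].sum() * dx
print(f"P(change < -4%) = {p_dry:.2f}")
```

The point of the sketch is the interpretation, not the arithmetic: the posterior encodes how strongly we believe in each outcome given both expert judgment and evidence, which is exactly the kind of object a planner can act on even though no archive of past "verifying" climates exists.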
The scientific credibility of a probability distribution therefore depends not on having near-perfect, error-free models, but on how well the methodology used to sample the uncertainties and convert the model results into probabilities summarizes the risk of alternative outcomes. This is by no means an easy hurdle, but it is one I believe we can clear.
Of course, the probabilities must be accompanied by sensitivity analyses that educate users not to interpret the probabilities to too high a degree of precision. But provided this is done, they can be an effective means of supporting decisions that have to be made soon.
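A sensitivity analysis of this kind can be very simple. The hypothetical sketch below (same invented numbers as any toy example: an assumed observational constraint of -2% ± 3%, and a subjective prior whose width we vary) shows how a reported percentile moves when the prior is widened, which is precisely the information that should stop a user trusting the second decimal place.

```python
import numpy as np

# Illustrative sensitivity check: how much does a reported percentile
# move when the (subjective) prior is widened? All numbers are invented.

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
obs_mean, obs_sd = -2.0, 3.0   # assumed observational constraint

def percentile_5(prior_sd):
    """5th percentile of the posterior for a given prior width."""
    prior = np.exp(-0.5 * (x / prior_sd) ** 2)
    like = np.exp(-0.5 * ((x - obs_mean) / obs_sd) ** 2)
    post = prior * like
    post /= post.sum() * dx
    cdf = np.cumsum(post) * dx
    return x[np.searchsorted(cdf, 0.05)]

for sd in (3.0, 5.0, 8.0):
    print(f"prior sd = {sd}: 5th percentile = {percentile_5(sd):+.1f}%")
```

Running such a check across plausible priors (or model weightings) gives the user a range for each percentile rather than a single falsely precise value.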
However, I agree with Lenny Smith that room should be made for changing forecasts in the future. Climate scientists and statisticians need to work closely with planners to assess, for example, the relative costs and benefits of committing to a long-term investment immediately, compared with an alternative approach of adapting in stages, given the hope that less uncertain information will emerge in the future.
Finally, it's important to note that planners already deal with uncertainty–the wide range of future emissions scenarios being a prime example. So even if a climate scientist did (wrongly) claim to be able to quantify the risk of exceeding some critical threshold to some excessive degree of precision, the planner would still be faced with a family of different answers, corresponding to different socioeconomic assumptions. For this reason, I find far-fetched the idea that a planner is going to rush off with a climate scientist's probability distribution and make an erroneous decision because they assumed they could trust some percentile of the distribution to its second decimal place.
This doesn't mean that we can relax. But given the importance of the problem, perhaps we can be less frightened of telling users what we believe–and perhaps we can credit them with the intelligence not to interpret our current beliefs as everlasting promises. The issues require careful communication, but hey, we can make space in our diaries for that.