By James Murphy, April 22, 2008
When deciding what type of information to give users, climate scientists need to be discerning. For example, in the forthcoming U.K. Climate Impacts Program (UKCIP08) scenarios that Lenny refers to (and in which I'm involved), we plan to supply information on changes in a number of climate variables, at a range of space and time scales, for different periods during the twenty-first century. But what level of information can be given? Can we supply formal probability distributions for all variables that users request? Or are the results too dependent on debatable expert assumptions for some variables, hence precluding predictions that are more than a "best guess" and an uncertainty range?
In practice, the answer is likely to differ from variable to variable, possibly even from season to season. Probability distributions for average future changes in surface temperature and precipitation, for instance, may be within reach, because the main processes expected to drive such changes are captured in the current generation of climate models. On the other hand, changes in, say, surface evaporation or wind speed are too dependent on basic physical assumptions, which vary between models; the problem is that we currently lack the detailed observations and understanding needed to determine which of those assumptions is correct. In the latter case, we might decline the challenge of probabilistic prediction. The important thing is to have quantitative methods of determining where to draw the line, rather than relying purely on expert judgement.
In UKCIP08, for example, we are handling this problem by combining results from two different types of ensemble data: One is a systematic sampling of the uncertainties in a single model, obtained by perturbing uncertain parameters that control the model's representation of climate processes; the other is a multi-model ensemble obtained by pooling results from alternative models developed at different international centers. We test the credibility of our results by several methods, including using members of the first ensemble to "predict" the results of the second. For some climate variables this works reasonably well, implying that credible probabilities can be estimated. For other variables we find much wider divergence between the two types of ensemble data, implying that probabilistic predictions aren't yet feasible.
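The spirit of this cross-check can be conveyed with a toy sketch. Here, purely for illustration, a perturbed-parameter ensemble defines a predictive range for a variable, and we ask what fraction of independently developed models fall inside its central 90% interval; the numbers, the simple percentile test, and the variable choice are all hypothetical, not UKCIP08 data or methodology.

```python
# Toy cross-validation of two ensemble types: does the spread of a
# perturbed-parameter ensemble "predict" where multi-model results land?
# All values are made-up illustrative warming figures (degrees C).
perturbed_physics = [2.1, 2.4, 2.6, 2.8, 3.0, 3.1, 3.3, 3.6, 3.9, 4.2]
multi_model = [2.5, 2.9, 3.4, 3.8, 4.6]

def central_range(sample, coverage=0.9):
    """Central interval of the sample, e.g. roughly the
    5th-95th percentile band when coverage is 0.9."""
    s = sorted(sample)
    lo_q = (1.0 - coverage) / 2.0
    n = len(s)
    lo = s[max(0, int(lo_q * n))]
    hi = s[min(n - 1, int((1.0 - lo_q) * n))]
    return lo, hi

lo, hi = central_range(perturbed_physics)
inside = [x for x in multi_model if lo <= x <= hi]
fraction = len(inside) / len(multi_model)
print(f"{fraction:.0%} of multi-model results fall inside {lo}-{hi} C")
# -> 80% of multi-model results fall inside 2.1-4.2 C
```

When most of the second ensemble lands inside the first ensemble's range, the probability distribution looks defensible; a variable where many multi-model results fall well outside it would be a candidate for declining probabilistic prediction.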
The spatial scale of the predictions is often cited as a key factor in determining the credibility of climate predictions. But I don't think it's that simple. While uncertainty does increase at finer scales (e.g., 25-kilometer grid squares compared to 1,000-kilometer grid squares), the fact remains that much of the total uncertainty in regional changes actually arises from the larger scales. We know this from "downscaling" studies that use high-resolution regional climate models to add local detail to larger-scale changes derived from coarser-resolution global models. The uncertainty in the downscaled detail isn't trivial, but tends to be smaller than the uncertainty attached to the global model predictions. It might well be that a probability distribution of wind speed at a relatively large scale is actually less credible than a probability distribution of temperature at some much finer scale. We can only judge credibility by assessing whether climate models capture the relevant processes on a case-by-case basis.
UKCIP08 will provide a statistical "weather generator," which will allow users to see what daily (or even hourly) sequences of weather could look like at specified locations, given changes in basic aspects of climate such as average temperature, frequency of dry days, and average precipitation on wet days. In order to provide such time series, the weather generator assumes that local climate variability remains unchanged in the future, largely because climate models aren't yet up to the job of providing reliable predictions of changes in variability. It follows that such time series should be regarded as possible realizations of how future weather might look, not formal predictions of how we expect future weather to look.
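A minimal sketch of how such a weather generator works may help fix ideas. This is illustrative only: the Bernoulli wet/dry draw, the gamma distribution for wet-day amounts, and all parameter values are assumptions for the sketch, not the UKCIP08 implementation. The key point it encodes is the one above: a climate scenario changes only basic aspects such as wet-day frequency and mean wet-day precipitation, while the variability structure (the shape of the distribution) is held fixed.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def generate_daily_precip(n_days, p_wet, mean_wet_amount, shape=0.8):
    """Toy daily precipitation series (mm): each day is wet with
    probability p_wet, and wet-day amounts follow a gamma distribution.
    The gamma *shape* is held fixed across scenarios, mirroring the
    assumption that local variability is unchanged in the future;
    a scenario only rescales p_wet and mean_wet_amount."""
    series = []
    for _ in range(n_days):
        if random.random() < p_wet:
            # random.gammavariate(alpha, beta) has mean alpha * beta,
            # so beta = mean / shape gives the requested mean amount
            series.append(random.gammavariate(shape, mean_wet_amount / shape))
        else:
            series.append(0.0)
    return series

# Present-day climate vs. a drier-but-more-intense scenario (numbers invented)
baseline = generate_daily_precip(90, p_wet=0.40, mean_wet_amount=5.0)
scenario = generate_daily_precip(90, p_wet=0.32, mean_wet_amount=6.0)
print(f"baseline: {sum(d > 0 for d in baseline)} wet days, {sum(baseline):.0f} mm total")
```

Each run of such a generator is one possible realization of future weather consistent with the prescribed averages, which is exactly why the output should be read as "what the weather could look like," not as a forecast of any particular sequence.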
This doesn't mean that the results aren't useful: Some users will need to go with the best information they can get now. But it does mean that the potential benefits of waiting for better information from future generations of models need to be made very clear. Here, the contextual knowledge highlighted by Gavin, the collaboration advocated by Claudia, and the awareness of model limitations emphasized by Lenny all have an important role to play. The gap between what users want and what modellers can provide will depend strongly on what type of information is being asked for. Increased engagement between the two groups is needed to ensure that the data available is neither underestimated nor oversold.