Interpreting climate predictions should be collaborative

By Claudia Tebaldi, February 5, 2008

We all seem to agree that our state-of-the-art models aren't satisfactory representations of climate on Earth–at least not to the degree required to make decisions with them. We also agree that people are concerned with climate change and eager to incorporate information about future changes in their decision making, and we're conscious of the need to relate our research agenda and findings to real-world demands. Finally, there's consensus that we cannot look at climate forecasts–in particular, probabilistic forecasts–the same way we view weather predictions, and none of us would sell climate-model output, either at face value or after statistical analysis, as a reliable representation of the complete range of possible futures.

Beyond this common ground, we fall on different points of the spectrum between James's pragmatic approach, in which he proposes giving decision makers our "best guess" about future outcomes nonetheless, and Lenny's highly skeptical position, namely that there's no hope of approximating the real world in any useful sense. (Interestingly, Lenny turns the issue on its head and proposes that we work at characterizing what we cannot say rather than what we can.) Gavin and I are somewhere in between. Gavin still finds qualitative value in a reasoned interpretation of model output, while I claim further that there's still value in quantifying uncertainty if the results aren't distributed for public consumption.

The reader who doesn't dabble in climate modeling or statistics is probably asking herself, "What am I to make of all this?" To which I would say, "That's exactly what I want you to think!"

Let me explain: If I can say anything for sure, it's that I don't want anyone to take a precooked climate projection–whether a single model or a multi-model ensemble, probabilistic or not–and run with it. Any decision will be best served by looking at the available observational and modeled information and listening to the opinion of climate modelers and climatologists. The experts will be able to form an integrated evaluation based on changes already observed, the processes known to influence the regional climate of interest, and projections from those models that have demonstrated accuracy in describing that region's climate–all to a degree consistent with the kind of projection required. (For example, if we're interested in changes in large average quantities, we may be willing to set the bar lower for our models than if we're interested in changes in extremes. If we're looking at a flat, large region in the middle of a continent, we may have better luck than if we're looking at a coastal region with complex topography.)

After careful synthesis of what's available to assess a specific regional climate change, we may go as far as presenting a probability distribution based on this information, if we think the statistical assumptions are supported by the data. Why not? But in all of this, there's no substitute for clear, two-way communication between suppliers and users of the information, both to guide the analysis and to qualify its results.
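To make that last point concrete, here's a minimal sketch, in Python with NumPy, of how a handful of model projections of regional warming might be turned into an empirical probability distribution. All numbers are invented, and the simple bias-based weighting is only one of many possible choices, not the method anyone in this exchange is advocating.

```python
import numpy as np

# Illustrative only: projected regional warming (degrees C) from five
# hypothetical models, and each model's absolute bias against the observed
# regional climatology. None of these numbers come from real simulations.
projections = np.array([2.1, 2.8, 1.9, 3.4, 2.5])
biases = np.array([0.3, 1.1, 0.5, 1.8, 0.2])

# One simple (and debatable) choice: weight each model by how well it
# reproduces the observed climate, so smaller bias means larger weight.
weights = 1.0 / (biases + 0.1)
weights /= weights.sum()

mean = np.sum(weights * projections)
std = np.sqrt(np.sum(weights * (projections - mean) ** 2))
print(f"weighted projection: {mean:.2f} +/- {std:.2f} C")

# Resampling the weighted ensemble, with an assumed 0.3 C within-model
# spread, gives an empirical distribution from which a range can be read.
rng = np.random.default_rng(0)
draws = rng.normal(rng.choice(projections, size=10_000, p=weights), 0.3)
print("5th-95th percentile:", np.percentile(draws, [5, 95]).round(2))
```

Even in this toy version, the weights and the assumed within-model spread are exactly the kind of statistical assumptions that have to be justified by the data and explained to the people using the result.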

Meanwhile, in the convenient isolation of our research centers, I hope we pursue the obvious (better models and better ways to represent the data we gather from them in a statistical framework) while also designing experiments with our models that serve the purposes Lenny suggests. Rather than pushing exclusively for ever more complex models at ever higher resolution, we should think of ways to explore model errors, dependencies, and sensitivities.

I'd even propose a totally selfless design that takes the point of view of a scientist 20 years from now who, endowed with 20 years of observational records, looks back and says, "I wish those 2008 simulations had tried to do this and that; I could assess them now and use the validation to learn what that modeled process is really worth." By doing so, we may get closer to a full characterization of the uncertainties that we know exist.
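A toy version of that retrospective check, with invented numbers standing in for both the archived 2008 runs and a future observational record, might look like this:

```python
import numpy as np

# Hypothetical exercise for 2028: compare trends simulated by archived
# 2008-era runs against the 20 years of observations accumulated since.
# Every array below is an invented placeholder, not real model output.
years = np.arange(2008, 2028)
observed = 0.02 * (years - 2008) + np.random.default_rng(1).normal(0, 0.1, years.size)

simulated = {
    "model_A": 0.025 * (years - 2008),  # runs a little warm
    "model_B": 0.015 * (years - 2008),  # runs a little cool
}

# Compare each model's linear trend and year-by-year error to the record.
obs_trend = np.polyfit(years, observed, 1)[0]
for name, series in simulated.items():
    trend_error = np.polyfit(years, series, 1)[0] - obs_trend
    rmse = np.sqrt(np.mean((series - observed) ** 2))
    print(f"{name}: trend error {trend_error:+.3f} C/yr, RMSE {rmse:.2f} C")
```

The point of designing such experiments now is that the quantities a future analyst would want to score have to be simulated, saved, and documented today.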

As for the unknown unknowns… There's no way around those. But isn't that an inescapable characteristic of our ever-evolving scientific enterprise, not to mention most significant real-life decisions?
