
Climate modeling is still an abstraction of reality

By Leonard A. Smith, March 10, 2008

While I agree strongly with Gavin on the need to improve the communication of science, I doubt we can blame the media for many problems climate science seems set to bring upon itself. My understanding is that this November, the U.K. Climate Impacts Program (UKCIP), supported by the Met Office, will offer hourly information on a 5-kilometer grid across Britain, suggesting we have decision-relevant probabilistic weather information stretching out to 2060 and beyond "which will be of use to any user who is interested in daily weather variables, thresholds and sequences, or extreme events." (See "What is UKCIP08?") Commercial entities aim to sell even more detailed information to the energy sector and others exposed to climate risk. Insofar as many decisions must be made now, the pivotal question is whether today's models can provide additional relevant information and, if so, on what scales.

Let me offer an analogy: Suppose a cup of coffee slips from my hand. I know enough physics to predict that it will fall to the floor. I'm also confident that if I reduce the distance it falls, it's less likely to shatter and make a mess. We know with this kind of certainty that the greenhouse effect has initiated global warming. Yet I also understand that I don't know enough physics to predict how many shards will result if the cup shatters, or where exactly the coffee will splatter. Nor can I usefully estimate a "critical height" below which the cup has an acceptably small chance of shattering. The fact that I know I don't know these things is valuable information for evidence-based decision support: It keeps me from making overconfident plans based on the "best available information." Instead, I act on what I do know: The cup is likely to shatter, and therefore I plan to jump left.

But what if the physicist next to me offers to sell me a probability distribution from his state-of-the-art "falling fluid container" model? His implied probabilities describe whether or not my cup will shatter, what the shards will do if it does, and the trajectory of various drops of coffee after the cup lands (so we know which direction to move to avoid coffee splatter). High-resolution MPEG movies of the model are available for a small extra charge, and they look great!

Should I pay attention to him? Or better, how do I decide whether or not to pay attention to him? Is it "highly skeptical" for me to ask for scientific evidence that his model can inform the decision I have to make? I do ask, and as it turns out, he has simulated the behavior of falling ping-pong balls (where the model works rather well) and falling water balloons (where the model fails to reproduce observed motion even if the balloon doesn't break). Watching the MPEG movies, I realize that his model is unable to reproduce observed phenomena relevant to my decision: The model cup always breaks into three model shards, and the model splash pattern looks unlike any of the many splash patterns I have observed in the past. The point, of course, is that the quantitative probabilities implied by his model add no value for determining which way to move to avoid splashing coffee.
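
To make the point concrete, here is a deliberately toy sketch in Python (the "models" and shard distributions below are invented for illustration and correspond to nothing real). A structurally inadequate simulator returns sharp, reproducible probabilities that say nothing about the quantity of interest, and averaging over more runs only makes the wrong answer more precise:

    # Toy illustration: a structurally wrong model yields confident
    # but irrelevant probabilities. Nothing here is a real model.
    import random

    random.seed(1)

    def model_shard_count() -> int:
        # The "falling fluid container" model: every run shatters the
        # cup into exactly three shards. This is a structural error,
        # so drawing more Monte Carlo samples cannot fix it.
        return 3

    def observed_shard_count() -> int:
        # Stand-in for reality: shard counts vary widely from drop
        # to drop. This distribution is invented for illustration.
        return random.choice([0, 0, 2, 4, 6, 9, 13])

    N = 10_000
    p_model = sum(model_shard_count() > 5 for _ in range(N)) / N
    p_world = sum(observed_shard_count() > 5 for _ in range(N)) / N

    print(f"model-implied P(more than 5 shards): {p_model:.2f}")  # 0.00
    print(f"observed frequency of > 5 shards:    {p_world:.2f}")  # ~0.43

The model's probability is exact and stable, and it is also beside the point: it quantifies the model, not the world.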

So, where do we draw the corresponding line for year 2008 climate models? And on which side of that line does high-resolution, multivariate weather information on extreme events of interest to the British energy industry fall?

The latest report from the Intergovernmental Panel on Climate Change notes shortcomings that call into question the decision-support relevance of climate-model output on the space and timescales advertised by UKCIP, and calculating implied probability distribution functions doesn't solve issues of model inadequacy: We can often see that there are relevant phenomena our models simply cannot simulate, and thus we know that the probabilities they imply are unreliable, even though we don't yet know how to fix the models.

This isn't a question of being wrong in the second decimal place: Implied probabilities from inadequate models can fundamentally mislead. I applaud the Met Office's groundbreaking attempts to develop new methodology for casting model output as decision-relevant; I also ask for scientific evidence that today's model output is realistic enough to be thought relevant in each particular application.

Do we believe that today's models can provide decision-relevant probabilities at a resolution of tens of square kilometers for the year 2060, or even 2020 for that matter? No. But that does not mean we believe there is no value in climate modeling. Since the climate is changing, we can no longer comfortably base our decisions on past observations. Therefore, we must incorporate insights from our models as the best guide for the future. But to accept a naive realist interpretation of model behaviors cast as a Bayesian probability distribution is, as the mathematician and philosopher Alfred North Whitehead warned, to mistake an abstract concept for concrete reality.
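
That last point can be illustrated with a stylized Python sketch (the quadratic "truth," the noise level, and the linear model class below are all invented for this example): a Bayesian posterior computed within a structurally wrong model class can be razor-sharp and still far from reality, because it only quantifies uncertainty inside the abstraction.

    # Toy sketch: a sharp Bayesian predictive from an inadequate
    # model class. All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    # "Reality": a mildly nonlinear process observed over x = 0..10.
    x = np.linspace(0.0, 10.0, 50)
    y = 0.05 * x**2 + rng.normal(0.0, 0.1, x.size)

    # Model class: y = b*x + noise (no curvature -- structurally wrong).
    # Conjugate posterior for b, flat prior, known noise sd of 0.1.
    noise_sd = 0.1
    b_mean = (x @ y) / (x @ x)
    b_sd = noise_sd / np.sqrt(x @ x)

    # Posterior predictive at an extrapolated point, x = 30.
    x_new = 30.0
    pred_mean = b_mean * x_new
    pred_sd = np.hypot(b_sd * x_new, noise_sd)

    print(f"predictive at x=30: {pred_mean:.1f} +/- {2 * pred_sd:.1f}")
    print(f"'reality' at x=30:  {0.05 * x_new**2:.1f}")
    # Roughly 11 +/- 0.2 versus 45: precise, internally coherent,
    # and wrong, because the error lives in the model class itself.

Nothing in the posterior machinery flags the failure; the tight interval is a statement about the abstraction, not about the world.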

Until we can establish a reasonable level of internal consistency and empirical adequacy, declining to interpret model-based probabilities as decision-relevant probabilities isn't high skepticism, but scientific common sense.
