In planning for the future, make room for changing forecasts

By Leonard A. Smith, January 14, 2008

I work at the model/reality interface and am happy to join you in discussing our relationship with our models. As in any relationship, it is important to distinguish planning what we will do with the models we actually know, from dreaming about what we could do if only we could meet a group of Claudia's "end of the rainbow" models. Seeking a perfect relationship that would bring decision-support bliss might hinder the development of our warts-and-all models, which, while improving, are in no way converging towards perfection. Our models provide our best vision of the future: Can we accept them for what they are and still develop a relationship with them that works?

Gavin notes that the average of many models is frequently "closer to the observations" than any individual model. While true, this doesn't suggest that such an average can support good decision making. Even when the model is perfect, averaging can be a bad idea.

Consider the parable of the three non-Floridian statisticians who cannot swim but desperately need to cross a river. Each has a forecast of the change in the river’s depth from one bank to the other; in each forecast, there's a different point at which the river is so deep that the statisticians will drown. According to the average of their forecasts, it’s safe to ford the river. Even though the average is closer to the actual depth profile of the river than any one of the forecasts, action based on that information is not safe: if they cross the river, they’ll drown. This suggests there is something fishy about the way we interpret "closer to the observations" to mean "best."
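
A toy calculation makes the point concrete. The sketch below is a hypothetical illustration only (Python/NumPy, with every depth and forecast error invented for the purpose): three depth forecasts, each placing the fatal deep spot somewhere different, are each farther from the true profile than their average, yet only the average implies the crossing is safe.

```python
import numpy as np

# Invented numbers for the river parable: they show how an average forecast
# can have the smallest error yet imply the wrong decision.
x = np.arange(12)              # positions across the river
drown_depth = 2.0              # depth (m) beyond which a non-swimmer drowns

truth = np.full(12, 1.0)       # the river really is too deep at one point
truth[6] = 2.5

# Three forecasts: each puts the dangerous deep spot at a different place,
# plus smaller errors elsewhere chosen so they roughly cancel on averaging.
def forecast(spike_at, phase):
    f = 1.0 + 0.4 * np.cos(2 * np.pi * x / 12 + phase)
    f[spike_at] = 2.5
    return f

forecasts = [forecast(2, 0.0),
             forecast(4, 2 * np.pi / 3),
             forecast(9, 4 * np.pi / 3)]
average = np.mean(forecasts, axis=0)

def rmse(f):
    return np.sqrt(np.mean((f - truth) ** 2))

for i, f in enumerate(forecasts, 1):
    print(f"forecast {i}: RMSE {rmse(f):.2f} m, max depth {f.max():.2f} m")
print(f"average   : RMSE {rmse(average):.2f} m, max depth {average.max():.2f} m")
# The average has the lowest RMSE, yet its maximum depth stays below
# drown_depth while the true river does not: "closest" is not "safest".
```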

I am curious where the idea came from that imperfect computer models, each running on 2008-era hardware, could yield decision-relevant probability forecasts. The desire for decision-relevant probabilities is strong, in part because they would ease the lives of decision makers and politicians. If we had objective probabilities, we could adopt a predict-optimize-relax approach to decision making: We predict the probability of each possible course of events; we optimize our goals while taking into account our tolerance to risk; we act based on our models' vision of potential futures; and then relax, believing ourselves to be "rational agents." Taking such an approach may help us make it through the night, but it's unlikely to achieve our long-term economic or environmental goals, as we are holding our models to a standard they cannot meet.
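
In code, the predict-optimize step of that recipe is nothing more than an expected-cost minimization. The sketch below uses made-up scenarios, probabilities, and costs (none of them drawn from any real climate assessment) to show how mechanical the procedure is, and how completely its "optimal" answer depends on trusting the probabilities fed into it.

```python
# A deliberately simple sketch of the "predict-optimize-relax" recipe;
# the scenarios, probabilities, and costs are invented for illustration.
model_probabilities = {"mild": 0.5, "moderate": 0.35, "severe": 0.15}

# Cost of each action under each scenario (arbitrary units).
costs = {
    "do_little":   {"mild": 1, "moderate": 8, "severe": 40},
    "adapt_some":  {"mild": 3, "moderate": 5, "severe": 20},
    "adapt_a_lot": {"mild": 6, "moderate": 7, "severe": 9},
}

def expected_cost(action):
    return sum(model_probabilities[s] * c for s, c in costs[action].items())

best = min(costs, key=expected_cost)
print({a: round(expected_cost(a), 2) for a in costs})
print("chosen action:", best)
# The optimization is only as good as model_probabilities: if those numbers
# are not decision-relevant, "optimal" here means very little.
```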

There are many textbook examples of the predict-optimize-relax approach, and subfields of mathematics dedicated entirely to it, but there are few real-world examples. Given information from past automobile accidents, auto insurance companies can accurately model losses from future accidents using only the age and gender of a large group of drivers. But there are few, if any, examples of purely model-based probability forecasts supporting good decision making in problems like climate, where we can't learn from our mistakes. Even the successes of everyday engineering, such as bridge and aircraft construction, owe much to the use of safety factors, which engineers have discovered and refined over time by repeatedly watching similar projects fail. Given only one planet, this option is not available to us.

Today's climate science can support an assess-hedge-monitor approach to risk management, in which we assess our vulnerability along plausible pathways through the future, hedge against and reinforce weaknesses wherever viable, and monitor and maintain the flexibility to revise policy and infrastructure as we learn more.

Climate science will be of greater relevance to policy and decision making once it dispenses with the adolescent pickup lines of "climate-proofing" and "rational action" and focuses instead on achievable aims of real value. To provide a number (even a probability) where science has only qualitative insight is to risk the credibility of science.

Can we shift the focus of climate science from "what will happen" towards "where is our vision of what will happen weakest," "how can we best improve it," and "where are we most vulnerable"? Can we clarify what information from our current models we expect will hold as our models improve, and what things we expect to change? Given the urgency of the problem, climate science will always be developing methodology in real time. How can we best clarify that fact to the consumers of climate science so that they can view advances in the science as a good thing, not as a broken promise, probabilistic or otherwise? Can we embrace the inconvenient ignorance of science as we face the inconvenient truths of climate change?
