Model Uncertainty in Politics and Climate Policy

The polls could be systematically off, not just due to random error. That’s a worry with climate models as well.

The polls are predicting very tight election results. The state results could turn out to be within the margin of error, with half going one way and half the other. But there’s another plausible outcome: a sweep by one side or the other because the polls were all off by a few percent in the same direction.

If you could get a large and truly random sample of the population and get them to answer all questions truthfully right before they voted, the life of a pollster would be much easier. One problem, however, is that polling is expensive, so we get fewer national polls than we used to and even fewer good polls at the state level.

But the bigger problem is that the samples aren’t random. In the old days, the biggest obstacle was that not everyone had phones. Today, not everyone has landlines, so that’s one source of difficulty. But even if you’re able to call people, many will see the caller ID and not pick up, others will pick up but refuse to answer questions, and still others won’t tell the truth. And none of that is completely random – some groups are easier to reach or more cooperative than others. This results in non-random samples, which pollsters have to somehow correct.

The corrections are based on assumptions about how many people in various subgroups will vote. In short, pollsters are using what amount to models of voting behavior to adjust their results. But the models could share some common error that would systematically skew the results. And the models are inherently limited – if you think that Southern white Protestant men in rural areas with infrequent church attendance are likely to vote differently from other groups, a given poll may not have enough of them in the sample to extrapolate their responses to the group as a whole.
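To make the adjustment concrete, here is a minimal sketch of the kind of reweighting involved. The subgroup names, turnout shares, and response counts are all invented for illustration; real pollsters use far more elaborate models.

```python
# A toy illustration of poll reweighting ("post-stratification").
# All subgroup names, turnout shares, and counts are made up for illustration.

# The pollster's assumption about each group's share of the actual electorate.
assumed_electorate_share = {"group_a": 0.30, "group_b": 0.50, "group_c": 0.20}

# Raw poll results: how many respondents fell in each group, and how many of
# them said they support candidate X.
poll = {
    "group_a": {"respondents": 200, "support_x": 120},  # over-sampled group
    "group_b": {"respondents": 200, "support_x": 80},   # under-sampled group
    "group_c": {"respondents": 100, "support_x": 40},
}

total = sum(g["respondents"] for g in poll.values())

# Unweighted estimate: the share of all respondents supporting X.
raw_support = sum(g["support_x"] for g in poll.values()) / total

# Weighted estimate: each group's support rate, weighted by the share of the
# electorate the pollster assumes that group will make up on election day.
weighted_support = sum(
    assumed_electorate_share[name] * (g["support_x"] / g["respondents"])
    for name, g in poll.items()
)

print(f"raw: {raw_support:.1%}   weighted: {weighted_support:.1%}")
# raw: 48.0%   weighted: 46.0%
```

The weighted number depends directly on the assumed electorate shares. If many pollsters build in similar assumptions and those assumptions are off, their results will be off together – which is exactly the shared-error worry.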

Scientists also worry that their climate models might have common modeling errors. They can be somewhat more confident in dealing with the issue because model assumptions are clearer and can often be tested independently of the overall model. Moreover, climate scientists have a lot more data to work with. There are detailed satellite data of many forms, and thousands of ground-level observations. And so far, the models have been closely enough in accord with actual weather developments to give us some confidence that – at least in conditions relatively like today’s – the modeling errors aren’t severe enough to dramatically change predictions.

The problem is with the qualification I set off with dashes, “at least in conditions relatively like today’s.” There could be flaws in the models – such as feedback loops we don’t know about – that will only become important once the world has warmed by two or three degrees.

None of this should make you feel better, either about the election or about climate change. Yes, your favored candidate could sweep the swing states, and yes, climate change could be more moderate than we now expect. But that shouldn’t give you much comfort on either issue, since the errors could just as easily run in the opposite direction.

Obviously, we’d like to improve our models, but that’s not always easy. In the meantime, the smart thing is to plan on the basis of the best models we have but avoid overconfidence about our predictions.


Reader Comments


  1. The Intergovernmental Panel on Climate Change (IPCC) employs a detailed framework to convey the certainty of its findings. Confidence is a qualitative measure reflecting the authors’ trust in their assessment, based on evidence and agreement levels. Uncertainty, on the other hand, is quantified using probability density functions, assigning percentages to terms like “Virtually certain” or “Very unlikely” to indicate the likelihood of a prediction: Virtually certain (99-100% probability), Very likely (90-100% probability), Likely (66-100% probability), About as likely as not (33-66% probability), Unlikely (0-33% probability), Very unlikely (0-10% probability), and Exceptionally unlikely (0-1% probability). Additional terms may also be used when appropriate: Extremely likely (95-100% probability), More likely than not (>50-100% probability), and Extremely unlikely (0-5% probability). This dual approach allows the IPCC to communicate complex scientific data in a structured and understandable way, aiding policymakers and the public in grasping the nuances of climate change research.
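Purely as an illustration of the calibrated vocabulary described in the comment above, here is a small sketch that maps a probability to the likelihood terms whose ranges contain it. The ranges are taken from the comment; the function itself is hypothetical, not an IPCC tool.

```python
# Sketch of the IPCC calibrated likelihood vocabulary described above.
# Ranges are (low, high) probabilities; they are nested, so one probability
# can fall under several terms at once.
IPCC_LIKELIHOOD = {
    "Virtually certain":      (0.99, 1.00),
    "Extremely likely":       (0.95, 1.00),
    "Very likely":            (0.90, 1.00),
    "Likely":                 (0.66, 1.00),
    "More likely than not":   (0.50, 1.00),  # strictly greater than 50% in IPCC usage
    "About as likely as not": (0.33, 0.66),
    "Unlikely":               (0.00, 0.33),
    "Very unlikely":          (0.00, 0.10),
    "Extremely unlikely":     (0.00, 0.05),
    "Exceptionally unlikely": (0.00, 0.01),
}

def likelihood_terms(p: float) -> list[str]:
    """Return every calibrated term whose probability range contains p."""
    return [term for term, (lo, hi) in IPCC_LIKELIHOOD.items() if lo <= p <= hi]

print(likelihood_terms(0.97))
# ['Extremely likely', 'Very likely', 'Likely', 'More likely than not']
```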


