Conflicting data need not make environmental controversies worse

Anyone interested in the resolution of environmental controversies featuring conflicting or incomplete scientific accounts (and what interesting environmental conflict doesn’t fit in that category?) should read this article by Biggs et al. in the January issue of BioScience (subscription required). As the authors explain, the fact that two scientific studies produce conflicting results or lead to differing conclusions does not mean that one must be wrong or fraudulent while the other is right and reliable. Measurement errors and environmental variability mean that different studies can produce very different outcomes, even if all are carried out and interpreted according to prevailing norms of scientific practice. In some cases those errors and that variability can be quantitatively estimated; in others they may be difficult even to detect and impossible to quantify.

Conflicting evidence frequently polarizes environmental disputes, as those with a stake in the outcome become unquestioning advocates of their preferred studies, and scientists find themselves under excruciating pressure and scrutiny. Careful students of these conflicts have long realized that science need not be “junk” just because it produces uncertain or conflicting observations, but Biggs et al. move that conversation forward by supplying a coherent scientific explanation for how “sound” science can lead to conflicting results.

They go on to recommend a highly technical solution: pooling data from different studies through a “hierarchical Bayesian” analysis. They describe how their analytic framework can highlight measurement errors and heterogeneity that are not apparent from individual studies. The authors note that the benefits of pooling data are not limited to situations of obvious conflict. Even when multiple individual studies point to the same conclusion, that conclusion can be wrong if the studies share the same measurement errors, since a bias common to every study will not show up as disagreement among them. Pooling studies can potentially reveal hidden errors of that sort.
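For readers who want a concrete sense of what “pooling data through a hierarchical Bayesian analysis” can look like, here is a minimal sketch, not the authors’ actual model, written in Python with the PyMC library. The study estimates, standard errors, and variable names are hypothetical; the point is only to illustrate partial pooling, in which each study’s result informs, and is informed by, a shared underlying quantity plus an explicit between-study heterogeneity term.

```python
import numpy as np
import pymc as pm
import arviz as az

# Hypothetical summary data from three independent studies of the same
# quantity (say, a population density), each reporting its own point
# estimate and standard error.
study_estimates = np.array([4.2, 7.9, 5.5])
study_std_errors = np.array([0.8, 1.1, 0.6])

with pm.Model() as pooled_model:
    # The shared "true" value all of the studies are trying to measure.
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)
    # Between-study heterogeneity: how much the studies genuinely differ
    # beyond their reported measurement error.
    tau = pm.HalfNormal("tau", sigma=5.0)
    # Each study's underlying value, partially pooled toward mu.
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(study_estimates))
    # The reported estimates, treated as noisy observations of theta.
    pm.Normal("obs", mu=theta, sigma=study_std_errors, observed=study_estimates)
    # Draw posterior samples from the joint model.
    trace = pm.sample(2000, tune=1000, random_seed=0)

# Posterior summaries: a large tau, or theta values far from mu, would
# flag heterogeneity or measurement problems that no single study reveals.
print(az.summary(trace, var_names=["mu", "tau", "theta"]))
```

In a sketch like this, the pooled posterior for mu reflects all of the studies at once, while tau and the study-level theta estimates provide the kind of diagnostic the authors describe: a quantitative signal of how much the studies really disagree once their individual measurement errors are taken into account.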

The details of the authors’ analytic approach are less important for those interested in environmental policy than the core insight that combining information from disparate, even seemingly irreconcilable, studies is a more productive approach than setting up an adversarial “death match” that seeks to crown a single research champion among competing studies. This is exactly what Tom McGarity, among others, has been telling us for years. It boils down to common sense: we are more likely to advance understanding if, instead of looking for what information to throw out of the analysis in order to reach a resolution, we carefully consider all the available information.

This study also may point the way to more productive use of science in some conflicts. If the stakeholders are genuinely interested in improving knowledge (a big if), pooling data can offer a way forward. Instead of promoting a battle of experts, stakeholders might start from the premise that everyone’s expert knowledge is flawed, but no one’s is “junk.” They might support a joint effort to pool their separate data for analysis, especially if they conclude that the analytic approaches to be used are (at least relatively) independent of the analyst’s policy preferences. Even if all stakeholders don’t endorse a pooling approach, an agency charged with using the best available scientific information should try pooling to see if it reveals otherwise hidden flaws, or even hidden consistencies, in the data.
