Monitoring problems again

I’ve posted a lot about how important monitoring of environmental conditions is for environmental law, and how difficult it can be to do monitoring well.  Here is another recent example from the news.  After the Deepwater Horizon blowout, there was a lot of concern about how much oil leaked into the Gulf of Mexico, and about the impacts of that oil on the Gulf’s commercial seafood industry and, more broadly, on its marine ecosystems.  A recent study by a group of university professors found that levels of oil contamination in the Gulf are much higher than originally reported by NOAA, which did much of the initial monitoring after the spill.  Why?  The academics argue that this is the result of the sampling method NOAA used.  NOAA took water samples from very narrow locations, on the assumption that the oil contamination was roughly evenly spread throughout the water column and across the Gulf.  The academics, by contrast, took samples representing much larger locations, and they detected much higher levels of contamination because the oil was not evenly distributed throughout the water column or the Gulf.  Instead, the heavy use of oil dispersants during and after the blowout left the oil patchily distributed.  If you sample only a limited number of locations, and your locations happen to miss the patches where the oil is concentrated – as NOAA’s apparently did – you will detect much lower levels of contamination than if you use a more representative sampling method.
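The sampling problem described above is easy to see in a toy simulation.  The sketch below uses entirely made-up numbers (the patch fraction, concentrations, and sample counts are illustrative assumptions, not actual Gulf data or the actual NOAA protocol): when contamination sits in rare patches, a typical survey of a few narrow point samples misses the patches and reports a level far below the true average, while samples that each integrate over a larger volume of water come much closer.

```python
import random

random.seed(42)

# Hypothetical 1-D "transect" of 10,000 cells: 2% of cells are oil
# patches (high concentration), the rest are near-clean background.
N = 10_000
PATCH_FRAC = 0.02
transect = [100.0 if random.random() < PATCH_FRAC else 0.1 for _ in range(N)]
true_mean = sum(transect) / N  # roughly 0.02*100 + 0.98*0.1 ≈ 2.1

def narrow_samples(k):
    """k single cells at random -- like narrow point water samples."""
    return [random.choice(transect) for _ in range(k)]

def composite_samples(k, width):
    """k samples, each averaging `width` adjacent cells -- a larger,
    more representative volume of water per sample."""
    out = []
    for _ in range(k):
        start = random.randrange(N - width)
        chunk = transect[start:start + width]
        out.append(sum(chunk) / width)
    return out

def median_survey_estimate(sampler, trials=500):
    """Median contamination estimate across many repeated surveys --
    what a 'typical' survey of this design would report."""
    ests = sorted(sum(s := sampler()) / len(s) for _ in range(trials))
    return ests[trials // 2]

narrow = median_survey_estimate(lambda: narrow_samples(10))
composite = median_survey_estimate(lambda: composite_samples(10, 200))
print(f"true mean:                 {true_mean:.2f}")
print(f"typical 10-point survey:   {narrow:.2f}")
print(f"typical 10-composite survey: {composite:.2f}")
```

With only ten narrow points, most surveys hit zero patches, so the typical survey reports close to the clean background level even though the repeated-survey average is unbiased; the wider composite samples almost always capture some patches and land near the true mean.  That is the gap between a protocol designed for an evenly mixed pollutant and one designed for a patchy, dispersant-shaped one.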

Of course, this problem matters directly for questions about how to manage the Gulf’s recovery from the oil spill, whether Gulf seafood is safe to eat, and how much compensation BP owes for the harm done to the Gulf.  But the bigger lesson here is that good monitoring is, once again, hard to do.  Even experts at NOAA made mistakes in designing their sampling protocol because they didn’t understand how the widespread use of oil dispersants would affect the distribution of oil in Gulf waters.  And it took years for other expert scientists to discover the error.  So not only is good monitoring hard to do, it is often hard for us even to detect problems with monitoring.

That means, I think, that we should be wary of environmental laws and policies that depend heavily on high-quality monitoring data – for instance, adaptive management.  If we depend so heavily on monitoring, and if monitoring is unreliable (and, even worse, we often don’t know which monitoring is unreliable), our environmental laws and policies may often end up being ineffective or even counterproductive.


2 Replies to “Monitoring problems again”

  1. Your concerns are valid. As to monitoring, the method and accounting methods are flawed and inadequate. Often we hear “no net loss,” especially when it comes to wetlands, and yet wetlands that are rendered non-functional due to size or restriction of flow are still a loss in terms of function. Another example is an erroneous comparison of a tree farm to a forest. A forest has features – important features – that an actively managed tree farm can’t have or sustain. The other big failure is an overemphasis on science. While technology has enhanced aspects of life, we don’t have the technology to really clean the environment once contaminated. The far better strategy is to avoid causing problems and to use technology where it is best suited – to balance consumption with mitigation!

  2. I agree that we should be wary. As you may know, the promise of “world class” environmental monitoring is a linchpin in the Canadian and Albertan governments’ campaign to shore up the environmental credentials of the oil sands. The first set of data was published earlier this year (on Earth Day, no less). Only time will tell whether the monitoring program delivers or falls short, though it may be of some comfort that Alberta has committed to creating an independent monitoring agency.

    Similarly, adaptive management (AM) is heavily relied upon by the oil sands industry where there is uncertainty about the effectiveness of proposed mitigation measures or about potential environmental effects. For a recent example, you can read the most recent environmental assessment report (on an application by Shell to expand one of its mine sites). I blogged about that report, and the panel’s reliance on AM in particular.


About Eric

Eric Biber

Eric Biber is a specialist in conservation biology, land-use planning and public lands law. Biber brings technical and legal scholarship to the field of environmental law…
