Another Attempt to Measure NEPA’s Impact

This most recent report is better, but still has significant flaws

The Breakthrough Institute has produced another report on litigation under the National Environmental Policy Act (NEPA), building on a report it prepared earlier, which I sharply criticized in this prior blog post.  The updated report is a mixed bag: It doesn’t solve many of the methodological issues I identified in the earlier blog post; it does do more to forthrightly acknowledge some of those limitations; but it then proceeds to make sweeping statements in its conclusion and policy recommendations that ignore those methodological limitations.

First, like the earlier study, this study purports to provide a comprehensive analysis of the timeframes, success rates, plaintiffs, and kinds of projects challenged under NEPA.  I say “purports” because, like the prior study, the scope of cases it covers is limited, a limitation that significantly undermines the report’s usefulness.  The prior report only covered decisions by the federal courts of appeals; this report includes all district court decisions that are reported in Westlaw (a legal database).  That is an improvement, since not all cases are appealed.  But, as the report itself concedes, many lawsuits do not produce decisions that are reported in Westlaw.  So again, the universe of cases covered in the report is underinclusive.  The report does recognize this limitation in its discussion of the methodology.

Compounding that limitation is a problem from the prior report that this report replicates: it only looks at litigated projects.  But to truly understand how NEPA operates and what impact litigation has, we would want to assess the universe of projects that might be litigated, and how many of those projects actually are litigated.  The report does not do that – in part because the data is difficult to capture for at least some types of projects.  The report again concedes this limitation at points in its discussion, noting, for instance, that it cannot assess whether clean-energy projects are more likely to be litigated than fossil-fuel projects.

These data weaknesses matter, as I pointed out in my earlier blog post.  Underinclusive coverage of litigation will skew what kinds of cases are included in the analysis – for instance, it’s plausible that the cases that settle, or that do not produce decisions reported in Westlaw, are resolved more quickly than other cases.  Thus, the report’s estimates of timeframes for litigation may well be inflated.  Likewise, certain kinds of plaintiffs, and certain kinds of cases, might be overrepresented in the dataset that Breakthrough has compiled, skewing estimates of which plaintiffs are most frequent, or what kinds of projects are challenged.  And without an estimate of the denominator (the number of eligible projects that could be challenged), the report can say very little about the overall rate of litigation.
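To make the selection-bias point concrete, here is a toy simulation of my own – the distribution and the reporting assumption are entirely hypothetical, not drawn from the report’s data – showing how an average duration computed only from reported decisions can exceed the true average when quickly resolved cases are less likely to generate a reported decision.

```python
# Toy illustration only: hypothetical numbers, not drawn from the report.
# It shows how an average computed from "reported" cases can overstate the
# true average when quickly resolved cases are less likely to be reported.
import random
from statistics import mean

random.seed(0)

# Hypothetical universe of lawsuits, with durations in months.
all_cases = [random.lognormvariate(2.5, 0.8) for _ in range(10_000)]

# Assume (purely for illustration) that the longer a case runs, the more
# likely it is to produce a decision reported in a legal database.
reported = [d for d in all_cases if random.random() < min(1.0, d / 36)]

print(f"True average duration:        {mean(all_cases):.1f} months")
print(f"Average among reported cases: {mean(reported):.1f} months")
```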

Finally, the report makes a big deal about the success rate of environmental plaintiffs in litigation, which it calculates as 26%.  First, this number may well be skewed by the data weaknesses discussed above.  But there is an even more fundamental problem with looking at success rates, one that I pointed out in the prior blog post and that the report does not address at all: as the legal scholarship makes clear, it is very difficult to draw conclusions about the merits of the underlying cases, or about whether a particular law is plaintiff- or defendant-friendly, from simple success rates in litigation.  In general, one needs substantially more information (such as the cost of litigation) that is not available in the report.

Again, some of these points are acknowledged in the report’s section on methodological limitations.  But those acknowledgments appear to go out the window in the conclusion and the policy recommendations section, where the authors make sweeping statements that blow past the methodological limitations of the report (whether acknowledged by the authors or not).  For instance, the report states that:

To meaningfully reform NEPA, policymakers must begin by confronting a fundamental truth: the litigation landscape continues to be driven by incentives that reward legal friction over material environmental stewardship. Most lawsuits brought under NEPA fail—not just in court, but in producing any substantive change to the projects they target. What was intended as a procedural safeguard for environmental protection has, in practice, devolved into a tool for obstruction, wielded to stall development through uncertainty and delay rather than to improve outcomes.

At another point, the report calls for reforms to “impose … norms to discourage the filing and extensive litigation of frivolous suits,” with the implication that the low success rate of NEPA plaintiffs means that most lawsuits are frivolous.

Again, note the problem.  Whether most lawsuits fail or not tells us very little about the merits of the underlying lawsuits, or about whether they provide social benefit.  The same point applies to whether projects substantively change as a result of NEPA litigation: the report only looks at litigated projects, but many unlitigated projects might be better (or worse!) because of the shadow or threat of NEPA litigation.

This does not mean there isn’t merit in the data collection work that Breakthrough has done.  The timeframes analysis (despite the data limitations I note above) is useful, as is the assessment of the claims raised in lawsuits, for instance.  But the data does not support the policy recommendations the authors make in their report – and unfortunately, the report reads as if the policy recommendations were always going to be made, and the data just came along for the ride, whether or not it connects to those recommendations.

Going forward, I think the most helpful kind of data collection in this space would be to identify, for one or more important categories of projects, all the projects proposed; assess how many progress through the approval process and what kind of review process they follow; and track how many are litigated under NEPA.  Such an approach would at least give a sense of overall litigation rates (how many, or what proportion of, projects are litigated), which is much more helpful than just counting up the projects that are litigated.  This approach is still underinclusive – it cannot capture the projects that are never proposed because of litigation or other risks.  But it would be quite informative nonetheless.  Doing this for the most important kinds of projects – for instance, transmission lines, clean-energy projects, or fire management projects – should be the priority for institutions such as Breakthrough, if their goal is to inform policy rather than to use a data collection exercise to justify, after the fact, policy recommendations they already wanted to make.
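For readers who want to picture what such a denominator-based tally might look like, here is a minimal sketch, assuming a hypothetical dataset of every project proposed in a single category; the field names and numbers are illustrative only, not drawn from any real dataset.

```python
# Minimal sketch, assuming a hypothetical dataset of all proposed projects
# in one category (e.g., transmission lines). Field names and values are
# illustrative only.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    review_track: str          # e.g., "CE", "EA", or "EIS"
    approved: bool
    litigated_under_nepa: bool

projects = [
    Project("Line A", "EIS", approved=True,  litigated_under_nepa=True),
    Project("Line B", "EA",  approved=True,  litigated_under_nepa=False),
    Project("Line C", "CE",  approved=True,  litigated_under_nepa=False),
    Project("Line D", "EIS", approved=False, litigated_under_nepa=True),
    # ...in practice, every project proposed in the category
]

total = len(projects)
litigated = sum(p.litigated_under_nepa for p in projects)
print(f"Overall litigation rate: {litigated}/{total} = {litigated / total:.0%}")

# The same denominator also supports litigation rates by review track,
# which a bare count of litigated projects cannot provide.
for track in ("CE", "EA", "EIS"):
    subset = [p for p in projects if p.review_track == track]
    if subset:
        rate = sum(p.litigated_under_nepa for p in subset) / len(subset)
        print(f"  {track}: {rate:.0%} of {len(subset)} projects litigated")
```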


