Collecting useful data on NEPA

A 2024 study falls short in shedding light on the impacts of NEPA litigation

As I’ve recently posted, permitting reform is (appropriately) in the news right now.  That means there’s also a reason for various think tanks, NGOs, academics, and others to release studies that might inform the policy debate.  One such study from 2024 that has gotten some coverage on social media recently is a report by the Breakthrough Institute on NEPA litigation.  Whether, how much, and in what ways NEPA litigation shapes outcomes for federal agency decisionmaking – particularly for topics like forest management to reduce fire risk – is a really important question.  But unfortunately, the study has fundamental flaws that make much of its information useless.  However, I think explaining the flaws of this study is important – hopefully it will guide future work (whether by the Breakthrough Institute or others) that can be more effective in informing our policy debates.

The core of the study – and its fatal weakness – is a survey of all federal appeals court decisions involving NEPA lawsuits from 2013 through 2022.  On a superficial level, this seems like a comprehensive survey of all of the relevant litigation.  But in fact, that is not the case – and indeed, given the fact that the report was prepared by lawyers at Holland and Knight, this problem is particularly surprising.

The issue is that appeals courts are the second stage of litigation.  NEPA cases are generally (though not always!) initially filed in federal district court.  And those cases might be resolved in district court – they may never be appealed, whether because the plaintiff or defendant wins (and no appeal is filed) or because some sort of settlement is reached at the district court level.  So if you want a full assessment of NEPA litigation, you need to look at all the cases filed in federal district courts, not the subset that are appealed.  It’s possible of course that most NEPA cases are appealed – but the report provides no data to that effect, and I’m not aware of any study that reaches that conclusion.

What are the implications of looking at a subset of cases?  First, it means that estimates of litigation rates will generally be underestimates – there is more NEPA litigation going on than the report summarizes.  Second, it also means that estimates of the delays that litigation produces will be overestimates, since cases resolved in the district court will generally take less time than cases that are appealed.

But third, and perhaps most importantly, looking at a subset of cases creates a risk that all of the subsequent analysis done in the report is biased.  For instance, it’s possible that certain types of cases are more likely to be settled in district court – such as energy cases or forest management cases.  The result is that the report’s efforts to determine whether litigation is more or less prevalent or impactful in different kinds of cases all could be off, perhaps wildly off.  We just don’t know.

There’s another key weakness in the study methodology.  At various points, it argues that NEPA litigation rates (the rates at which NEPA lawsuits are occurring) have gone up.  The problem is that determining the rate of NEPA litigation requires two numbers: a numerator (the number of lawsuits) and a denominator (the number of projects that might be challenged under NEPA).  We’ve just seen that the numerator in the report is flawed.  But it’s also the case that the report does not have information about the denominator.  The report does note the total number of environmental impact statements (EISs) that have been produced under NEPA by federal agencies over the study timeframe.  But, as the report concedes, EISs are a tiny fraction of the total NEPA compliance by federal agencies, and the report does not provide any data about the total number of projects.  So while EISs may have gone down over the study time period (as the report notes), that doesn’t necessarily mean NEPA litigation rates have increased (as the report asserts) simply because appealed cases have gone up in that timeframe as well.  If the number of federal decisions subject to NEPA has gone up in that timeframe by more than the number of lawsuits overall (not just appealed cases), then litigation rates would actually have gone down.
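To make the denominator problem concrete, here is a toy calculation with entirely hypothetical numbers (none of these figures come from the report or from any real dataset): the count of lawsuits can rise while the litigation rate falls, whenever the universe of NEPA-covered decisions grows faster.

```python
# Toy illustration of the denominator problem (all numbers hypothetical).
# Litigation rate = lawsuits filed / federal decisions subject to NEPA.

early = {"lawsuits": 100, "nepa_decisions": 10_000}  # hypothetical earlier period
late = {"lawsuits": 120, "nepa_decisions": 20_000}   # hypothetical later period

rate_early = early["lawsuits"] / early["nepa_decisions"]  # 1.0%
rate_late = late["lawsuits"] / late["nepa_decisions"]     # 0.6%

# Lawsuits went up 20%, but the litigation *rate* fell by 40%.
print(f"early rate: {rate_early:.1%}, late rate: {rate_late:.1%}")
```

The point is simply that counting lawsuits (the numerator) tells you nothing about trends in litigation rates without a matching count of agency decisions (the denominator).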

The final issue with the report is the most fundamental one, and the one that (unfortunately) is hardest to fix.  The report makes lots of claims about the strength of the lawsuits brought by environmental groups, their effectiveness in advancing NEPA claims, and how meaningful those lawsuits are in terms of provoking change by agencies.  It supports those claims with data on the success rate of NEPA lawsuits before circuit courts.  There are many problems with this line of argument.  First, as noted above, by looking only at appealed cases, the report may well bias success rates for litigation up or down.  Second, by excluding cases that are settled without reported opinions (whether at the trial or appellate level) the report ignores cases that might well have been successes for plaintiffs.  Third, a success rate of 80% for the government in NEPA cases needs to be compared with success rates for the government in cases overall – in general, the government does quite well in lawsuits brought against it under any statute (courts tend to defer to the Executive Branch), and so we would need to know if that 80% rate is higher or lower than other claims against the government to assess if NEPA claims are particularly weak.

But far more important is a key point in the relevant social science literature:  You cannot, in general, tell anything about how much a law empowers plaintiffs or defendants, or the overall strength of plaintiff or defendant claims in the aggregate, from the relative success of plaintiffs or defendants in litigation, or the rate at which lawsuits are filed.  (The article is here, by my colleague Jonah Gelbach).  That’s because plaintiffs (and defendants!) will strategically file and settle cases based on a whole range of factors above and beyond the specific legal framework in question.
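A stylized simulation (my own illustration of this selection effect, not drawn from the Gelbach article) shows why observed win rates are so uninformative: if parties settle whenever the outcome is predictable, the cases that actually reach judgment are the close calls, and the observed win rate can look nearly identical under very different legal regimes.

```python
import random

def observed_win_rate(threshold, n=100_000, window=0.05, seed=0):
    """Toy selection model (hypothetical parameters throughout).

    Each case has a 'merit' drawn uniformly from [0, 1]; the plaintiff
    wins if merit exceeds `threshold` (a stand-in for how pro-defendant
    the legal regime is).  Parties settle out of the data unless the
    merit is within `window` of the threshold, i.e. the outcome is
    genuinely hard to predict.
    """
    rng = random.Random(seed)
    wins = litigated = 0
    for _ in range(n):
        merit = rng.random()
        if abs(merit - threshold) < window:  # close call -> litigated
            litigated += 1
            wins += merit > threshold
    return wins / litigated

# A strongly pro-defendant regime (threshold 0.8) and a strongly
# pro-plaintiff regime (threshold 0.3) both produce observed plaintiff
# win rates near 50% among the cases that go to judgment.
print(observed_win_rate(0.8), observed_win_rate(0.3))
```

In this toy model the underlying law shifts dramatically between the two runs, yet the litigated-case win rate barely moves – which is why an 80% government success rate in appealed NEPA cases, on its own, cannot tell us whether NEPA claims are weak.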

I make this point with some personal regret – I too have spent a great deal of time collecting data on litigation rates (in the context of state environmental review challenges to housing projects).  I carefully tried to address the denominator problem by making sure I had the full universe of projects, and our team reported litigation statistics in various publications and reports.  But when we reported those statistics, we were generally careful to note that they did not necessarily show that the legal system was too generous to plaintiffs (or not generous enough).

This final point does not mean that litigation studies are pointless.  For instance, estimates of how long litigation takes and the delays it can produce may be really valuable – so long as you actually cover the full universe of cases (unlike in this study).  And identification of what claims are being raised in lawsuits can give an indication of what changes to law might reduce (or increase) litigation.

It is unfortunate that the report is so flawed – the questions it seeks to address are important ones.  The Breakthrough Institute has indicated they plan to do additional data collection, and if they do, I hope they use methods that will provide data that is useful to policymakers.



About Eric

Eric Biber is a specialist in conservation biology, land-use planning and public lands law. Biber brings technical and legal scholarship to the field of environmental law…
