Everywhere and Forever All at Once: PFAS and the Failures of Chemicals Regulation

Environmental law helped create a world awash in toxic chemicals. It’s time to think about how regulation can operate as a form of green industrial policy for chemicals.

This post was originally published on the Law & Political Economy Blog as “How Environmental Law Created a World Awash in Toxic Chemicals.” 

Earlier this spring, the Biden administration finalized two important rules targeting a small subset of so-called forever chemicals: one establishing drinking water standards for six such chemicals and the other designating two of the more prominent ones as hazardous substances under CERCLA (the Superfund Law). These chemicals, which are part of a much larger family of some 15,000 chemicals known as per- and polyfluoroalkyl substances (PFAS), are called forever chemicals because of their extreme environmental persistence. They are now widespread in America’s drinking water, showing up in about half of the country’s tap water, and ubiquitous in the broader environment, where they bioaccumulate in living organisms. As their name suggests, these chemicals last, well, forever—or at least until we clean them up, which takes enormous effort, not to mention time and money.

First manufactured in the 1940s, the forever chemicals were highly valued for their resistance to heat, oil, stains, grease, and water. For decades, the chemical industry has used them in a wide range of coatings and consumer products, including DuPont’s famous non-stick Teflon coating and 3M’s Scotchgard. Though manufacturers began to phase out two of the most widely used PFAS chemicals in the early 2000s, these chemicals are still produced for a handful of products and still appear in products imported from abroad. And, of course, there are literally thousands of other PFAS chemicals still being used in a wide range of products, with little or no testing for their potential toxicity.

As a result, PFAS chemicals are now present in the blood of an estimated 97% of people in the United States. And despite years of public pronouncements from the manufacturers that these chemicals were biologically inert and posed no health risk, forever chemicals have now been linked to a growing list of human health impacts, including various forms of cancer, reproductive harms, immunotoxicity, and neurodevelopmental problems (among others). Thanks to documents produced in litigation and the work of investigative journalists, we now know that two of the largest manufacturers, DuPont and 3M, have known about potential health problems for some fifty years, notwithstanding their claims to the contrary. While these specific instances of corporate malfeasance are hardly atypical in the world of toxic harms, the settlement dollars involving PFAS are quite large—more than $11 billion in 2023, a figure that is expected to grow in coming years.

To anyone paying attention, the message is clear: there is a huge toxic mess out there posing significant risks to public health and the environment. To his credit, EPA Administrator Michael Regan recognizes this and has prioritized the regulation of forever chemicals across multiple EPA programs since taking office.

But as EPA moves forward on various fronts to clean up widespread contamination created by more than half a century of PFAS production, we are left with a worrying question: how did we let this happen in the first place? Indeed, the PFAS disaster points once again to the failure of our laws regulating toxic chemicals, which were intended to avoid just these sorts of problems. In effect, because PFAS chemicals were already “in commerce” when the Toxic Substances Control Act (TSCA) was enacted in 1976, they (along with some 65,000 other chemicals then in commerce) were presumed to be safe and so were not subject to any sort of testing.

Much of the failure to regulate PFAS and other toxic chemicals is, I argue in a recent article and discuss in more detail below, symptomatic of a larger failure of environmental law over the past forty years: namely, its embrace of risk assessment as the dominant approach to environmental harms and its abandonment of earlier approaches based on precaution, endangerment, and a healthy respect for uncertainty.

Living in a Toxic World

The consequences of these failures are evident in the ubiquitous presence of toxic chemicals in the tissues of human beings all over the world and in the widespread contamination of terrestrial and marine environments. The rapidly increasing production and release of so-called novel entities (synthetic chemicals, pollutants, and heavy metals) are now pushing past planetary boundaries, damaging ecosystems and taking an enormous toll on human life and health.

On a global scale, the Lancet Commission on Pollution and Health estimates that pollution and toxic substances are now responsible for some nine million premature deaths a year, which is more than three times the number of deaths from AIDS, tuberculosis, and malaria combined, fifteen times the number of deaths from all wars and other forms of violence, and thirty percent more than total global deaths from COVID-19. And this number is almost surely an underestimate, given that we are still learning how damaging toxic substances can be. One recent study of the global health burden of lead contamination, for example, estimated that lead causes 5.5 million premature deaths per year—a six-fold increase over previous estimates. Research over the last several decades, moreover, has revealed that low-level exposures to a broad range of industrial chemicals, pesticides, and pollution are linked to various neurodevelopmental problems, immunotoxicity, endocrine system disruption, and reproductive harms (among others). Even on cancer, despite progress in reducing cancer deaths in the U.S. and other countries, the incidence of certain cancers, especially in children and young adults, continues to increase.

Exposures to pollution and toxics and the harms they cause are also radically unequal, as frontline communities and environmental justice advocates have been pointing out for decades. Most of the premature deaths associated with pollution and toxics today, for example, are in low- and middle-income countries, where the problems are getting worse, not better. In the United States, we now know, based on extensive surveys of chemical biomarker concentrations, that Black women and women of color have higher (often significantly higher) concentrations of various industrial chemicals, pesticides, and heavy metals in their blood. We know that people living next to Superfund sites have reduced life expectancy. And we know that despite significant progress in reducing the overall incidence of childhood lead poisoning, more than half a million children in America still have elevated levels of lead in their blood, including a disproportionately high number of Black children.

The Failures of Risk Assessment

There are many reasons for these failures. But a big part of the problem lies with the standard approach to risk assessment that has come to provide much of the foundation for our approach to understanding and regulating toxic chemicals, pollution, and hazardous waste. There is a long history here that I have investigated in a series of articles (see here, here, and here), which show how formal approaches to risk came to dominate environmental decision-making starting in the early 1980s and, in the process, displaced earlier, simpler approaches founded upon precaution, endangerment, and a healthy respect for uncertainty.

Although risk assessment has often been understood as a largely technical, scientific exercise that provides the basic facts needed for the more value-laden exercise of risk management (itself cast as an exercise in cost-benefit analysis), the history of risk assessment makes clear that it has operated first and foremost as a political technology intended to discipline agencies and constrain their ability to solve complex problems, rather than as a tool to generate useful information about the world. Indeed, from the beginning, risk assessment was pushed by industry as a way of ensuring that no regulation would proceed until we determined exactly how many workers or how many people might suffer a particular harm from a certain level of exposure. Through the advocacy of organizations such as the American Industrial Health Council, the Chemical Manufacturers Association, and the American Petroleum Institute, industry-funded scientists and lawyers commandeered the apparatus of fact-making that provides much of the basic infrastructure for regulatory science. This was a far more expansive and successful effort than simply working to manufacture doubt and uncertainty by questioning specific studies and funding alternative research. Indeed, much of the industry perspective was embraced and promoted by the science policy establishment, EPA, and the Supreme Court—all as part of a purportedly more rational and responsible approach to reforming regulation that moved into high gear during the 1980s.

But any honest evaluation of the practice of risk assessment over the past forty years would reveal an approach that has been unable to deliver on even the most basic metrics, as evidenced by reviews from the National Academy of Sciences (which was one of the original proponents of risk assessment) and the Government Accountability Office, among many others. Indeed, major individual risk assessment exercises have taken decades to complete, with many thousands of additional chemicals waiting in the queue. The dioxin cancer risk reassessment, for example, has been ongoing for more than thirty years, producing cancer risk estimates that vary by three orders of magnitude with no agreed criteria for how to achieve closure. Similar risk assessments for trichloroethylene and formaldehyde (among others) have also taken decades, with substantial variation in risk estimates depending on the models used. And these are some of the most data-rich and well-studied chemicals out there.

Risk assessment, in short, was never really intended to generate useful information for regulators. Rather, it was directed almost from the start at disciplining agencies and replacing expert judgment with a more formal, rule-governed rationality that saddled these agencies with impossible analytical demands, which in turn created seemingly endless opportunities for contestation and delay. From that vantage, risk assessment has been wildly successful. But from a public health perspective, it has failed in every way that matters.

Toxic Ignorance and the Limits of Reform

By putting the burden on the government to demonstrate significant risk of harm before regulating, risk assessment has allowed the ongoing production of new chemicals and new forms of pollution for decades—much of which has been tied to the petrochemical industry. Indeed, one can trace a direct line from the ongoing diversification strategies of oil and gas companies looking for new ways to make money from their massive reserves of fossil hydrocarbons to the exponential increase in the production of plastics and other chemicals. The numbers here are staggering.

Since 1950, global chemical production has increased 50-fold and is projected to triple again by 2050 (relative to 2010), with an increasing share of production shifting to so-called emerging economies. Production of plastics has also skyrocketed, with cumulative global production expected to triple by 2050. There are now an estimated 350,000 chemicals and chemical mixtures on the global market. Only a handful have been properly tested, and only for the health endpoints we know to look for.

Efforts to reform toxics regulation have typically pushed for better risk assessments, often tied to more pre-market testing of chemicals with the burden shifted to the manufacturers to demonstrate safety before they can sell their products—an approach that is more in line with that used for drugs (and to some extent pesticides). The EU, for example, has been working for almost two decades to implement a new regulatory framework along these lines, based on the simple rule that manufacturers seeking to sell chemicals on the European market must first produce a basic set of testing data for their chemicals: no data, no market. But even after twenty years of effort, basic safety information is still missing for the vast majority of chemicals sold in the EU.

In 2016, after many years of effort, the US Congress also enacted bipartisan amendments to the Toxic Substances Control Act, with the goal of fixing some of the most egregious problems with the statute. But only in the last year has EPA received any additional budget to implement the new provisions, and the agency is already behind on all of its major deadlines under the statute. More concerning is the fact that even if EPA were able to meet the risk assessment deadlines in the TSCA amendments, it would take an estimated 1500 years (!) to work through the backlog of existing chemicals.

There is something deeply irrational about allowing the ongoing production of novel chemical entities and their widespread release into the environment, especially those that are highly persistent and bioaccumulative, without any real understanding of how these chemicals will behave once they get out into the world. For decades, we have watched as various designer molecules are used in a wide range of products based on unsubstantiated claims of safety, only to find that these molecules end up causing harm, either in ways that some had warned us about or in ways that were entirely unanticipated. Tetraethyl lead, DDT, PCBs, CFCs—all of these and many more stand as cautionary lessons that counsel against ever accepting claims of safety from those who stand to profit from their production and use. The looming crises with forever chemicals and microplastics are just the latest examples.

Many of the harms associated with these chemicals are subtle and poorly understood. Some of them can take many years to develop and can even travel across generations. Most will surely be lost in the background mix of toxic insults, health setbacks, and cumulative stressors that shape our lives, albeit unevenly. And so we continue to allow industry to fill our world with toxic chemicals. Even as we work to end the use of fossil fuels as an energy source, we cannot stand by and watch as these same companies shift their production to plastics and petrochemicals. Decarbonization without detoxification is not much of a victory.

Rather than doubling down and trying to reform risk assessment, we should go back to simple, hazard-based triggers for regulation, founded on a healthy respect for uncertainty and leaning toward precaution. As new information becomes available, regulations can be adjusted. In all cases, though, we should be vigilant before we allow widespread release of persistent chemicals into the environment, and we should recognize that even though we may not know precisely how toxic substances might cause harm in the future, we do know from experience that many (even most) potentially harmful agents turn out to be more harmful than initially suspected.

Put another way, it is past time to think about how regulation, and the regulatory science that supports it, can operate as a form of green industrial policy for chemicals, rather than as a forum for seemingly interminable debates over the nature and magnitude of harms and the zero-sum logic of risk-benefit balancing. The goal in all of this should be to move fast and protect people, to use simple default rules to drive innovation toward sustainability and health, and to make clear that the actual harms inflicted on real people living real lives in real places have both a moral and a legal significance that has been largely forgotten in the formulas and balancing acts that we have allowed to colonize the practice of environmental protection.


Reader Comments

5 Replies to “Everywhere and Forever All at Once: PFAS and the Failures of Chemicals Regulation”

  1. Thank you! Your excellent work has been very important and influential on my thinking about these challenges.

  2. Great article! Spot on about the shortcomings of the current paradigm and the need for new approaches that are hazard-based.

  3. Good law review papers, especially on the historic aspects of this problem, but a regulatory history is not a full history, which leads to even worse conclusions than yours (see below). Regulators were challenged by the IBT lab scandal, and thus believe their fix (the OECD guideline test methods, which are insensitive to toxicity; US EPA’s are identical) is so reliable that they don’t worry about the ~3 low-dose vertebrate toxicity findings that academics publish every day: literally none of these ~25,000 (and accelerating) low-dose in vivo findings (or any other supporting findings) are even evaluated for their reliability in risk assessments (RAs) done for market access (pesticides, REACh, TSCA, etc.), even when laws require that all available data be evaluated.

    Your solution of hazard assessment alone would fail to make any difference, since that is the side of RA that is being ignored; and the 2009 EU pesticide regulation is the only one in the world with a hard mandate to prefer the precautionary principle, yet its RAs are shown to be just as bad.

    Yet all stakeholders (even the scientists producing this data) are so innocent of this phenomenal situation that they’ve ignored it for 12 years when informed (random-selection audits of REACh registrations, EU pesticide re-authorisations, and the recent TSCA evaluations of 10 high-volume chemicals prove this). Until people try to take this argument to regulators & their bosses, nothing can change.

    A HISTORY OF CHEMICAL RISK ASSESSMENT (all formatting lost)

    1840s – ’50s
    Organic chemistry starts to accelerate: learning the physical properties of atoms & organic molecules with their functional groups, i.e. the structure-activity relation (SAR). [Germanic people thrive in such detailed work, so they led this discipline and the petrochemical industry that it spawned; e.g., only Prussia in the 19th century made universities go to work for that industry, so grad students slogged through thousands of molecular structures to find useful properties (SAR).] See: Beer ’58

    1860s
    London wants a synthetic quinine to treat malaria in its vast empire; UK chemists already suspect its molecule is similar to the partially-characterised aniline molecule in coal tar (“creosote,” the feedstock for their experiments, available due to the distillation of coal for gas to light cities). Organic chemistry’s leading light, Prof. Dr. Hofmann of Prussia, was hired away to Imperial College. His British lab manager (Dr. Perkin) instead accidentally discovers the world’s first water-fast purple dye, sought for thousands of years. In short, the petrochemical industry began, & remains, dependent on the SAR technique that pretty much defines organic chemistry. See: Borzelleca ’94

    1880s
    This new petrochemical industry rapidly saturates this small dye market; so naturally it expands by applying its SAR business model to…

    1890s
    …botanicals, with thousands of years of quasi-SAR data (i.e. which botanicals/animal products have health benefits). To begin, heroin, cocaine, & aspirin were derived by applying SAR to existing botanical knowledge of opium plants, coca plants, and willow bark. 1st crucial historical event: moving from dyes into the synthetic pharmaceutical business meant they now had to test safety, not just efficacy… See: Borzelleca ’94

    1905
    …accordingly, the petrochem industry invents the dose-ranging method, where the results from each previous toxicity test inform the dose for the next lower-dose/longer-exposure test. Crucially, this ensures that the chronic-exposure toxicity test’s “no effects” dose is unrealistically high (it really denotes the end of long-term poisoning that starts with higher acute doses). Ironically, two industry labs use this method to publish that their original (still popular today) feedstock molecule, aniline, is carcinogenic, in the first use of this new standard 2-yr. “chronic” toxicity test! See: Hutt ’97

    Early to mid 20th Cent.
    The petrochem industry to this day uses exactly this test protocol for all safety testing, as SAR forever yields tens of thousands of profitable molecules. [Note: its core synthetic pharma sector is why the petrochem industry helped create, at most universities globally, ‘Pharmacology & Toxicology’ (or vice-versa) departments, to staff government agencies.]
    Separately & in parallel, zoology and other biology academics to this day hypothesize the effects of actual (low) exposure levels to synthetic petrochems, requiring the gradual development of varied low-dose toxicity test methods. Currently, they publish ~3 low-dose vertebrate findings every day (the nature of biochemistry is to be changed by ultra-close proximity to molecules it did not evolve with).

    1976 -1982
    SYNTHESIS: Industry’s tox tests become regulators’ (2nd crucial event)
    The whistleblower-exposed Industrial BioTest (IBT) & Craven scandals hit large labs supplying industry with the toxicity test findings used to authorize petrochems for market: hundreds of commercial petrochems had their safety findings falsified by the grossest fraud possible.
    Looking to restore a sheen of authority about safety, FDA at a conference hears how AU & NZ had to create their own lab standards (reagents, etc.) when Japan blockaded them from their western allies in WW2, naming them GLP. Just what USFDA needed: the rule they wrote after IBT/Craven finally made industry’s toxicity tests transparent to regulators, to the extent that the results, not just the methods, had to be replicable at another lab. Regulatory capture “averted”: EPA & FDA jointly promulgate GLP.

    OECD’s mission is to reduce regulatory trade barriers, e.g. health & safety regulations. Sector by sector, they look for chances to standardize rules to the lowest common international denominator possible, to increase trade. By the mid-1970s their Environment Directorate observed the US, Netherlands, Japan, and a couple of others passing laws for pre-market approval of petrochemicals (stillborn in the US under TSCA, and I suspect similar elsewhere), with the motive that a single chemical could require approval many times in many countries. OECD convened a committee to help create a global (OECD) regulation of standard pre-market petrochemical authorisation requirements. 3rd crucial historic fact: after the initial 1978 Stockholm meeting, the NGOs NRDC, EDF, and EEB told OECD they had better uses for their time! So only industry remained, to offer the OECD use of their toxicity test methods. The USA had just created GLP, so OECD included it in the new standard ‘Guideline’ test methods, adding a longed-for sheen of scientific rigor. An OECD Mutual Acceptance of Data (MAD) Directive was co-promulgated: regulators must order a new Guideline method for any new toxicity test that they require (i.e. “forcing” industry to use its own test methods!), and any member country must accept these test results in regulating the chemical in another OECD country. Finally, an ongoing OECD program gets non-OECD countries to adopt this unified pre-market authorisation system by bilateral treaties. See: Lee ’85; Viser 2016

    IMPLICATION: it’ll be a cold day in hell before regulators consider academe’s findings: it’s in their bones.

    Since it’s either 1) already the explicit law, 2) implicit in the logic of RA, or 3) in the formal guidelines of systematic review (SR) to evaluate all data, we must force regulators to evaluate academe’s findings. See the PRISMA-S SR guideline: it literally says an RA has already failed when it doesn’t evaluate all data (https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-020-01542-z).

    Beer JJ. 1958. Coal tar dye manufacture and the origins of the modern industrial research laboratory. Isis 49: 123–131.

    Borzelleca J. 1994. History of toxicology. In Principles & Methods of Toxicology, Hayes W (ed). Raven Press: NYC.

    Hutt JB. 1997. The historical development of animal toxicity testing. Student 605: 71–80.

    Lee CM. 1985. Scientific considerations in the scientific arena. In Aquatic Toxicology and Hazard Assessment, Bahner RC, Hansen DJ (eds). American Society for Testing Materials: Philadelphia, PA; 15–26.

    Viser. 2016. Personal communication, Jan. 2016. (He was Director of OECD’s Environment Directorate; he organized the 1978 meeting to create the Test Guidelines, and told me the NGOs were uninterested.)

  4. A re-write of my poor introduction above:

    Regulators’ authority was challenged by the IBT lab scandal, thus they believe their fix (the OECD guideline test methods, which are insensitive to toxicity; US EPA’s are identical) is so reliable that they will never care about the ~3 low-dose vertebrate toxicity findings that academics publish every day. Literally none of these ~25,000 (and accelerating) low-dose in vivo findings (or any other supporting findings) are ever evaluated for their reliability in RAs done for market access (pesticides, REACh, TSCA, etc.), even when laws require that all available data be evaluated.

    This solution of promoting hazard assessment alone (regardless of exposures, i.e. risk) would fail to make any difference, since that is the side of RA that is being ignored; the 2009 EU pesticide regulation is the only one in the world with a hard mandate to prefer the precautionary principle, and both REACh and the EU pesticide regulation already require hazard-only assessments (for what they think are the most risky chems). Yet these RAs are shown to be just as bad in ignoring the only realistic data, academe’s.

    Yet all stakeholders (even the scientists producing this data) are so innocent of this phenomenal situation that they’ve ignored it for 12 years when informed (random-selection audits of REACh registrations, EU pesticide re-authorisations, and the recent TSCA evaluations of 10 high-volume chemicals prove this). Until people try to take this argument to regulators & their bosses, nothing can change.


About William

William Boyd is Professor of Law at UCLA School of Law and Professor at the Institute of the Environment and Sustainability. He is the founding Director of the Labora…
