Climate Science and Law For Judges

Part One: Scientific Foundations of Climate Change

How Climate Science Works

Authors
Environmental Law Institute
Stanford University
Published

I. Introduction

Global warming is real, human-caused, and a destabilizing force of massive consequence to humanity and the planet. This chapter discusses the process by which scientists have established consensus on these fundamental scientific findings. 

Science textbooks often describe the process of doing science with an introductory section on the “Scientific Method.” A hypothesis leads to observations and analysis, against which the hypothesis is tested and revised. At the level of a single researcher or even a small research group, this simple scheme is a useful, basic way to think about how science works. 

But in a multidisciplinary field such as climate science that explains interconnected systems covering the whole planet and how those systems are changing over time, the process of science is more complex. Climate scientists draw upon a much larger tool kit to investigate and uncover facts of nature. Methods range from “fingerprinting” the effects of human activities on observed atmospheric temperature trends; to measurement of carbon exchanges or “fluxes” between the atmosphere, oceans, and land and how these are altered by human activities; to analyzing indicators of climate variations in Earth’s deeper history. Such methods allow for much more sophisticated approaches to understanding the richness of our planet’s climate and its interactions with society. 

Scientific attribution is only one factor in arguing a case in a court of law, but it is crucial to establishing causality where a relationship between climate change and particular consequences is alleged.1 (See Drawing the Causal Chain Module). Climate science findings are often couched in the language of probability, which also enables climate scientists to articulate their level of certainty. Climate is by definition the average of weather over time and thus is inherently statistical. Statements about changes in averages, extremes, and the likelihood of events are expressed as probabilities, and in assessment reports they are often accompanied by quantitative evaluations of confidence. Attribution of the impacts of climate change and their consequences to emissions of greenhouse gases is likewise expressed probabilistically.

Establishing scientific consensus (see Box 1), which is the gold standard for scientific findings, goes well beyond what the courts have required for scientific testimony to be deemed reliable, but it is a central approach in the enterprise of climate science. Consensus means that virtually the entire scientific community has come to accept the validity of a fact. While “general acceptance” enshrined in the so-called Frye legal standard for reliability of court testimony is still applicable in many state courts, the more articulated Daubert standard of the federal (and some state) courts calls for the evidence to be grounded in scientific methods and procedures. The courts have noted that these are not clear-cut but have characteristics signified by “indicative factors,” among which are indeed general acceptance, but also peer review, testability, and others.2 Thus, scientific consensus exceeds the legal standards of reliability, and when it is achieved it offers an excellent mark of reliability.

  • 1Michael D. Mastrandrea et al., IPCC, Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties (2010), https://www.ipcc.ch/site/assets/uploads/2017/08/AR5_Uncertainty_Guidance_Note.pdf.
  • 2See the three main cases that articulate the standard: Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993); Gen. Elec. Co. v. Joiner, 522 U.S. 136 (1997); Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999); see also Margaret A. Berger, The Admissibility of Expert Testimony, in Fed. Judicial Ctr. & Nat’l Rsch. Council, Reference Manual on Scientific Evidence 12 (3d ed. 2011).

Box 1. Scientific Consensus

Scientific consensus emerges through the combined force of formal and informal institutions of science, including peer-reviewed journal articles, broader reviews of scientific research, and comprehensive assessments of the state of knowledge on key issues, academic colloquia, and even hallway discussions. Science is undergirded by rigorous and objective procedures and standards of evaluation, but in the end its conclusions must be validated by the scientific community.

When the scientific community deems scientific findings to be sound—using these procedures and their own expert judgment—the wider public, and judges, have reason to view them with confidence. Processes developed by the Intergovernmental Panel on Climate Change (IPCC) for its assessment reports and the U.S. National Academies of Sciences for their consensus reports are prime examples of this way of coming to consensus, and thus to scientific fact.

This module reviews the methods by which climate science arrives at conclusions, including the gathering of evidence and the construction of mathematical models. It discusses some key differences between procedures in science and the law, and it shows that climate science is built on long-established scientific disciplines, with continuous improvement in understanding of the climate system through applying them. It then explains how several kinds of scientific misconceptions persist, a phenomenon that is not exclusive to the science of climate, and it finally considers how climate scientists reach consensus views through scientific institutions.

II. Scientific Inquiry Compared to Legal Fact-Finding

Reason and evidence are the essential ingredients of finding the truth, in both science and law. At the most basic level, science and law share the approaches of weighing evidence and developing rational and rigorous argument about it. One expert member of a National Academy of Sciences committee of judges and scientists meeting in 2021 to consider scientific issues in legal proceedings observed that law is an applied science and science is an important partner to the rule of law. In law, as in science, the point is to separate emotion from reason. Emotion can lead humans to think things that aren’t true, which in turn may lead to faulty decisions.1

It is an article of faith in law as in science that there is one objective reality of the observable world. That such a reality exists independent of the observer is the principle underpinning our faith that scientists and jurors can come to truthful conclusions. Yet in their practices, science and legal proceedings diverge in the way they find fact. While U.S. courts seek to find the underlying truth of the matter to be decided, in the adversarial process of this country they are also admonished to weigh only the evidence that the parties present. Under the usual rules of evidence, if neither party to a case submits a relevant piece of evidence, even one that may be crucial to the outcome of the case, the court cannot consider it. Law explicitly seeks truth and finality, in which a dispute between the parties is resolved by the court in reasonable time and without the likelihood, except in rare circumstances, of revisiting the matter for new evidence. In contrast, the process of seeking the objective reality of the natural world is open-ended and in truth never-ending. Science is the perpetual pursuit of a deeper and more precise understanding of nature through reasoning, discovery, and the application of verifiable facts. Science is not constrained by a duty to weigh the preponderance of evidence or to judge beyond a reasonable doubt in order to come to a decision that is time-bound. Rather, it aspires to the standard of accurately describing and explaining the phenomena, however long that takes and however impossible that may be to achieve. That, arguably, no scientific explanation meets this perfect standard does not lower the aspiration. As the philosopher of science Karl Popper argued,2 science moves forward through the replacement of imperfect explanations by ones with greater “verisimilitude,” that is, greater approximation to the truth.3

In contrast, in his Introduction to the Third Edition of the Reference Manual on Scientific Evidence, U.S. Supreme Court Justice Stephen Breyer regarded the legal process as a pragmatic one. “The search is not a search for scientific precision. We cannot hope to investigate all the subtleties that characterize good scientific work.” “The law must seek decisions that fall within the boundaries of scientifically sound knowledge,” but it seeks something else—a decision—and one that is just, within the process of law, and different from acquiring knowledge for its own sake.4

Science values qualities that instill confidence that its explanations approximate reality. Three such qualities offer tests of the robustness of scientific knowledge. First and simplest is that the theory can explain a wide range of observations and account for many phenomena. So, for example, when plate tectonics emerged as a credible explanation of geological processes in the last third of the 20th century, one of the reasons for its rapid acceptance after decades of skepticism was that it neatly explained several apparently independent sets of evidence, including the configurations of the continents, trenches in the oceans, relative movement of large landmasses, similar rock formations and fossils in widely separate locations, and more. It provided a powerful explanatory framework for understanding what geologists had observed in many places on the ground over several centuries.5

But for a theory to stand up to skeptical scrutiny, it must offer more than just explanation. To gain acceptance, a scientific theory must also be able to predict outcomes under given conditions in a consistent and robust way. An example of the predictive value of theory is the routine use of Einstein’s two Theories of Relativity (Special and General) in the Global Positioning System (GPS) used to find location on mobile phones. Because GPS satellites move at high speed, about 14,000 kilometers per hour, and because they travel in a high Earth orbit at an altitude of about 20,000 kilometers, the onboard clocks whose signals GPS receivers use to compute distances must be adjusted for small variances relative to their counterpart clocks on Earth.

The Special Theory of Relativity precisely predicts how much a satellite-based clock will appear from Earth to tick slower than an identical clock on Earth because of its speed of movement, while the General Theory of Relativity equally precisely predicts how much the satellite-based clock will appear to tick faster than the Earth-based clock because of the Earth’s weaker gravity where the satellite orbits. The two effects must be added, and the resulting total correction predicted by the theories is a small but crucial adjustment of signal timing to get the GPS position right. The watchword here is “precisely,” because if the mathematical theories were wrong even by a small deviation, reported locations would rapidly depart from the actual and the errors would render the technology useless. Einstein’s two Relativity theories are validated for their predictability millions of times every day.6  
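
For readers who want to see the size of these corrections, the short calculation below is a rough back-of-the-envelope sketch of our own, not an official GPS algorithm; it uses approximate values for the orbit and standard physical constants, and it reproduces the well-known result of roughly minus 7 and plus 46 microseconds per day, for a net drift of about 38 microseconds per day.

```python
# Rough, illustrative calculation only. Assumed values: orbital radius
# ~26,560 km (about 20,200 km altitude) and standard physical constants.
import math

C = 2.998e8          # speed of light, m/s
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean radius of the Earth, m
R_ORBIT = 2.656e7    # GPS orbital radius, m (assumed)
SECONDS_PER_DAY = 86400.0

v_orbit = math.sqrt(GM / R_ORBIT)            # circular orbital speed, ~3.9 km/s

# Special relativity: a moving clock ticks slower, by roughly v^2 / (2 c^2).
sr_shift = -(v_orbit ** 2) / (2 * C ** 2)

# General relativity: a clock higher in Earth's gravity ticks faster, by
# roughly (GM / c^2) * (1/R_earth - 1/R_orbit).
gr_shift = (GM / C ** 2) * (1 / R_EARTH - 1 / R_ORBIT)

print(f"special relativity: {sr_shift * SECONDS_PER_DAY * 1e6:+.1f} microseconds per day")
print(f"general relativity: {gr_shift * SECONDS_PER_DAY * 1e6:+.1f} microseconds per day")
print(f"net clock drift:    {(sr_shift + gr_shift) * SECONDS_PER_DAY * 1e6:+.1f} microseconds per day")
# Left uncorrected, a drift of ~38 microseconds per day would let position
# errors grow by roughly 10 kilometers per day.
```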

A corollary property, and a third attribute of scientific explanation, is that a finding must be reproducible and replicable for it to be considered reliably established. These qualities are what the National Academies of Sciences, Engineering, and Medicine call “hallmarks of good science.” In 2019, the National Academies undertook a consensus study performed by a blue-ribbon panel of leading scientists to address a growing concern that many scientific results, especially in medical and social sciences, failed one or both of these tests. As the NAS defined them, reproducibility means “obtaining consistent results using the same input data, computational steps, methods, and conditions of analysis.” Replicability takes a wider view. It means “obtaining consistent results across studies aimed at answering the same scientific question” where the data are unique to each study.

The NAS report posed the problem it was addressing: “When a scientific effort fails to independently confirm the computations or results of a previous study, some fear that it may be a symptom of a lack of rigor in science, while others argue that such an observed inconsistency can be an important precursor to new discovery.” It asserted that reproducibility should be expected for computational systems (like satellite time adjustments for Relativity), while lack of replicability may indicate hidden factors or incomplete understanding of complex systems of experiment and measurement rather than error.7 It follows that a high standard for answering a scientific question is that the answer meets both criteria, but that failure to meet the replicability criterion may not invalidate the result.
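
To make the distinction concrete, the toy example below (our own illustration, not drawn from the NAS report) simulates a "study" that estimates an average from noisy measurements: rerunning the identical computation on the identical data is reproducibility, while drawing fresh, independent data and getting a statistically consistent answer is replicability.

```python
# Toy illustration only: the "study," data, and numbers are invented.
import random
import statistics

def run_study(seed, true_mean=1.0, n=500):
    """Simulate one study: draw n noisy measurements and report their mean."""
    rng = random.Random(seed)
    sample = [rng.gauss(true_mean, 2.0) for _ in range(n)]
    return statistics.mean(sample)

# Reproducibility: same data, same code, same steps -> the identical result.
print(run_study(seed=42) == run_study(seed=42))        # True

# Replicability: new, independently collected data addressing the same
# question -> similar, but not identical, results.
print(round(run_study(seed=1), 3), round(run_study(seed=2), 3), round(run_study(seed=3), 3))
```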

III. The Field of Climate Science Is Based on Established Scientific Disciplines, Methods, and Approaches

In complex systems, there are always limits on predictability, but those limits and the consequences of specific future scenarios can be determined. While we may not know exactly how the Earth system will react to a given future trajectory of greenhouse gas emissions, we can know quite well what effects global warming will have, and quite a bit about how fast and how far those effects will go under a given emissions scenario, within quantifiable uncertainty bands. Thus science can analyze the implications of decisions taken now for future outcomes.

One cannot know the future state of the Earth system of air, ocean, land, and ice that determines the climate system with the same precision as orbital mechanics provides in GPS. First and foremost, such future outcomes are dependent on human decisions and actions that collectively determine the trajectory of global greenhouse gas emissions and societal development. Second, there are inherent uncertainties in measuring the feedbacks that determine the amount of heating of the Earth system that arises from a given amount of greenhouse gas emissions, the amount of those greenhouse gases absorbed from the atmosphere by the oceans and vegetation, the melt rate of major ice sheets in Greenland and Antarctica, and many more variables at the global scale.

While human decisions and actions cannot be predicted with precision (look no further than the events of the past several years), scientists can instead produce projections of future outcomes under specific scenarios of future greenhouse gas emissions and other human-controlled factors like restoration of forests to increase carbon dioxide (CO2) absorption. Comparing such scenarios also allows scientists to evaluate the consequences of different trajectories.

What enables this analysis is an extensive and growing understanding of the mechanisms and dynamics governing the Earth system. The foundational elements of climate change, such as global average temperature increase in response to rising greenhouse gas concentrations, are grounded in basic physical processes and are not in doubt. To be sure, the combined effects of heat and carbon exchange and the physical movements of fresh and salt water within and among different mediums (land, atmosphere, oceans) are complex. But even here, the theories of thermo- and hydrodynamics, of phase change, and basic geochemical processes that underpin them have been well understood for hundreds of years, and their mechanisms are fully explainable in principle.

Yet their very complexity introduces a cascade of uncertainties that limits scientists’ ability to precisely chart their evolution. And the interactions of still more complex phenomena such as deep ocean currents, Antarctic ice interactions with the Southern Ocean, and disturbances of the jet stream, are not fully predictable for the same reason. At this point, exact solutions for the behavior of the Earth system become impossible. Because of the many possible outcomes from a given set of initial conditions of the system, the approach must shift from exact analysis to weighing probabilities of particular outcomes. This “probabilistic” picture is constructed using mathematical models.

In this approach, computer models apply physical theory to generate simulations of possible outcomes. That theory is represented by dynamical equations of large-scale interactions, and the outcomes are determined by initial conditions that are established by real-world observations. We will discuss computer modelling in greater depth in Section VII.
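
As a deliberately oversimplified illustration of that approach, the sketch below runs a one-equation "toy" energy-balance model many times, varying an uncertain feedback parameter so that a single scenario yields a range of probabilistic outcomes rather than one number. The scenario (a forcing ramp to 4 watts per square meter over a century) and the parameter values are illustrative assumptions of ours, not numbers from any real model or assessment.

```python
# Toy energy-balance model: dT/dt = (forcing - feedback * T) / heat_capacity.
# All numbers are illustrative assumptions; real Earth-system models resolve
# the atmosphere, oceans, land, and ice in three dimensions.
import random

SECONDS_PER_YEAR = 3.15e7
HEAT_CAPACITY = 4.2e8            # J per m^2 per K, roughly an ocean mixed layer

def simulate(feedback, years=100, forcing_end=4.0):
    """Integrate the toy model with a yearly time step; return final warming (K)."""
    temp = 0.0                                       # warming relative to the start
    for year in range(years):
        forcing = forcing_end * year / years         # simple emissions-driven ramp
        temp += (forcing - feedback * temp) / HEAT_CAPACITY * SECONDS_PER_YEAR
    return temp

# An "ensemble": vary the uncertain feedback parameter (W per m^2 per K) and
# collect the spread of outcomes, a probabilistic rather than single answer.
rng = random.Random(0)
ensemble = sorted(simulate(rng.uniform(0.9, 1.5)) for _ in range(1000))
print(f"median warming after a century: {ensemble[500]:.1f} C")
print(f"5th-95th percentile range:      {ensemble[50]:.1f} to {ensemble[950]:.1f} C")
```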

The term “theory” has meanings in science that are very different from those in common usage. A scientific theory is not a speculation or an assertion untethered from facts—it is rather a rigorous articulation and quantification of how a system works, which must stand up to comparison to the observed behavior of the world. It possesses the three attributes described above—explanatory value, predictability, and reproducibility. And it has at least one additional quality—such specificity and rigor that it may be disproved. That is, as Karl Popper asserted and the Supreme Court underscored, it is said to be falsifiable.8 In climate science, the main theories involved are geophysical, and they have been constructed from long-accepted theories. Together they represent a well-established body of knowledge of the dynamics of the climate that is tested in the crucible of scientific analysis and review (see Box 2).

  • 1See Nat’l Academies of Sci., Eng’g & Medicine, Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of the Reference Manual on Scientific Evidence—Virtual Workshop (Feb. 24-25, 2021), https://www.nationalacademies.org/event/02-24-2021/emerging-areas-of-science-engineering-and-medicine-for-the-courts-identifying-chapters-for-a-fourth-edition-of-the-reference-manual-on-scientific-evidence-virtual-workshop.
  • 2Karl R. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge 37 (5th ed. 1989) (“[T]he criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.”); see also Daubert, 509 U.S. at 593 (citing the Popper quote).
  • 3David Goodstein, How Science Works, in Fed. Judicial Ctr. & Nat’l Rsch. Council, Reference Manual on Scientific Evidence 40-41 (3d ed. 2011).
  • 4Stephen Breyer, Introduction to Fed. Judicial Ctr. & Nat’l Rsch. Council, Reference Manual on Scientific Evidence (3d ed. 2011).
  • 5See, e.g., Plate Tectonics: An Insider’s History of the Modern Theory of the Earth (Naomi Oreskes ed., Westview Press 2003) (2001).
  • 6See Eric Lander, Talk at the National Math Festival Gala Dinner at the Library of Congress (Apr. 16, 2015) (transcript available at the Mathematical Science Research Institute website); Richard W. Pogge, Real-World Relativity: The GPS Navigation System, The Ohio St. U. Astronomy Dep’t (last updated Mar. 11, 2017), https://www.astronomy.ohio-state.edu/pogge.1/Ast162/Unit5/gps.html.
  • 7Nat’l Academies of Sciences, Eng’g & Medicine, Reproducibility and Replicability in Science (2019).
  • 8See supra note 2.

Box 2. Observational Data

Such knowledge of the Earth system derives as well from an awesome increase in the power of observation brought about by revolutionary new instrumentation developed in the past few decades. Imaging, gravity measuring, and positioning satellites; tidal gauges; oceanographic floats and autonomous undersea vehicles; transit surveys by commercial and research vessels; fixed atmospheric and oceanographic monitoring stations; and many more platforms and remote sensors have generated an explosion of accurate and constantly accumulating observational data, which has been matched by an equally extraordinary increase in capabilities of computers to process it (see Figure 1). “Observation” has taken on a new meaning with big data.

Four maps showing the improvement of climate model resolution from the 1970s to the 2000s, labelled FAR, SAR, TAR, and AR4; the first map is composed of big blocks of color and the last of much smaller blocks.

Figure 1. Illustration of improvement of climate model resolution from the 1970s to 2000s. Source: Nat’l Rsch. Council, A National Strategy for Advancing Climate Modeling 64 (2012).

Now climate scientists can precisely track changes in the salinity of the oceans, variations of the jet stream in space and time, growth and retreat of ice at the poles, and many more global, regional, and even local dynamics. Thus, they are far better equipped to set initial conditions for theoretical models of the Earth system. And the models themselves yield higher quality simulations of the Earth’s climate as great strides in computer processing allow higher and higher spatial resolutions of climate variables such as temperature on global and downscaled regional grids.

It is through these models that scientists experiment on the climate system. By constructing models of the Earth system, scientists may create alternative, “counterfactual” worlds whose responses to changes in inputs can be gauged without actually perturbing our one and only home.

We saw in the spring of 2020 an example of what dramatic changes can result from planetary-scale alteration of global conditions. As people massively curtailed their use of vehicles and major economies slashed their energy consumption in response to the Covid-19 pandemic, humanity undertook a forced “natural experiment” whose result was plain to the eye: The air cleared, sunsets shone brighter, skies turned deep blue, and distant vistas came into view for the first time in decades. For a moment, the counterfactual world of clean air became the real one. Under normal circumstances, such a major perturbation of the actual Earth system would have been unthinkable, but a well-founded global climate model simulation including aerosol effects on the scattering and transmission of light would have revealed the same effects. Ironically, the unprecedented conditions of a pandemic lockdown underscored the importance and power of the method of climate modelling in normal times.

IV. Climate Science Has Seen Continual Improvement, With Caveats

Climate science is a process of increasing clarity and depth of understanding as well as precision of measurement. But even in this course of normal science there was not always a simple progression toward better understanding. One relevant example of discontinuity arose in the question of the likelihood of a catastrophic rise in sea level from climate change. Princeton’s Michael Oppenheimer, Harvard’s Naomi Oreskes, and their colleagues have carefully documented the evolution of understanding of the contribution of the West Antarctic Ice Sheet to estimates of projected sea-level rise from the late 1970s to 2014.1 The rate of melting is an enormously important factor in those estimates. Over most of this period, that rate was highly uncertain and sometimes completely disregarded.

According to their account, in a set of reports of ad hoc workshops, early official climate assessments, and the large-scale international assessments by the IPCC, leading scientists disagreed about when the sheet might disintegrate (and fall or melt into the sea) under expected conditions of emissions-driven climate change. The prevailing view in the early 1980s held that disintegration would be highly unlikely for at least two centuries. But a few scientists, including the highly respected oceanographer Roger Revelle, had argued for a substantial risk of this happening earlier. The question lingered for nearly three decades.

For many components of sea-level rise, including melting of mountain glaciers and thermal expansion of ocean water, science can provide a central estimate with well-characterized uncertainties more-or-less evenly distributed above and below. For the West Antarctic ice sheet, though, there is a likelihood of threshold behavior, with the threshold-crossing difficult to project but highly important for the future.2 Part of the problem was a lack of good measurements of what in fact was happening to the Antarctic ice. Equally vexing was the imprecision of early computer models.

Without consensus among the scientists, IPCC reports factored in no projected sea-level rise from Antarctic ice disintegration. But in 2014, as Oppenheimer and his co-authors recounted, “Based on the outcomes of improved modeling and additional observations of ice sheets and sea-level, [the IPCC] projected the contribution of Antarctic ice flow to twenty-first-century sea level rise for the first time.”3 The total projected rise jumped by a startling 60%. The authors’ point centers on the conservatism of the consensus process of the climate science community toward “erring on the side of least drama”—viewed as a critical requirement for building credibility with policymakers—which led to a gross underestimation of the rise. An increase in observed data and continuous improvement of the models added sufficient confidence to the scientists’ expert judgment to reach a tipping point: incorporating, for the first time, a contribution from Antarctic ice melting. A point to be made for judges is that at the grossest level the uncertainty in results of climate models is more about timing and impacts than about the underlying causal processes. In legal terms, it goes to the size of damages and not to the theory of the case.

Iterative improvements in the science led to increases in confidence in the assessments of the human contribution to climate change, sometimes with extraordinary implications as in the example above. As shown in the figure below, IPCC climate science reports have evolved in barely three decades from uncertain to “indisputable” that excess greenhouse gases from human activity are the principal cause of warming the Earth.4 Because of the speed of this change in knowledge, it is crucial for judges, when weighing the reliability of a particular climate scientist’s views, to know whether those views rely upon recent scientific findings.

  • 1Michael Oppenheimer et al., Discerning Experts: The Practices of Scientific Assessment for Environmental Policy 127-69 (2019).
  • 2Nat’l Acad. of Sci., Eng’g and Med., Abrupt Impacts of Climate Change: Anticipating Surprises (2013).
  • 3Id., at 164.
  • 4See IPCC, Climate Change: The IPCC 1990 and 1992 Assessments (1990); IPCC, AR2: The Science of Climate Change (1995); IPCC, Third Assessment Report (2001); IPCC, AR4 Climate Change 2007: Synthesis Report (2007); IPCC, AR5 Synthesis Report: Climate Change 2014 (2014); IPCC, AR6 Climate Change 2021: The Physical Science Basis (2021).
Timeline with IPCC climate science report covers, with captions describing each cover.

Figure 2. Source: Adapted from Ben Santer, Lawrence Livermore Nat’l Lab’y, NASEM Workshop on Evidence for the Courts: Emerging Issues in Climate Science 2 (2021).

Gary Yohe of Wesleyan University, a leading climate economist, wrote the module in this curriculum on climate risk and economic costs associated with climate change. In it, he points out that the climate science community, in view of large and compounded scientific uncertainties, has recommended an iterative risk-management approach to deciding where and how much to invest in climate mitigation and adaptation. Because of very large uncertainties in early climate impact assessments, risks could not be quantified precisely and new data on those impacts were expected to improve estimation of those risks over time. But given the potential for catastrophic and irreversible impacts, those uncertainties were not viewed as a reason to delay action. Confidence grew as both evidence and agreement steadily increased, strengthening the rationale for investment in specific actions. The implications of this approach are crucial for making choices for climate action.

Often greater understanding comes from new ways to collect and visualize data. Consider, for example, what former Lawrence Livermore National Laboratory climate scientist Benjamin Santer calls “climate fingerprints”—patterns of change in temperature or other variables that are identifiers of the effects of human activities on climate. Scientists can draw these patterns through statistical analysis and application of climate models. If observed global heating were from an increase in solar intensity (as some have argued in the past), the laws of radiative transfer would require the temperature of the upper atmosphere to rise more than that of the lower atmosphere—the reverse of what Santer’s fingerprint shows.1 Patterns such as those shown in the figure below both help to validate the theory that human activities are causing the heating and, as the title declares, also serve to eliminate alternative explanations.
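
The sketch below illustrates the underlying logic only, using two invented "patterns" of just two numbers each (a lower-atmosphere trend and an upper-atmosphere trend): observations are expressed as a best-fit combination of candidate fingerprints, and the fitted weights indicate which candidate actually carries the signal. Real detection-and-attribution studies work with full spatial fields from models and observations and with careful estimates of natural variability.

```python
# Illustration only: the patterns and "observations" below are invented.
import numpy as np

# A crude two-level "pattern": [lower-atmosphere trend, upper-atmosphere trend].
greenhouse_fingerprint = np.array([+1.0, -1.0])   # warms below, cools aloft
solar_fingerprint = np.array([+1.0, +1.0])        # a brighter sun warms both

observed = np.array([+0.9, -0.8])                 # hypothetical observed trends

# Least-squares scaling factors: how much of each fingerprint best matches
# the observations?
patterns = np.column_stack([greenhouse_fingerprint, solar_fingerprint])
scaling, *_ = np.linalg.lstsq(patterns, observed, rcond=None)
print(f"greenhouse weight: {scaling[0]:+.2f}, solar weight: {scaling[1]:+.2f}")
# A large greenhouse weight and a near-zero solar weight say that the observed
# pattern looks like the greenhouse fingerprint, not the solar one.
```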

Climate science, like other scientific endeavors, relies upon the active interplay of theory and observation. Theory suggests what to observe and measure, while the results of these measurements play back into validating (or falsifying) the theory. And as noted above, the theory itself must stand up to the test of its predictive ability. Does the theory predict phenomena that have not been observed so far? And if its predictions deviate from observations, can the theory be revised plausibly to account for the observed behavior, or must the theorists start over?

  • 1Ben Santer, Lawrence Livermore Nat’l Lab’y, NASEM Workshop on Evidence for the Courts: Emerging Issues in Climate Science 3-7 (2021).
Graph of satellite data showing a temperature fingerprint that is inconsistent with natural causes.

If surface warming were from the sun, the altitude variation of red to blue would be reversed.

Figure 3. Source: Adapted from Ben Santer, Lawrence Livermore Nat’l Lab’y, NASEM Workshop on Evidence for the Courts: Emerging Issues in Climate Science 5 (2021).

Answers to such questions may hinge not so much on conceptual understanding of the problem as on weighting of a critical factor. So, returning to the case of the West Antarctic Ice Sheet, for example, a significant, recurring question was whether the total mass of ice in Antarctica was declining or increasing. In 1980 and over more than two decades that followed, many researchers thought snowfall in the interior would likely exceed the loss of ice through melting and so result in a net gain. But in this period neither theoretical understanding nor direct observations could establish definitively which process was stronger—ice formation or depletion.

By 2007, new data and new calculations were pointing against the long-accepted view of a stable or growing ice mass.1

While theory and observations had failed to offer a definitive statement about the net contribution of melting ice to sea-level rise, that failure drove a new research agenda—a new way of observing mass displacement via gravity-measuring satellites. Indeed, such satellites had been conceived in part to address the very question of polar mass displacement.

Thus an emerging conviction of net loss of ice mass became firmly established in the first decades of the 21st century with measurements of mass displacement from Antarctica and Greenland, the two largest ice sheets on the planet, derived from an elegant system of two gravity-measuring satellites called GRACE.2  

With this new information, experts could more precisely model the melting of the ice sheet and so be far more certain of its substantial ongoing contribution to sea-level rise. The consensus estimate by the IPCC then jumped as the former status quo view gave way to new evidence. Through this iterative process—the interplay of theory and observations and the opening of a new research path to provide independent measurements—a picture emerged that firmly established the sign of the contribution of Antarctic polar ice.

V. Method: Independent Lines of Evidence

How did climate scientists come to their understanding that climate change is real and human-caused? Science values separate lines of evidence that independently support a theory. Agreement of independent lines gives investigators confidence that they are uncovering an objective reality, independent of the way (or the group from which) the information was gathered.

One line of evidence in climate science is from geological records. While climate changes can be inferred from geological evidence reaching as far back as hundreds of millions of years, more recent evidence such as lake sediment cores and ice cores offers strong indirect (“proxy”) measures of atmospheric carbon dioxide (CO2) and temperature. Cores extracted from ice sheets and glaciers near the poles, the oldest of which reach back about 800,000 years, reveal patterns of natural fluctuation of both atmospheric CO2 and temperature, which correlate closely to each other. They also establish a range over which atmospheric concentrations of CO2 and other greenhouse gases varied in the extended period prior to changes due to human activities.3

Line graph comparing levels of atmospheric carbon dioxide and Antarctic temperature.

Figure 4. Source: Nat’l Acad. of Sci. & the Royal Soc’y, Climate Change: Evidence & Causes Update 2020 10 (2020) (based on figure by Jeremy Shakun, with data from J. Jouzel et al., Orbital and Millennial Antarctic Climate Variability Over the Past 800,000 Years, 317 Sci. 793 (2007) and Dieter Lüthi et al., High-Resolution Carbon Dioxide Concentration Record 650,000-800,000 Years Before Present, 453 Nature 379 (2008)).

Figure 5. Graph showing changes in carbon dioxide over time. Source: Benjamin Strauss, The Carbon Skyscraper, Climate Central (Jan. 13, 2021), https://www.climatecentral.org/report/the-carbon-skyscraper.

The historical period of the last 170 years began with the introduction of intensive burning of fossil fuels at the start of the Industrial Revolution. Since 1850, the CO2 concentration in the atmosphere has shot up from about 280 to over 400 parts per million (ppm)—a rate of increase unprecedented in the last 800,000 years, as shown on the “Carbon Skyscraper” graph above (see Figure 5)—and far above the previous range of natural variation.1

Since 1958 when the Mauna Loa Observatory in Hawaii began measuring atmospheric CO2 (see Figure 6), concentrations have increased about 100 ppm or about 32%. These numbers are known to a high degree of accuracy with modern instrumentation and direct measurement techniques. They agree with the geological evidence of CO2 concentrations in that they connect smoothly to the older ice core data.2
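
The arithmetic behind those figures is simple; the rounded values below assume a starting concentration of roughly 316 ppm when the Mauna Loa record began in 1958 and roughly 417 ppm as of 2022.

```python
# Rounded, approximate values for illustration.
co2_1958 = 316.0   # ppm, approximate start of the Mauna Loa record
co2_2022 = 417.0   # ppm, approximate value in 2022

increase = co2_2022 - co2_1958
percent = 100.0 * increase / co2_1958
print(f"increase since 1958: about {increase:.0f} ppm, or about {percent:.0f}%")
```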

Figure 6. Graph showing average carbon dioxide in the atmosphere at the Mauna Loa Observatory, Hawaii. Source: NOAA, Global Monitoring Laboratory, https://gml.noaa.gov/ccgg/trends/ (data current as of September 5, 2022).

Thus, a second line of scientific evidence is the direct measurement of greenhouse gases in the atmosphere (see Figure 6). This may be compared to the observed increase in the average temperature of the Earth’s surface over the same period. In fact, since the Industrial Revolution began to accelerate emissions of CO2 from burning fossil fuels, average global temperatures have risen about 1.2 degrees Celsius, as shown in the figure below.

Figure 7. Graph showing global average temperature from 1850-2020. Source: Berkeley Earth.

As both scientists and litigators are sure to point out, however, correlation does not prove causation. If we are to believe that increased CO2 concentrations are the actual cause of increased atmospheric heating, we need to understand the mechanism of that heating—what is driving the temperature up. That understanding derives from the laws of thermo- and hydrodynamics and from well-understood heat-trapping properties of greenhouse gas molecules.
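
One widely used expression of those heat-trapping properties, offered here only as an illustration and not as this module's own analysis, approximates the extra energy trapped ("radiative forcing") from CO2 as about 5.35 watts per square meter times the natural logarithm of the concentration ratio. The concentrations plugged in below are approximate.

```python
# Illustrative use of a standard simplified approximation for CO2 forcing.
import math

def co2_forcing(co2_ppm, baseline_ppm=280.0):
    """Approximate radiative forcing from CO2, in watts per square meter."""
    return 5.35 * math.log(co2_ppm / baseline_ppm)

print(f"forcing at ~315 ppm (about 1958): {co2_forcing(315):.1f} W per square meter")
print(f"forcing at ~417 ppm (about 2022): {co2_forcing(417):.1f} W per square meter")
# Roughly 0.6 and 2.1 watts of extra trapped energy, applied continuously over
# every square meter of the planet's surface.
```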

To systematically investigate the effects of these mechanisms on the Earth system, climate science has developed a third line of evidence: climate models. Models are numerical representations of the complex reality of the Earth system (as modelers are fond of saying). We have mentioned they are constructed using basic equations of motion, heat transfer, and fluid flow that have been known for about 200 years. These equations are “dynamical”—they describe how the forces in the system drive movement of matter and energy that accounts for the evolution of the climate. These equations thus represent physical understanding of the drivers of warming. Their success in representing reality, and predicting the future, provides evidence of the causal connection of the drivers to atmospheric heating.

With ever more refinement, climate models now show quite unambiguously the different temperature trajectories of the real world with greenhouse gas emissions and a counterfactual world without emissions.

Graphs showing global mean temperature change.

Figure 8. Source: T. Knutson et al., U.S. Global Change Research Program, Detection and Attribution of Climate Change, in Climate Science Special Report: Fourth National Climate Assessment, Volume I (Fig. 3.1).

In the world as we know it (depicted in the graph on the left in Figure 8), the models in a large ensemble (depicted as the orange shading and orange solid line) show substantial agreement with one another and with three independent estimates of observed global temperature, within the bounds of uncertainty. They produce a composite timeline of temperature increase over time that closely tracks observed temperature changes. In the counterfactual world without greenhouse gas emissions from human activities (depicted in the graph on the right), the models (depicted as the blue shading and black solid line) produce a roughly horizontal line of temperature change centered on zero. Without the excess greenhouse gases, the models show no rapid warming and diverge substantially from observations in recent decades.

The evidence from this third line of reasoning, together with the separate geological and historical records and a range of additional “fingerprints,” has led to broad consensus among climate scientists not only that the Earth is warming but also that the human activity of burning fossil fuels is its principal cause.

When multiple independent lines of scientific evidence converge to support a single explanation, they are said to exhibit a quality that evolutionary biologist Edward O. Wilson referred to as consilience.1 The pursuit of independent lines of reasoning to establish the reality of climate change is one example (see Box 3).2

  • 1William Whewell, The Philosophy of the Inductive Sciences (1840); Edward O. Wilson, Consilience: The Unity of Knowledge (1998).
  • 2Wilson extended this scientific idea to other spheres of human endeavor. As Wilson described William Whewell’s definition, consilience means the convergence of “knowledge by the linking of facts and fact-based theory across disciplines to create a common groundwork of explanation.” Wilson, Consilience, supra note 1, at 8. In Wilson’s wider frame, such a groundwork could encompass ethics, biology, social science, environmental policy, and even the arts and humanities. Id. at 8-14.

Box 3. Independent Lines of Evidence

Professor Wilson might encourage us to think how this convergence of independent lines of evidence could apply in court as well. Take a stylized example of three independent lines of evidence that examine the likelihood that a particular event, like a heat wave, was made more intense due to greenhouse gas emissions. Each approach might yield a certain low probability of the extraordinary event happening without climate change, within specified uncertainty ranges. Because we are positing that the analyses are independent of each other, the probability that all three analyses are wrong is simply the product of the three individual error probabilities, expressed as fractions.

Hypothetically, if each of the probabilities of error were, say, 10% (one chance in 10 for each independent result to be wrong), there would be only one chance in 1,000 that all three results, having come to the same conclusion, are wrong. The answer is 99.9% reliable! This is a quantified argument for why three independent lines of evidence in a court case as well, provided they all agree, should give a judge and jury far greater confidence in the finding than would one line alone.
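
In code, the arithmetic of the box is just one multiplication, under the stated assumption that the three lines of evidence are truly independent.

```python
p_error = 0.10                      # each line of evidence: 1-in-10 chance of being wrong
p_all_wrong = p_error ** 3          # independence lets us multiply the error probabilities
print(f"chance that all three lines are wrong: {p_all_wrong:.3f}")     # 0.001
print(f"confidence in the shared conclusion:   {1 - p_all_wrong:.1%}")  # 99.9%
```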

Conformance to a theory is not enough to convince scientists that the theory is correct. After all, it might be possible to find an alternative theory that equally well or better explains the evidence. Indeed, science has ingrained in its method the imperative to weigh every reasonable competing explanation to obtain the closest approximation to the truth. As we shall see in Section VIII, for example, the evidence that climate change is not caused by an alternative source, such as an increase in solar brightness or “irradiance” as scientists call it, provides greater confidence that greenhouse gases are the main driver of Earth-system heating.

Still, if a viable competing idea emerges, climate scientists must and will test it as a rival to the established view. As one leading climate scientist remarked in a judicial education seminar in 2019, there would be no greater scientific achievement than to refute the theory of greenhouse gas-driven warming. Unfortunately, he continued, this is not going to happen in view of the multiplicity and accumulation over time of an overwhelming amount of independent evidence.1

  • 1Stephen W. Pacala, Judicial Seminar of the Climate Judiciary Project presented at The George Washington University Law School (July 12, 2019).

VI. Method: Conceptual Understanding and Uncertainty

It is this dynamic—the cumulative development of a mainstream view—that gives the findings of climate science a high degree of reliability. Its brackets of uncertainty have narrowed over time.1 They have done so in a culture and process that distinguishes established, rock-solid findings about the climate system from the newest, best science that is just emerging and is still weighing competing explanations for their consistency with available evidence and predictive ability. Determining bedrock facts is mediated through a set of interactions among scientists, structured so that they give appropriate weight to expert judgment.

Such judgment is built on the expert’s years of related research, evaluation of observational evidence, understanding of theory, deep knowledge of evolving research, interactions with colleagues, and much more. That judgment is not merely subjective, though it draws on subjective factors; it draws most persuasively upon scientific argument, evidence, and what has been accepted earlier to support a finding.

New science, on the other hand, may be tentative, inchoate, or just plain incorrect. Though subject to peer review, that process by no means assures that the finding stands the tests of intellectual challenge and further research over time. Moreover, peer reviewers sometimes note that they are reviewing for logic and methodology and are not privy to the original data or do not check calculations, so new science may contain errors from the outset.

While these limitations apply to individual scientific research papers, the entire scientific community also may find that accumulated evidence does not eliminate uncertainty. The state of the knowledge may be insufficient to make a precise statement, in which case scientists express their uncertainty by bracketing their results in a range of possible values, in hopes that they later can say something with greater precision.

A classic example of this process involved estimates of “climate sensitivity.” For over 40 years the degree to which the climate is expected to warm with a doubling of the CO2 concentration in the atmosphere was reported to range between about 1.5 and 4.5 degrees Celsius of heating.2 Additional data and analysis accumulated over decades culminated in an important synthesis published in 2020. In it, a large international team of scientists narrowed the range substantially by combining many different lines of evidence.3 Drawing on this and other scientific literature, the IPCC Sixth Assessment Report (AR6) now provides a best estimate of the climate sensitivity as 3.0 degrees Celsius, within a reduced range of 2.5 to 4.0 degrees.
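
To show what such a sensitivity number means in practice, the sketch below (a simplification of our own, not an IPCC calculation) converts a sensitivity per doubling into the eventual, equilibrium warming implied by today's CO2 concentration alone, ignoring other greenhouse gases, aerosols, and the delay introduced by the oceans. The concentrations assumed are roughly 280 ppm pre-industrial and 420 ppm today.

```python
# Rough illustration: eventual warming scales with the logarithm of the CO2 ratio.
import math

def equilibrium_warming(sensitivity_per_doubling, co2_now, co2_baseline=280.0):
    """Approximate eventual warming (deg C) from CO2 alone, for a given sensitivity."""
    doublings = math.log(co2_now / co2_baseline, 2)      # number of CO2 doublings so far
    return sensitivity_per_doubling * doublings

for sensitivity in (2.5, 3.0, 4.0):                      # AR6 likely range and best estimate
    warming = equilibrium_warming(sensitivity, co2_now=420.0)
    print(f"sensitivity {sensitivity:.1f} C per doubling -> about {warming:.1f} C eventual warming")
```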

Uncertainty remains—there is still a range of warming possible from a doubling of atmospheric CO2, and statements about how the world will respond to such an increase must be understood to carry uncertainty. But this refinement eliminated the more modest temperature increases from the overall range of possibilities, increasing the likelihood of severe climate impacts from a given increase in atmospheric CO2.4 Note that it also reduced the high-end limit, which underscores the self-correction that science values, as well as its integrity.

Qualitatively, climate scientists have no doubt about the direction in which temperatures are headed, nor that such increases will lead to significant impacts. The good news, if you count it so, is that humanity still has some control over the outcome in principle, depending on whether and when we choose to stop emitting greenhouse gases.

Climate scientists often think about risk in terms of probability and consequences. Climate risks include highly likely impacts with important consequences, such as more extreme and frequent heat waves and their impacts on public health. They also include impacts with a lower likelihood of occurrence but very significant consequences if they were to occur, such as the additional feet of sea-level rise due to Antarctic ice sheet collapse. Risks of both types motivate action to address climate change. We will treat climate risk in greater depth in a separate module.

VII. How Accurate Are Climate Models?

Because there is only one Earth, it is not possible to run a controlled experiment on the entire Earth system of air, sea, land, ice, and life. Climate scientists cannot compare a control planet that might exist without increased greenhouse gases with the planet we actually inhabit. Nor can they systematically vary the amount of greenhouse gases emitted to measure the sensitivity of the Earth’s climate to greenhouse warming.

This is where climate models must stand in. While there is only one Earth, there are infinitely many possible alternative Earth systems and scenarios for greenhouse gas emissions that can be simulated with computers. These models of the Earth system provide crucial simulations that advance scientific understanding of climate change and its consequences. Because of the complexity of the Earth’s climate system and the impossibility of precisely predicting the future, they cannot fully resolve all uncertainties in that understanding. But they have become increasingly accurate over time.

The practice of mathematical modelling of physical systems on digital computers dates from the early days of nuclear physics and is now considered a well-established, essential tool in science. Climate models have simulated the climate system, first crudely and then more finely, for more than a half century. The earliest models were useful mostly to estimate the magnitude of Earth-system warming at the global scale. As digital computing grew in power, climate scientists refined their models to simulate smaller scales and account for more complex phenomena such as the melting of polar ice and local risks of heat waves, drought, heavy downpours, and even forest fires.

Representing the dynamics of the Earth system in a model requires at least three things: (1) physical understanding of the Earth system as represented by mathematical equations of its dynamics; (2) sufficiently accurate measurements of initial conditions, such as concentrations of greenhouse gases in the atmosphere or temperatures of the surface of the oceans, to plug into these equations; and (3) enough computing capacity to simulate, at a desired precision or resolution, scenarios that develop out of those initial conditions.

Climate models consist of mathematical equations that describe the processes of mass and energy transfer throughout the Earth system of atmosphere, land, ocean, and ice. They are based on accepted physical and biological science, but the accuracy of their results is dependent on factors with a range of uncertainties. These include whether the equations fully capture the processes being modelled (including those that operate at smaller spatial scales than the model operates), and how accurately climate observations determine the initial conditions of the parameters in those equations.

One way to test the reliability of a model is to run a “hindcast,” starting with values of the model parameters at a given time in the past and then running the model to compare its results to observations of what took place over the historical period. If the results conform well to what actually happened, the modeler gains confidence that the model is reflecting the system well and so can be relied upon to project a future state of the Earth system. Such tests show that climate models are highly successful in reproducing observed conditions.
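
The logic of a hindcast can be sketched in a few lines. Everything below is a stand-in: a hypothetical model reduced to a single trend line and a handful of invented "observations," scored with a simple average error.

```python
# Illustration only: the model, observations, and numbers are invented stand-ins.
def toy_model(year):
    """Hypothetical model output: predicted warming (deg C) relative to 1950."""
    return 0.018 * (year - 1950)        # assumed linear warming rate, for illustration

observations = {1960: 0.10, 1980: 0.45, 2000: 0.95, 2020: 1.25}   # made-up values

errors = [abs(toy_model(year) - observed) for year, observed in observations.items()]
mean_abs_error = sum(errors) / len(errors)
print(f"mean absolute error over the hindcast period: {mean_abs_error:.2f} C")
# A small error over the historical record builds confidence in using the model
# to project forward; a large one sends the modeler back to refine it.
```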

Climate models have been criticized for oversimplifying the complexity of Earth-system interactions, but simplification is not necessarily a drawback. Though born of finite computer power and limited empirical data, simple models allow for extraction of the essential behavior of a system. But models must meet the standard of prediction, including accurate hindcasts and projections into the future that prove out over time. Early global circulation models simulated overall global averages much more accurately than they did detailed outputs at the regional level. As historian Spencer Weart has written, this played into the hands of skeptics who cast doubt on the fundamental validity of models.5

The most basic and important projection from a global policy point of view is the magnitude of global average temperature increase for a given increase in greenhouse gases in the atmosphere—the logical equivalent of “climate sensitivity.” With an accurate understanding of how much global temperature increase to expect from a given increase in atmospheric greenhouse gas concentration, policymakers can better understand the severity of climate risks at different levels of cumulative greenhouse gas emissions. Getting this right is crucial to setting targets for national and international action to reduce emissions.

Recognizing its importance, a multi-institution research team recently analyzed global mean temperature projections of 17 models developed between 1970 and 2007. They compared model predictions to observed temperatures over the period and found that 14 of the 17 models projected mean temperature to the present quite accurately. Two of the models got the fundamental relationship between emissions and temperature right more than 30 years ago.6 With more precise inputs of better measurements of temperature from Earth’s surface and satellites, more refined, higher resolution models in time and space have grown more accurate at simulating climate trends at the regional and local scales as well.

It has been said that models are neither theory nor experiment but a third category of scientific thinking.7 They do not measure the observable parameters of the climate and they do not provide an explanation of its mechanisms, but they make use of both. Their purpose is to mimic nature to be able to forecast its behavior. They are, in effect, that second “Earth” we do not have, except inside a computer. Their downside risk is to distance their users from specific, direct causal connections of the kind that, for example, explain unequivocally why a car collision causes damage. They are inherently probabilistic, representing unknown or uncertain contributing factors by probabilities and deriving from them distributions of possible outcomes rather than one determined outcome.

Their potential reward, however, is to fairly simulate a nature that is complex and itself not practically determinate. Averages of their results—like the global mean temperature—provide both projections of future outcomes and insight into the dynamics of the system. So, to the question of accuracy we might add the question of how useful models are for understanding the climate system. Sometimes criticized as a weakness, their dependence on observed measurements to set initial conditions that are practically impossible to determine from first principles is really a strength. It is the mooring by which the model is tethered to the real world.

VIII. Issues of Reliability of Scientific Evidence: Zombie Theories, False Balance, Cherry-Picking

Even as models in the 1970s were emerging as a powerful tool for understanding the climate system, Earth scientists did not agree as to whether the Earth was warming or cooling. Early models led to both possibilities because of uncertainty in the relative strength of warming factors compared to cooling factors caused by human activities.8 Scientific work on climate change predominantly documented the warming effects of the greenhouse effect, but some work examined the forces that could lead to cooling.9 The popular press covered research on cooling as well, and in retrospect appeared to have overemphasized its likelihood, even after global warming had been accepted by most climate scientists.10

Politically polarized discussions used early speculation that the Earth might be cooling to fuel doubt of climate change. Some climate skeptics claimed that climate scientists were continuing to predict global cooling long after the scientific consensus on warming had formed. A documented hoax that invokes the hypothesis of global cooling continues to circulate on the Internet.11 It is a not-uncommon occurrence in science for such “zombie theories” to be perpetuated long after they have been disproved. (To be clear, we are not speaking of theories about zombies, which might be fun, but theories about real scientific questions that live on long after they have been answered.) What might once have been a legitimate scientific research question, like the possibility of global cooling, might be resurrected if keeping it alive serves some unscientific purpose.

Examples of zombie theories beyond climate change can be found in virology (vaccines cause autism), physics (nuclear fusion can be achieved at room temperature), and biochemistry (free radicals cause disease), among others. The reasons for their perpetuation vary and may include financial interest, political or ideological stance, or philosophical conviction, among others.12 Personal biases in individuals or research groups, or indeed even fraudulent data as in the case of autism, have been known to taint scientific research findings. In the next section, we will look at scientific institutions that exist to deter or ferret these out.

In law, deeply ingrained norms of fairness and strict procedure guide how courts consider an issue brought before them. In court, there is a formal process by which the claimant and the respondent each is allotted an equal opportunity to present its side of the issue and rebut the other side. As a rule, there are only two sides and the decision is for one or the other. Science works differently. Science certainly insists on a fair hearing of new ideas or new evidence. But within the bounds of fairness lies the possibility that a scientific argument is so far from valid that it is not deserving of consideration. As Justice Breyer recounted in the Reference Manual on Scientific Evidence, physicist Wolfgang Pauli once declared acerbically to a colleague who asked if a certain scientific paper was wrong, “the paper wasn’t good enough to be wrong.”13 Not every scientific argument deserves to be taken seriously.

One oft-cited example from climate science stands out. In the past, some scientists have acknowledged that the planet is getting warmer but doubted its human cause. Some of these have argued that the heating is coming from increased radiation from the sun. Indeed, the sun’s intensity, or irradiance, does vary over time, both on very long timescales and over shorter periods of about 11 years as sunspots wax and wane. It is also true that the Earth’s temperature varies in step with these solar variations. So, they argue, the global warming observed in recent decades can be attributed to intensification of solar irradiance.

But satellite measurements of total solar irradiance have long since disproved this claim.14 While the sun’s intensity does vary, that variation does not explain the increase in atmospheric heating over recent decades because average irradiance has not increased over that period.15 Nevertheless, uninformed discussion of the solar claim has often received media coverage in the name of balance.16 There is no obligation to give equal time to a debunked claim in climate science, a point the mainstream media has begun to acknowledge in the last decade by labeling such coverage “false balance.” As a corollary, widespread reporting in the press about an alleged but incorrect scientific fact does not necessarily bestow credibility on it, as we have come to see recently in other arenas as well.
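To make the statistical point concrete, the following is a minimal sketch using entirely synthetic, assumed numbers rather than real irradiance or temperature measurements. It illustrates how one quantity can vary in step with temperature over an 11-year cycle while contributing essentially nothing to the multidecadal warming trend.

```python
# A minimal sketch with synthetic, assumed numbers (not measurements):
# "tsi" mimics total solar irradiance with an ~11-year cycle and no long-term
# trend; "temp" mimics a temperature anomaly with a steady warming trend, a
# small solar-cycle response, and noise. The short-term covariation is real,
# but the irradiance series contributes nothing to the multidecadal trend.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2022)
t = years - years[0]

tsi = 1361.0 + 0.5 * np.sin(2 * np.pi * t / 11.0)     # W/m^2, flat on average
temp = 0.018 * t + 0.05 * np.sin(2 * np.pi * t / 11.0) + rng.normal(0.0, 0.05, years.size)

def decadal_trend(yrs, vals):
    """Least-squares linear trend, expressed per decade."""
    slope, _ = np.polyfit(yrs, vals, 1)
    return 10.0 * slope

# Remove the temperature trend to expose the shared 11-year cycle.
detrended_temp = temp - np.polyval(np.polyfit(years, temp, 1), years)

print(f"Irradiance trend:  {decadal_trend(years, tsi):+.3f} W/m^2 per decade")
print(f"Temperature trend: {decadal_trend(years, temp):+.3f} deg C per decade")
print(f"Cycle correlation: {np.corrcoef(tsi, detrended_temp)[0, 1]:+.2f}")
```

Running the sketch prints a near-zero irradiance trend alongside a clearly positive temperature trend, together with a modest correlation over the cycle; the exact numbers are artifacts of the assumed inputs and carry no evidentiary weight.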

In addition, the phenomenon of “cherry picking” is a well-known pitfall in analyzing scientific data. Whether unconscious or deliberate, bias in selecting which data to consider will invalidate a scientific analysis as surely as inaccurate measurements do. Efforts to refute climate science have frequently involved cherry picking, for many reasons: the confounding of science and politics, the high social and political stakes involved, and the inherent uncertainty of many climate parameters, which leaves room for motivated reasoning (to name but three).

A much-discussed instance is worth noting here. In the early 2010s, climate scientists wondered why there appeared to have been little or no increase in the global average temperature, as measured by satellites, between about 1998 and 2015. Critics skeptical of global warming took this apparent “pause” as proof that predictions of warming were wrong. Expert climate scientists noted that the period in question was much shorter than the conventional 30 years used for analyzing climate trends and, reasoning that the total Earth system was continuing to warm, argued that the oceans were most likely absorbing the excess heat.

A graph of atmospheric temperatures over the 43 years of satellite data available at the time, from 1978 to 2021 (see Figure 9 below), demonstrates what expert scientists have since concluded: there was no pause at all. Rather, 1998 was an extraordinarily hot year whose cause, an El Niño, is a recurrent geophysical phenomenon largely unrelated to climate change. The period in question thus began with a very large deviation above the average temperature, which artificially leveled the trend line over the apparent pause. In the years since 2015, temperatures have risen in a manner consistent with the long-term upward trend. The full graph over those four-plus decades reveals a clear upward trendline, with statistical fluctuations that remain consistent with the average upward trend within the bounds of uncertainty.

Figure 9. Graph showing satellite temperature data. Source: Ben Santer, Lawrence Livermore Nat’l Lab’y, NASEM Workshop on Evidence for the Courts: Emerging Issues in Climate Science 12 (2021).

The interval marked here with a brown background became one of the most notorious charts in popular climate-science debate. Skeptical public figures and media commentators declared that it showed there had been no significant global warming in the past 18 years, even though experts had stressed the El Niño anomaly from the beginning. In retrospect, the idea of a pause in warming was a classic case of cherry-picked data leading to a false conclusion and playing into the hands of people motivated to dismiss climate change.
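The statistical lesson of the “pause” can also be illustrated with a short sketch. The series below is synthetic, built from an assumed steady warming rate, random year-to-year noise, and one artificially hot El Niño-like year in 1998; it is not the Figure 9 satellite record. A trend line fitted to the cherry-picked 1998-2015 window comes out noticeably flatter than one fitted to the full record, even though the underlying warming never stops.

```python
# Synthetic illustration only (assumed numbers, not the satellite data):
# a steady warming trend plus noise, with an artificially hot "El Niño-like"
# spike in 1998. Starting a trend calculation at that spike flattens the
# fitted slope even though the underlying warming never stops.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2022)                         # satellite-era years
anomalies = 0.018 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)
anomalies[years == 1998] += 0.5                       # hot El Niño-like year

def decadal_trend(yrs, vals):
    """Least-squares linear trend in degrees C per decade."""
    slope, _ = np.polyfit(yrs, vals, 1)
    return 10.0 * slope

window = (years >= 1998) & (years <= 2015)            # the cherry-picked interval

print(f"Full record (1979-2021):          {decadal_trend(years, anomalies):+.2f} deg C/decade")
print(f"Cherry-picked window (1998-2015): {decadal_trend(years[window], anomalies[window]):+.2f} deg C/decade")
```

With these assumed inputs, the short window yields a visibly smaller slope than the full record, which is the essence of the problem: the choice of an anomalously hot starting year, not any change in the underlying trend, creates the appearance of a pause.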

IX. Method: Scientific Discourse Through Institutions

So far, we have discussed mostly the methods by which climate scientists come to conclusions through evidence and its interplay with theory. There is, however, another dynamic in the process of establishing scientific facts and building consensus: these facts are found and confirmed through the social interaction of scientists within historically defined institutions and rules of discourse.

By “institutions,” we do not mean universities or research laboratories but the set of sanctioned practices and relationships within scientific culture. These include peer review, meetings of scientific societies and academies, deliberations of official scientific advisory groups and panels—and their products of assessment reports, proceedings, consensus reports, and journal articles.

Consider the practice of peer review. This is in many ways the founding institution of science. Its history begins with modern science itself nearly four centuries ago, though in a somewhat different form. At the outset of modern science, peer review by scientific equals was not required for work to be taken seriously. But early peer review, in the form of recognized scientific leaders overseeing and advancing the work of fellow researchers, evolved into the rigorous system of scientific inquiry and reporting of the current day.1 It helped establish the present standard of written, generally anonymous review by scientific peers prior to formal publication in journals.

In recent times, for new science to earn credence among scientists, it must be published in a peer-reviewed journal. A corollary to this standard is that if a researcher has no peer-reviewed publication record in a relevant field, that person is not usually recognized as expert. When judges look to qualify expert witnesses, one criterion is their record of peer-reviewed publications.

Further, in the specific field of climate science, the reliability of a potential witness, even one coming from a related field such as physics, must be assessed not by their standing in their own field or in science in general but by their expertise in the relevant area of climate science specifically, starting with peer-reviewed papers in that specialty. This standard is important especially when evaluating the reliability of scientific arguments that run counter to widely held consensus views of climate experts.

But peer review is not in itself sufficient to determine either that a scientist is expert or that a scientist’s results are valid. Peer review is a floor, with plenty of reviewed papers turning out to be wrong sooner or later. So, even if a result is published in a peer-reviewed journal, how can one tell if the result is valid? There is no litmus test for scientific validity, but we can increase our confidence if we know that the work has been scrutinized and withstood tests of criticism before, during, and after publication. Scientific ideas grow stronger and more accepted through delivery of papers at professional society meetings, scientific academies, and institutions of higher learning. In these forums, the scientist presents and defends the work with fellow experts who are equally or more knowledgeable (and professionally motivated to challenge it).

Indeed, scientists often emphasize the contentious nature of scientific discourse. As evolutionary biologist Stephen Pacala of Princeton remarked on the incentive system for science, “No one ever won a Nobel Prize by proving the status quo.”2 The process of science is not just bilaterally contentious, as in litigation, but multiply contentious, as each serious scientific proposition is subject to the creative criticism of every member of the scientific community. The incentive in academic science is to find flaws in the argument and to offer an alternative explanation that does a better job.

This is also why scientists are not just another group advocating for their own special interests. In the words of the late pioneering climate scientist and communicator Stephen H. Schneider, scientists are ethically bound to the scientific method, in practice promising to tell the truth, the whole truth, and nothing but.3

For all these reasons, to address a common charge against climate scientists, it seems unlikely that they are engaged as a group in a conspiracy to exaggerate the problem of climate change. Nor does it make sense that the underlying motivation for emphasizing the risks of climate change is to make money, for what far greater riches would befall the scientist who could assure society that burning fossil fuels has no deleterious effects. These are not likely to be biasing factors, precisely because science has ingrained in it a skeptical mindset that leaves no proposition unchallenged and no inquiry closed if there is any possibility of overthrowing the standard idea.

There comes a point in scientific discourse when an idea has been vetted so thoroughly that the community thinks of it as established and begins to rely upon it. Recall the discussion of plate tectonics, which has so often succeeded in explaining and predicting geological evidence that it is now considered the standard theory and settled science. No science is ever completely settled, but robustness over time and across widely varying applications gains a theory both acceptance and the benefit of the doubt. Such is the example of Newton’s theory of gravitation, the fundamental idea of which has endured for over three hundred years. So too it can now be said that the first proposition of modern climate science—that the Earth system is warming from emissions of greenhouse gases—is as settled within climate science as any scientific idea can be.

X. Special Status of Consensus Reports and Assessments

Virtually all reputable climate scientists accept this view. How do we know? Because independent academic analyses have taken stock and documented the consensus.4 Virtually all reputable, expert scientific assessments for the nation and the world have warned of the consequences of continued fossil-fuel emissions. The U.S. National Academies, the Advisory Committee of the U.S. National Climate Assessment (NCA), the IPCC, and the American Association for the Advancement of Science (AAAS)—each separately has deliberated and reported its conclusions in the form of consensus reports and assessments.5

These groups and their products provide no guarantee of truth, but taken together they work powerfully as guardrails upholding the primacy of evidence and the integrity of the effort to uncover how the world is warming and the consequences we now confront. Consulted repeatedly over many years, as they have been, and populated by a very wide range of scientific expertise and views, they reach something approaching the higher degree of certainty that only the consilience of several independent lines of reasoning can support. Their independent deliberative processes give us further reason for confidence in their conclusions.

Finally, to appreciate the reliability of their judgments, consider how consensus is reached in the processes of generating the milestone climate reports of the NCA and the IPCC. IPCC reports enlist scientists from around the world to provide a comprehensive summary of what is known about the causes of climate change, its impacts, and how adaptation and mitigation can reduce future climate risks. In the United States, the U.S. Global Change Research Program (USGCRP) has a legal mandate to conduct an NCA every four years, similarly enlisting scientists and government researchers to distill available scientific knowledge about climate trends and impacts to inform the policy and management communities as they consider climate-related risks in their decisionmaking. USGCRP also organizes sustained assessment activities that enable the integration of new knowledge as it emerges.6

IPCC reports follow a rigorous development process by which authors formally evaluate and develop findings that communicate the strength of scientific evidence and agreement on key topics. They differentiate between well-established understanding, emerging evidence, and the relative merits of alternative explanations where they exist. IPCC reports undergo multiple rounds of monitored scientific review by experts and governments to ensure a comprehensive assessment and clear basis for key findings of the assessment process.

Report development culminates in line-by-line approval by member governments of each report’s Summary for Policymakers in a plenary session with the scientific authors. During this approval session, scientists and government representatives work together to ensure that the key findings of each report are scientifically robust as well as maximally relevant and understandable to a policy audience. NCA reports follow an analogous development process and multistage review by experts, the general public, and government agencies, and are ultimately approved by the federal government.

Because of this rigorous assessment process, IPCC and NCA reports are more than “just another scientific report.” Their development builds joint ownership of current understanding and key findings by scientists and governments. These assessments also inform climate policy development at all scales of government. The IPCC Fifth Assessment Report, for example, informed the international climate change negotiations that resulted in the 2015 Paris Agreement to limit global warming to well below 2.0 degrees Celsius above preindustrial levels, with an aspiration to limit it to 1.5 degrees Celsius.7 The Sixth Assessment Report’s volume on the physical science of climate change deeply influenced the 2021 negotiations in Glasgow to increase national ambitions to curtail emissions.

XI. Climate Change—the Virtually Unanimous View

Acknowledging that the field has moved fast over just a few decades, the body of expert climate scientists has developed a high degree of confidence in the validity of the foundations of climate science—within inevitable bounds of uncertainty. This is not only because the preponderance of evidence points in one direction, but also because alternative explanations are constantly being tested and found wanting. As Climate Risk Module author Yohe observes, you are not going to find a researcher walking into the room with a finding that unexpectedly overthrows the entire theory of global warming in a scientific revolution of the kind that Thomas Kuhn described. Persistent search for contradiction between theory and observation, pursuit of multiple lines of evidence, and a massive volunteer enterprise of painstaking collaboration to assess the state of understanding of the climate system have built a body of knowledge that is extremely well-founded.

The scientific community has near to unanimously embraced the only explanation that stands up to the evidence—that heat-trapping gases from human activities are heating the planet. This consequential idea is not a house of cards at risk of being knocked down. It is a giant jigsaw puzzle, whose picture becomes ever more complete as each piece falls into place. Though disagreements on details and even important elements persist, these differences are insignificant compared to the higher-order facts on which expert scientists agree: Global warming is real, human-caused, and a destabilizing force of massive consequence to humanity and the planet.

  • 1Melinda Baldwin, Peer Review, Encyclopedia of the Hist. of Sci. (Jan. 2020), https://lps.library.cmu.edu/ETHOS/article/38/galley/52/view/.
  • 2Pacala, Judicial Seminar, supra note 23.
  • 3See, e.g., Stephen H. Schneider, Stan. U., Understanding and Solving the Climate Change Problem (last updated 2021), https://stephenschneider.stanford.edu.
  • 4See William R.L. Anderegg et al., Expert Credibility in Climate Change, 107 PNAS 12107 (July 6, 2010); John Cook et al., Consensus on Consensus: A Synthesis of Consensus Estimates on Human-Caused Global Warming, 11 Env’t Res. Letters 1 (Apr. 13, 2016); Naomi Oreskes, Beyond the Ivory Tower: The Scientific Consensus on Climate Change, 306 Sci. 1686 (Dec. 2004).
  • 5See IPCC, AR6, supra note 14; Climate Change: Evidence & Causes, supra note 18; U.S. Global Change Rsch. Program, Fourth National Climate Assessment Vol. 1 (2017), https://science2017.globalchange.gov; U.S. Global Change Rsch. Program, Fourth National Climate Assessment Vol. 2 (2018), https://nca2018.globalchange.gov/; Am. Assoc. for the Advancement of Sci., What We Know: The Reality, Risks, and Response to Climate Change (2014).
  • 6World Meteorological Org. Res. 1988/4 (1988); U.S. Global Change Rsch. Program, Legal Mandate (last visited Mar. 23, 2022), https://www.globalchange.gov/about/legal-mandate.
  • 7U.N. Framework Convention on Climate Change, Key Aspects of the Paris Agreement (last visited Mar. 14, 2022), https://unfccc.int/process-and-meetings/the-paris-agreement/the-paris-agreement/key-aspects-of-the-paris-agreement.