Climate Science and Law For Judges

Part One: Scientific Foundations of Climate Change

Risks and Costs of Climate Change

Author
Wesleyan University
A Cal Fire firefighter climbs a ladder by a burning structure while battling the Camp Fire in Paradise, California, U.S., November 9, 2018.

This module offers some approaches and tools for thinking about how best to understand climate risk in the short and long term. It speaks to the economics underlying actions that reduce climate risk, either by reducing the likelihood of climate-driven impacts through lower greenhouse gas emissions (mitigation) or by reducing their associated damages (adaptation).

I. Introduction

As other modules in this curriculum have demonstrated, the evidence is now unequivocal that the planet has been warming since the industrial revolution1 and that greenhouse gas emissions from human activities are largely to blame.2 Scientists have widely accepted for decades that the specific risks posed by climate change continue to be characterized by substantial and cascading uncertainties.3 Each step in estimating risk down the cascade of uncertainty—from global emissions scenarios to regional climate effects and their possible impacts—involves its own uncertainties, which compound along the pathway from original cause to ultimate effect. 

Risk is, of course, a fundamental concept for decisionmaking of all kinds under uncertainty; it is the product of likelihood and consequence.4 Decisions in a resource-constrained world rely on assessments of relative risks over space and time, and they are made based on trade-offs expressed in terms of likelihood estimates of what might happen if decisionmakers do or do not take a particular action.

Cost-benefit analysis has been the traditional way of dealing with risk when evaluating humanity’s three choices for responding to climate change—abate (mitigate), adapt, or suffer. However, cost-benefit procedures can be ill-suited to handle the profound uncertainties of the climate problem and the elaborate connections across populations, sectors, and time.5 They typically examine the implications of individual policies almost in isolation and thus sometimes overlook how people might change their risk by responding privately to the stresses being analyzed. Many times, cost-benefit analyses favor inaction because they focus on uncertainties that may not be resolved before a policy decision has to be made. When there is a source of risk, of course, taking no action is as much of a decision as taking some action.

To address these limitations of cost-benefit analyses, the Intergovernmental Panel on Climate Change (IPCC) adopted a new conceptual framework in its 2007 Synthesis Report: risk management.6 The IPCC stated that “responding to climate change involves an iterative risk management process that includes both mitigation and adaptation and takes into account climate change damages, co-benefits, sustainability, equity and attitudes toward risk.”7 These 30 words changed the way the world approached questions about how to respond to detected and projected climate damages.

Though cost-benefit analyses are still relevant, over the past 15 years of climate deliberations, risk management has become the complementary standard for bringing the science and economics of climate change to bear on evaluations of past, present, and prospective responses.8 To be clear, risk management tools were designed explicitly to accommodate even enormous uncertainty. Cost-benefit analysis was not.

Risk management (even iterative risk management) has been adopted as the organizing principle by nearly every significant climate assessment since the 2007 Synthesis Report: subsequent IPCC reports from Working Groups II and III through AR6, the U.S. National Climate Assessments through NCA4 (and pending for NCA5), the New York City Panel on Climate Change since 2010, the New York State assessment of impacts and adaptation (pending as of 2022), and so on.

The U.S. National Academies of Sciences, Engineering, and Medicine have echoed the IPCC by accepting the working premise that:

the inherent complexities and uncertainties of climate change are best met by applying an iterative risk management framework and making efforts to significantly reduce greenhouse gas emissions; prepare for adapting to impacts; invest in scientific research, technology development, and information systems; and facilitate engagement between scientific and technical experts and the many types of stakeholders making America’s climate choices.9

Given that risk is a fundamental organizing concept behind understanding human and natural vulnerabilities to climate change, clearly communicating assessments of the components of risks over space, source, and time is essential. Decisionmakers need to be informed of trade-offs expressed in likelihood estimates of the consequences of taking, or failing to take, a particular action. They also need to understand the potential consequences of maintaining the status quo.

This module describes the elements of iterative risk management in its modern context beginning with confidence language, coping with new knowledge, risk matrices, attitudes toward risk, and insurance. It then briefly addresses damages before turning to some of the details behind investing in mitigation and/or adaptation, where we explore the concept of “tolerable risk.” Finally, it returns to some of the more technical details of calculating the social cost of carbon before offering some conclusions.

II. Iterative Risk Management

As suggested in the introduction, it is often difficult, if not impossible, to estimate accurately the likelihoods of some events associated with climate change. Calibrating these events’ consequences can be equally difficult. Still, looking through even a qualitative risk-based lens can be enormously useful for making, implementing, evaluating, and/or interpreting policy even in the most challenging of circumstances. This means considering the elements of risk (likelihoods and consequences) one at a time and then together.

A. Accounting Consistently for Uncertainties in Confidence Testimonies

The IPCC provided some guidance to its authors about handling the challenges of consistently accounting for uncertainties with regard to specific findings. The guidance proposes organizing one’s thoughts around two distinct criteria:

  1. The strength of the available evidence supporting a particular finding, which can be assessed by evaluating the “type, quality and consistency” of the evidence and data from which the finding or hypothesis was drawn.
  2. The “degree of agreement” in understanding what is driving a finding and describing rigorously the underlying causal processes behind those understandings.10

To take one example, there are lots of quality data demonstrating that gravity works, and there is widespread acceptance of the physics behind why it works. So, if you climb the Leaning Tower of Pisa and drop an apple, somebody can accurately project how long it will take for the apple to hit the ground, where it will land, and maybe even whether or not it will bounce.

To take another example, macroeconomic data collected daily, weekly, and monthly across the United States are extremely well respected, much like the gravity data gathered in the more than four centuries since Galileo’s famous experiment.11 The distributions that characterize the behavior of macro-scale metrics like inflation and unemployment are widely understood and accepted. Nonetheless, there are two conflicting perspectives about how the economy works. One is “neo-Keynesian,” wherein federal spending can sustain employment. The other is “monetarist,” wherein federal spending can only generate inflation. When you ask whether the federal spending under the American Rescue Plan12 after the COVID-19 outbreak was a good idea, monetarists will say “No!” and neo-Keynesians will say “For sure!” Two years later, inflation had hit a four-decade high and unemployment had fallen to a half-century low. So, either could be right, but only partly.

And so the quandary—How can we assess confidence in a particular scientific or economic finding, like whether higher federal spending will increase economic activity and increase employment? Figure 1 shows the IPCC approach by plotting limited, medium, or robust evidence against low, medium, or high agreement to establish the foundations for five different levels of confidence: very high, high, medium, low, and very low. These are judged relative to the confidence scale on the right-hand side of the figure. Exactly where the divisions lie is, of course, subjective. Within this matrix, movement toward higher agreement but lower levels of evidence can increase or decrease confidence in a finding depending upon the relative subjective weights of agreement and evidence. Moving toward lower agreement with greater evidence can similarly move confidence in either direction. However, moving simultaneously toward higher (lower) agreement and stronger (weaker) evidence means higher (lower) confidence. And for our macroeconomic federal spending example . . . with robust evidence but low agreement (falling in the lower right-hand corner of Figure 1), it garners little more than medium confidence.

Confidence assessment chart

Figure 1. Confidence assessments depend on the quality of the underlying evidence and on agreement in understanding underlying processes that can be contingent on a wide range of possible futures; for example, a trajectory limiting temperature increases to 1.5 or 2.0 or 3.0 degrees Centigrade. Based on: Michael D. Mastrandrea et al., Intergovernmental Panel on Climate Change, Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties 3 (2010).
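
To make the logic of Figure 1 concrete, the following is a minimal sketch of one possible reading of the matrix. The cell-by-cell assignments are illustrative assumptions, since the text notes that exactly where the divisions lie is subjective; they are not an official IPCC rule.

```python
# Illustrative mapping of Figure 1 (assumed cell assignments, not an IPCC-official table):
# evidence and agreement each take one of three levels, and together they
# suggest one of five confidence levels.
CONFIDENCE = {
    ("limited", "low"): "very low",
    ("limited", "medium"): "low",
    ("limited", "high"): "medium",
    ("medium", "low"): "low",
    ("medium", "medium"): "medium",
    ("medium", "high"): "high",
    ("robust", "low"): "medium",
    ("robust", "medium"): "high",
    ("robust", "high"): "very high",
}

def assess_confidence(evidence: str, agreement: str) -> str:
    """Return an indicative confidence level for a finding."""
    return CONFIDENCE[(evidence, agreement)]

# The macroeconomic example in the text: robust evidence but low agreement lands in the
# lower right-hand corner of the matrix and earns little more than medium confidence.
print(assess_confidence("robust", "low"))   # "medium"
```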

Why should findings with medium, low, or even very low confidence assessments be reported at all? Some climate-related events have a low likelihood of ever occurring but could generate enormous impacts. Considering events with low or very low confidence assessments enables decisionmakers and opinion-makers to account for these low-probability, high-impact possibilities. For example, there is a relatively low likelihood of a runaway greenhouse effect caused by the accelerated release of methane or carbon dioxide from melting tundra in the Arctic, but such an occurrence would have enormous effects across a wide range of measures. This is one of a small collection of large-scale singularities that the countries of the world would view as “dangerous anthropogenic interference with the climate system” per Article 2 of the United Nations Framework Convention on Climate Change.1

It is interesting to note, in passing, that this approach to low-confidence, high-impact events is very similar to, and uses the same language as, the approach of the U.S. and United Nations intelligence communities in assessing and reporting their findings.2 There, analysts assess the credibility of the source of a finding given corroboration or inconsistencies in the underlying explanations. They also assess the quality of the underlying information, paying careful attention to whether it is direct evidence or circumstantial. Perhaps after placing the summary information along both dimensions in a matrix like Figure 1, they offer confidence assessments from very high to very low—and they report very low-confidence findings if the consequences could be very high.3

B. The Use of New Knowledge in Iterative Risk Management

The concepts of confidence and iterative risk management can be especially helpful when thinking about climate responses and incorporating new science into an existing body of knowledge. What to do with new knowledge in a courtroom is a corollary question. Iteration of the risk management process is critical in contexts dominated by temporal uncertainty, where planning to make mid-course corrections based on new information is prudent—or when it is time to incorporate new science that may advance, reinforce, or undermine conventional wisdom with decades of acceptance. Iteration only becomes more vital for considerations of climate science that carry unavoidable political and cultural baggage in the current environment.

Consider the earlier example of a runaway greenhouse effect caused by the accelerated release of methane or carbon dioxide from melting tundra in the Arctic. What can be said about the confidence of such an event occurring? What would happen to that confidence if new science seemed to undercut an earlier consensus? How should such news be communicated?

In 2007, the IPCC reported high agreement from land surface models that the extent of Arctic permafrost will decline during the 21st century, in large measure because of particularly rapid warming across the region. The next IPCC assessment in 2014 continued to voice concern about a positive feedback loop—that melting permafrost could release increasing amounts of greenhouse gases into the atmosphere and thereby accelerate the pace of future warming. In 2020, though, a team of Canadian scientists concluded from newly analyzed paleoclimatic data that the frozen store of carbon available for release might not be so sizable.4 In accordance with the IPCC confidence language protocol, this finding reduced confidence in the conclusion that the planet may experience a runaway methane emissions scenario.

This brought another question to the forefront: how should new science regarding the possible release of methane from permafrost be communicated to decisionmakers and influence-makers who have adopted a risk management approach? The answer should begin with the observation that new science rarely contradicts current understanding completely. And the fact that a study’s conclusions differ wildly from conventional wisdom does not, by itself, justify giving the new study extra weight. In the vast majority of cases, a study with new conclusions simply means altering assessors’ subjective views about the confidence with which a former conclusion is held—both with respect to likelihood and consequence.

In this case, views of the consequences of a runaway greenhouse effect were not altered. Rather, the likelihood that future methane releases and associated damages would be small was increased. As a result, the assessment team could conclude that a more benign future was now a little more likely; however, they could also add that runaway warming from permafrost melting was still possible and that continued research into the matter would be prudent.

The news is not always this good. Recent science from Antarctica offers two very troubling findings showing how it could be much worse. First, the pace of routine melting from the West Antarctic Ice Sheet has increased over the past decade or so. New sea-level rise estimates reflect that finding: their high-warming scenario now associates, with high confidence, an extra increase in global sea-level rise of about one foot through 2100. Second, more limited time series observations strongly suggest that the ice structures supporting the Thwaites glacier are disintegrating—and that they are disintegrating quickly and perhaps irreversibly, possibly within the next five years or so. That disintegration raises the possibility of a second extra increase in sea-level rise of more than one foot.5 The first finding is troubling, but the second could be catastrophic.

Sometimes, the story moves past physical impacts and into quantifiable monetary consequences. For example, Marshall Burke et al. (2015) stood alone in suggesting that damages from climate change through 2100 could be as high as 25% of global gross domestic product (GDP).6 This estimate has been contested and even its distribution is controversial, but it should have entered the iterative risk management equation. In context, 2100 is well after the first or second iteration this century. If the Burke study is right about 2100, then waiting to adjust in 2050 or 2075 would be more expensive than getting it right this year. Potential “mid-course corrections” are nonetheless a responsible hedge, so the question becomes “What should we monitor to get an early warning?” That question points to the critical factor in iterative risk management: ensuring that there is agreement and understanding about why any disagreement exists.7

C. Risk Matrices

To accommodate situations like the last example, the context of the potential costs of climate change has come to reinforce the conventional definition of risk as the product of the likelihood that an uncertain event will actually occur and the consequences of that event.

In response to the critical need to reflect this definition, risk matrices have emerged as one of the most effective tools by which analysts, scientists, policymakers, opinion-makers, judges, and others can communicate clearly with one another.8 Figure 2 shows how a matrix can be used to project current and future risks in terms of likelihood and consequence.

Chart showing potential costs of climate change.

Figure 2. The conceptualization of risk as the product of likelihood and consequence can be portrayed by a two-dimensional matrix. Source: Gary Yohe, On the Value of Conducting and Communicating Counterfactual Exercise: Lessons From Epidemiology and Climate Science, in Environmental Issues and Sustainable Development (2021).

Following the protocol of the U.S. intelligence community, calibrations of likelihood that run up the vertical dimension span seven categories ranging from “Virtually Impossible” to “Virtually Certain,” with probabilities close but not equal to zero and close but not equal to one, respectively. The rationale for these seven categories, once again drawn from the intelligence community, is displayed in Table 1. The table replicates intelligence community protocol and supports recommending that climate analyses categorize the likelihoods of climate risks according to these same seven categories.

Categorizations of consequence also have seven possibilities in Figure 2, running across the horizontal dimension from “Minuscule” to “Severe”—but that is mostly to cement some consistency of thought for impacts calibrated in whatever metric makes the most sense for the case at hand.

Green boxes identify low-risk combinations of likelihood and consequence; they are relatively benign and unlikely, so they are of smaller concern. Yellow boxes suggest moderate concern, while the orange boxes capture combinations that fall just short of the red-shaded combinations of major concern.

Such a matrix can be used to identify how risks adjust as the climate actually changes and outcomes become more certain or impacts more severe. The beginning of the solid arrow indicates a particular risk under the current climate. Decisionmakers can envision how this location could move across the matrix as time progresses and climate change occurs. The solid arrow shows, for example, how a particular risk could move quickly into the orange region and closer to the troubling red combinations for which some reactive or proactive preventative action might be appropriate.
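
The following sketch shows one way a matrix like Figure 2 can be encoded. The intermediate category labels and the cutoffs separating the green, yellow, orange, and red zones are illustrative assumptions rather than values taken from the figure.

```python
# Illustrative risk-matrix sketch; the zone cutoffs and intermediate labels are assumed.
LIKELIHOOD = ["virtually impossible", "very unlikely", "unlikely", "as likely as not",
              "likely", "very likely", "virtually certain"]       # vertical axis, bottom to top
CONSEQUENCE = ["minuscule", "minor", "modest", "moderate",
               "significant", "major", "severe"]                  # horizontal axis, left to right

def risk_zone(likelihood: str, consequence: str) -> str:
    """Assign a qualitative level of concern from the two components of risk."""
    score = LIKELIHOOD.index(likelihood) + CONSEQUENCE.index(consequence)   # 0 through 12
    if score <= 4:
        return "green (lesser concern)"
    if score <= 7:
        return "yellow (moderate concern)"
    if score <= 9:
        return "orange (elevated concern)"
    return "red (major concern)"

# A risk that is "unlikely" with "minor" consequences today could migrate toward
# "very likely" and "major" as the climate changes, moving it from green into red.
print(risk_zone("unlikely", "minor"))        # position under the current climate
print(risk_zone("very likely", "major"))     # plausible future position
```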

Table showing calibration of the subjective likelihood that an event might occur.

Table 1. Calibration of the subjective likelihood that an event might occur. Source: Cent. Intel. Agency, Intelligence Community Directive 203 3 (2015).

Figure 2 also suggests how risk matrices could be used to account for uncertainty about the future. The upper dotted line represents a hypothetical 95th percentile scenario that portends larger consequences with growing likelihood sooner than the baseline. The lower dotted line represents a 5th percentile scenario trajectory. It is shorter because it tracks below the median and thereby depicts cases where the consequences of climate change increase more slowly.

Risk analyses must sometimes be conducted despite the lack of sufficient data to support quantitative estimates of the distributions of relative likelihoods across a range of possible future climate-related impacts and associated economic events. In those cases, a risk matrix can provide a suitable qualitative substitute for specific confidence assessments.

D. Attitudes Toward Tolerable Risk

The term “risk” has many definitions in colloquial use, such as the possibility of a loss or injury, a person or thing that is a specific hazard to an insurer, or the chance that an investment will lose value. But the more rigorous definition of risk as “the product of likelihood and consequence” has specific implications for climate change science. First, “tolerable risk” can be seen as the levels of risk deemed acceptable by a society or by an individual to sustain some particular benefit or level of functionality. Achieving tolerable risk does not mean eliminating all chance that harm will occur. It does not even mean that the damage from an event, such as a storm or wildfire, will not be catastrophic to some people. It does mean that the risk has been evaluated and is being managed to an acceptable level of comfort according to the particular risk aversion applied.

Public health provides an example from the United States. Figure 3 shows mortality rates by state from ordinary flu and pneumonia.1 The categories illustrated by color indicate different levels of tolerable risk with which each state’s citizens have become comfortable. How do we know they find this risk tolerable? Because those citizens are not demanding greater safety even though the means to reduce flu mortality are available.

Map of reported deaths per 100,000 people across 50 states.

Figure 3. Reported deaths per 100,000 people across 50 states. Source: Kaiser Family Foundation, Influenza and Pneumonia Deaths Per 100,000 Population (2020) (based on data from the Centers for Disease Control and Prevention).

More closely relevant to climate, the New York (City) Panel on Climate Change (NPCC 2010) employed tolerable risk to frame both its evaluation and management of climate change risks to public and private infrastructure.1 NPCC communicated this concept to planners and decisionmakers by pointing out that building codes imposed across the city did not try to guarantee that a building would never fall down. Instead, they were designed to produce an environment in which the likelihood of a collapse was acceptable given the cost and feasibility of doing more to prevent collapse. However, if climate change or another stressor pushed a particular risk profile closer to the thresholds of social tolerability by increasing likelihoods of harm, investments in risk-reducing adaptations would be expected to increase.

Achieving broad acceptance for any tolerable risk threshold across a population is a huge task. For one, risk tolerance varies widely across societies and individuals. For another, policymakers confronting a pandemic or extreme climate change are often navigating between different, and perhaps strongly contradictory, risk management priorities.

Individuals, communities, and institutions can meet such challenges by agreeing on a common set of facts about the likelihood and possible consequences of an event. They can then explore together how much risk is acceptable and why. During the second wave of the COVID-19 pandemic, for example, New York State relied on science to frame its strategies to avoid overwhelming its hospital system after what had been a relatively successful initial response. New York implemented two forward-looking criteria: (1) hold the transmission rate of the virus below 1.0 and (2) keep vacancies of hospital beds and ICU beds across the state above 30% of total bed capacity. These criteria represented thresholds of tolerable risk that could be monitored and projected into the future using results from integrated epidemiological-economic models.
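
As a minimal illustration of how such forward-looking criteria work as monitorable thresholds of tolerable risk, consider the check below; the threshold values come from the text, while the indicator readings are hypothetical.

```python
# Minimal sketch of monitoring the two tolerable-risk criteria described above.
# Thresholds are from the text; the example indicator values are hypothetical.
def within_tolerable_risk(transmission_rate: float, bed_vacancy_share: float) -> bool:
    """True if both monitored indicators stay on the acceptable side of their thresholds."""
    return transmission_rate < 1.0 and bed_vacancy_share > 0.30

print(within_tolerable_risk(transmission_rate=0.92, bed_vacancy_share=0.36))  # True: within tolerable risk
print(within_tolerable_risk(transmission_rate=1.15, bed_vacancy_share=0.28))  # False: thresholds breached
```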

E. Insuring Against Risks

One way of managing risks is through insurance. Sometimes, insurance comes in the form of hedging against a bad event by taking care to lower the likelihood of that event. Other times, insurance means taking actions to mitigate the consequences of an event. In either case, insurance is a form of investment in which money spent now yields future benefits.

More formally, insurance is the product of a formal business transaction between an individual customer who is, to some degree, averse to risk and a regulated industry that sells policies that protect against financial harm at a premium (price per dollar of coverage). These financial arrangements get quite complicated. They depend, for example, on differences in customers’ risk-accepting or risk-reducing behaviors, on the one hand, and the sorts of incentives, services, or deductibles that companies might use to compete for clients, on the other. These complications can be critical considerations in using insurance to cope with the risks of climate change.

The general principles for optimizing insurance are simple. Risk-averse people are willing to pay up to a maximum acceptable risk premium to eliminate uncertainty about the future. In this optimal world, risk premiums are by definition the maximum amount that individuals would be willing to pay to avoid completely a quantifiable risk situation. Because most jurisdictions require them to be actuarially fair, insurance premiums become the smallest amount that insurance companies would be willing to accept to guarantee that outcome. This situation can also be characterized by insurance companies charging a premium per dollar of coverage that is equal to the probability that the customer will suffer a loss and file a claim.
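
A small numerical sketch of these principles, using entirely hypothetical numbers and a logarithmic utility function to represent risk aversion, shows how the actuarially fair premium compares with the most a risk-averse customer would pay for full coverage.

```python
# Illustrative sketch with assumed numbers: a risk-averse homeowner (log utility)
# facing a possible flood loss, and the premiums discussed in the text.
import math

wealth = 500_000    # homeowner's wealth in dollars (assumed)
loss = 200_000      # size of the potential insured loss (assumed)
p_loss = 0.02       # annual probability of suffering that loss (assumed)

# Actuarially fair premium for full coverage: probability of loss times its size.
fair_premium = p_loss * loss                                   # $4,000

# Expected utility of going uninsured under logarithmic (risk-averse) utility.
eu_uninsured = p_loss * math.log(wealth - loss) + (1 - p_loss) * math.log(wealth)

# The maximum acceptable risk premium leaves the customer indifferent between
# paying it for certainty and bearing the risk: log(wealth - max_premium) = eu_uninsured.
max_premium = wealth - math.exp(eu_uninsured)

print(f"Actuarially fair premium:    ${fair_premium:,.0f}")
print(f"Maximum acceptable premium:  ${max_premium:,.0f}")
print(f"Margin above the fair price: ${max_premium - fair_premium:,.0f}")
```

Any premium quoted between those two numbers leaves a risk-averse customer better off than going uninsured while covering the insurer's expected payout, which is why regulation that keeps premiums near the actuarially fair level matters.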

Insurance companies are regulated to be actuarially fair (on average) in their coverage of risk categories—i.e., the risks facing collections of actuarially similar people. Regulation prevents insurance companies from pricing their products to extract the maximum available risk premiums from their potential clients. Some argue that the global scope of climate risk is so enormous that this observation does not apply. That assumption depends upon analysts and opinion-makers ignoring the existence of reinsurance, an industry which diversifies insurance claims around the world with respect to climate and socio-political-economic environments.

So, how do insurance companies make any money? They invest the premiums that they collect at the beginning of the coverage period while they wait to pay whatever portion of a qualified loss they are legally required to cover.

This model can be used to understand how insurance can minimize financial harm caused by climate change. Consider, for example, coastal residents’ insurance-based responses to increased flooding risks caused by human-induced sea-level rise. Suppose that the residents, the demand side of a flood insurance market, think that the premium offered by the insurer and certified by a regulator is too high—that is, they think that the quoted premium is greater than any credible estimate of the probability of loss from flooding. Perhaps their subjective confidence in climate science or in the connection between flooding impacts and socioeconomic consequences is low. Or perhaps they have grown accustomed to premiums that were calculated on the basis of historical losses and not on the basis of projected future losses—losses that science now projects to become larger and more frequent. They may also think that they are entitled to the subsidized system in which the Federal Emergency Management Agency (FEMA) covers any and all extreme damages at a minimal participation fee under the National Flood Insurance Program (NFIP).2

In some cases, these subjective or cognitive aspects have the most explanatory power, especially since they can be actively manipulated by industry, politicians, and climate deniers. Insurance companies have the responsibility to convince individuals otherwise, but that can be an uphill battle. Starting with a neutral, economics-based discussion intended to explain baseline principles of rational market behavior makes sense in setting an efficiency benchmark for assessing how effectively insurance is limiting financial harm from climate change. But surely perceptions of climate risk—beyond general risk tolerance or risk aversion—are strong components of the problem of underinsurance.

The practical point here is essential. These perceptions of climate risk have been documented by the social sciences, and some judges will be familiar with the concept from fraud cases that they have heard. Relying on past data is not necessarily fraud, except when it is clear (or should have been clear) that, for distributions of specific impacts, the past is neither a credible characterization of the present nor demonstrably a harbinger of the future. People who claim otherwise are distorting the truth. For the climate, the past is no longer prologue to the future.

For any or all of these reasons, residents could underinsure, meaning that they would not be paying premiums at a rate sufficient to cover actual future damage. This leaves damage to repair and no source of funds to pay for it. The question arises of who should bear the cost.

Since, in conventional economics terms, rational individuals would take advantage of properly priced insurance opportunities in their own best interest, society should not pay for losses that could have been covered by individuals. This general observation can be applied beyond the specific context of insurance to climate change as well.

Applying this insurance perspective to estimating the total economic costs climate change imposes on people, communities, or other economic actors for whom insurance coverage or other risk-spreading mechanisms are available has several immediate implications. First, for calculations of personal financial harm, residual damages above the optimal coverage should be included, as well as any additional insurance costs required to reach that optimum. Costs that could have been avoided should not be included. For example, deductibles should be excluded because they reflect harms against which the injured party was willing to self-insure. Second, when estimating the economic costs of climate risks to society, insurance costs and residual damages should be included but, again, not deductibles.
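
A brief numerical sketch, using hypothetical figures for a single insured property, illustrates one reading of these accounting rules.

```python
# Hypothetical accounting sketch of the rules described above; all dollar figures are assumed.
annual_premium = 5_000     # cost of carrying the optimal level of coverage
storm_damage = 80_000      # total damage from an insured, climate-related event
deductible = 10_000        # loss the policyholder chose to self-insure against
coverage_limit = 60_000    # maximum payout available under the policy

insurer_pays = min(max(storm_damage - deductible, 0), coverage_limit)    # 60,000
residual_damage = storm_damage - deductible - insurer_pays               # 10,000 beyond the coverage limit

# Personal financial harm attributable to the climate-related event under these rules:
# count the insurance costs and the residual damages, but exclude the deductible,
# which represents risk the policyholder was willing to bear.
counted_harm = annual_premium + residual_damage
print(counted_harm)    # 15,000
```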

Of course, insurance companies need information about the likelihoods and consequences of climate-related events if they are to write policies to cover potential damages. In some cases, insurance coverage is not feasible because the risks are not known or, if they are known, have not been made public by the experts who have investigated them. In the latter case, exemplified by the behavior of many oil and gas companies, residual damages are in fact total damages.

Recent experiences in the United States have underscored this point.

The 2020 hurricanes and California wildfires illustrate how extreme events are increasing in intensity and frequency and can combine to amplify one another in specific places. Of California’s 20 largest fires by acres burned, only three occurred prior to 2000, and nine of the biggest 10 began after 2012. Indeed, 9,270 fires burned a record 1.5 million acres in 2017. The next year, the Mendocino Complex fire became the then-largest wildfire in California history. Historic drought and unprecedented heat marked 2022, despite never-before-seen rain and associated flooding throughout the state.3

The decade finished with a less noteworthy year in 2019, but then came 2020. The August Complex fire became the new largest fire in California history in August 2020. Soon thereafter came the third, fourth, fifth, and sixth largest wildfires in the state’s history. By October 3, 2020, these five conflagrations and nearly 8,000 other more “ordinary” wildfires had killed 31 people and burned more than four million acres. Incredibly, on that day, all five of those major fires were still burning.4

Human actions are, of course, the major factor that has increased fire risk. On the consequences side of the risk calculation, catastrophic damage to life and property has increased markedly as more people have moved into vulnerable forested areas, putting their lives and property at risk and setting more inadvertent blazes. Changes in forest management have also contributed because fire suppression policies reduced the frequency of blazes that could burn off fuel reserves built up in forests. However, these non-climate contributors to increased fire danger have not increased sufficiently to fully account for the recent devastation.

Taken one at a time, the changes in the individual factors that create wildfire threats cannot explain the devastation. A record number of dry lightning strikes caused many of the 2020 fires. This lightning was not solely the result of climate change, but it fed into a witches’ brew of conditions that are all linked to global warming. The lightning strikes and other points of ignition hit in the midst of a record drought and heat wave that had lasted for weeks on end. Decades of gradual warming had extended the western fire season by some 75 days and increased springtime bark-beetle populations, and years of beetle infestations had produced large stands of dead trees. Taken together, these contemporaneous influences reveal that the issue is not just what sparks the fires. The larger problem is the context in which they start and how quickly they spread once started.

A similar story can be told about damage from tropical storms. Hurricanes Harvey in 2017 and Florence in 2018 dropped historic amounts of rain after making landfall and then stalling over Houston and North Carolina, respectively. In the summer of 2020, Hurricanes Laura and Beta followed suit, causing extreme rainfall totals and substantial damage from storm surges. Their behaviors mimicked Dorian over the Bahamas in 2019. Finally, in 2018, Hurricane Michael traveled more than 100 miles inland from its landfall along the Florida panhandle and remained strong enough to cause extensive wind and rain damage around Albany, Georgia.5

Near-record warm ocean and gulf temperatures have allowed more tropical depressions and non-tropical low pressure systems to develop into dangerous hurricanes. At the same time, the decrease in the summer temperature difference between the Arctic and the tropics has weakened steering currents in the atmosphere, causing storms to move more slowly. In addition, sea-level rise, one of the most obvious results of decades of rising temperatures, has compounded risks posed by storm surges.

The expanding consequences of compound fire and flood events are also having negative effects. Many of the worst fires and hurricanes have exploded so quickly and have spread so erratically that human evacuations have become “moment’s notice” emergencies. People across the United States—from the Southeast and Gulf Coasts to California and Oregon—have been forced to retreat from harm’s way as quickly as possible.

New information from the ongoing scientific process can open new doors of inquiry, to be sure. More usually, and as noted above, new evidence hardly ever suggests either dismissing conventional wisdom entirely or reversing its content. In any case, it is critical that some protocol like what has become standard in IPCC assessments be followed in bringing the new science into existing assessments. Only then will new reports affecting the confidence with which a piece of conventional wisdom is held have credibility.

III. The Damages From Climate Change

Creating and updating damage estimates for the ranges of possible future temperatures requires an understanding of the roles played by uncertainties. The IPCC has created potential pathways for greenhouse gas emissions to compare scenarios of how human society might develop socially, politically, and economically in the future. The left panel of Figure 4 displays three of these scenarios, which are known as representative concentration pathways (RCPs). The right panel of Figure 4 displays estimates of direct damages, calibrated in percentage of U.S. GDP, from the global mean temperature associated with each of these scenarios.6

The highest emissions scenario (RCP8.5), in which emissions are still rising in 2100 at nearly triple 2020 levels, supports damage estimates ranging from just above 2% of GDP all the way up to more than 10%. The middle scenario (RCP4.5), in which emissions peak around 2045 and stabilize around 2080, supports damage estimates ranging from 0.5% of GDP to more than 2% in 2100. The aspirational RCP2.6 scenario, in which emissions peak almost immediately and certainly before 2030, keeps damage estimates below 1% of GDP throughout the century.

While Figure 4 provides a summary portrait of projected damages for the United States along alternative global socioeconomic scenarios, it should be noted that new damage projections like this are always emerging. For example, Franziska Piontek et al. provide a synthesized discussion of the connections between global emissions pathways that support calculating the economic damages of biophysical impacts such that equity concerns and adaptation programs can be explored.7 The WGII report from the recent IPCC AR6 also summarizes this literature; it is the product of a process that has produced a series of assessments of damages across North America (in one chapter) and around the world (across all of the regional chapters). The process also produces Synthesis Reports that describe findings in language designed for decisionmakers, heads of state, negotiators, and their staff.8

Two charts: Global Carbon Emissions and Direct Damage to U.S. Economy.

Figure 4. Estimates of aggregate direct economic damages for the United States increase with projected emissions. Source: U.S. Global Change Research Program (USGCRP), Fourth National Climate Assessment Vol. II: Impacts, Risks, and Adaptation in the United States (2018) (Figure 29.3).

However, it should also be noted that these wide-ranging damage estimates have many shortcomings. First, they are not comprehensive measures of damage because there are no reasonable estimates of economic damages for many ecological, physical, and economic factors known to be vulnerable to changes in climate. Second, they do not reflect possible adaptation efforts that would reduce damages by more than they cost. Third, they raise several controversial issues regarding their national or global coverage.

For example, many have argued that reporting the value of statistical life (VSL)—the amount that people would be willing to pay to reduce the likelihood of death from a specific activity measured in terms of the discounted value of lifetime earnings prospects—is not only inappropriate but also offensive and unethical when applied around the globe.1 While the complicated methodology of computing and interpreting lifetime earning potential can be applied comparatively within a country, it cannot adequately account for the diversity of national economies. Comparisons using exchange rates or purchasing power can lead to enormous anomalies like estimating Americans’ VSL at something over $1 million but less than $1,000 for those living and working in southern Asia. Finally, aggregate economic estimates are highly sensitive to the long lists of assumptions about discounting, the particulars of the climate system, and the wide range of human behavior that are built into the various RCP scenarios.

This issue of coverage and the applicability of economic metrics of damages in a broader context is larger than just the VSL. It extends to the five “Reasons for Concern” (RFCs): risks to unique and threatened systems, including human settlements, calibrated in morbidity and mortality (RFC1); multidimensional consequences of extreme events (RFC2); aggregate global (RFC3) and regional (RFC4) impacts, which are no longer exclusively calibrated in terms of currency; and sudden and massive changes in climate trajectories (RFC5).

To this point, Figure 5 and Table 2 display the latest (as of summer 2022) version of the iconic IPCC-created burning embers representations and the underlying sources of key vulnerabilities. They are replicas of Figure 1 and Table 1 from Brian O’Neill et al. (2017) in Nature Climate Change,2 and they are the direct descendants of similar graphics in the contribution of Working Group II to IPCC AR6.

Figure 5 includes the caption of the original figure to assist in understanding its content. Since the figure depends on the table, the easiest way to read the content is backwards: find a key vulnerability of interest, look at the correlation portion of the table to see which RFCs are in play, go to the corresponding column in the figure to see when the assessed concern becomes threatening, and then read the icons to attribute the concern to a sector or region.

IV. Mitigation and Adaptation as Investments

Mitigation and adaptation are the two major categories of response to climate change risks. Thinking in terms of risk, investment in mitigation is designed to reduce the likelihood of bad events occurring. To be effective, mitigation must be global in nature because the impact of each molecule of greenhouse gas is independent of the location of its source; and that fact brings the tragedy of the commons problem directly into play. It is generally accepted that humanity tends to underinvest in mitigation as a result.

Put another way, investment in mitigation is to climate change what vaccination is to a global coronavirus pandemic. Both reduce the flow of harmful economic or health effects along a causal chain. Both address harms that have been labeled by some as existential threats to humanity. The label fits, but only in the probabilistic sense that either threat can kill a human being anywhere around the globe; we don’t know who or where, but we know it will happen, sometimes before you finish reading this sentence. And the goals of both tend to be characterized as reducing the chance of crossing the threshold of what society holds to be a level of intolerable risk, as discussed above. The issue for economics, therefore, is not to weigh benefits against cost to characterize an optimal investment that will be impossible to achieve; it is to minimize the global cost of achieving a globally acceptable societal objective even if its foundation is only qualitatively determined.

  • 1Thomas J. Kniesner & W. Kip Viscusi, The Value of a Statistical Life, in Oxford Rsch. Encyclopedia of Econ. & Fin. (Jonathan H. Hamilton ed., 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3379967.
  • 2Brian C. O’Neill et al., IPCC Reasons for Concern Regarding Climate Change Risks, 7 Nature Climate Change 28 (2017).
Enhanced burning embers diagram.

Figure 5. Synthetic representation of five Reasons for Concern based on eight key vulnerabilities across 12 sectors and categories of risk. Source: Joseph E. Aldy et al., Keep Climate Policy Focused on the Social Cost of Carbon, 373 Science 850 (2021), https://www.science.org/doi/abs/10.1126/science.abi7813.

Table of key sources of vulnerability.

Table 2. Key sources of vulnerability as identified by IPCC AR6 authors and their association with RFCs. Source: Joseph E. Aldy et al., Keep Climate Policy Focused on the Social Cost of Carbon, 373 Science 850 (2021), https://www.science.org/doi/abs/10.1126/science.abi7813.

In contrast, investment in adaptation is designed to diminish the damages associated with a bad event. Climate impacts are generally local, or at most regional, so it is easier to quantify estimates of both costs and benefits (damages avoided). Put another way, investment in adaptation to climate impacts is what new therapeutics are to a global pandemic. Both shift the distributions of harm in a risk matrix down and to the left by reducing the harmful economic or health effects of an original cause.

Both mitigation and adaptation can be thought of as investments in insurance in the sense that present expenditures produce only uncertain future benefits (that are sometimes widely and other times inequitably distributed). When societies, governments, and individuals invest to mitigate climate emissions or adapt to climate impacts, they incur up-front costs to enjoy projected benefits along an uncertain time frame.

Both mitigation and adaptation need to be informed by the likelihood of projected trends. Mitigation options should be informed by confidence in the attribution of damaging events to human sources and by confidence in the understanding of the causal chain on which projections are based. The point of mitigation policy is to achieve a targeted temperature limit whose associated climate impacts produce consequences that do not greatly surpass tolerable levels of risk. The lower the temperature target, the more urgent the investments in mitigation.

Some but not all adaptation can be driven by observed local climate changes, regardless of their ultimate source. In fact, responsive adaptation can sometimes be designed effectively with little more than strong detection of past events, sound understanding of local trends, and the political will to act. Long-term investments in proactive adaptation, however, must be informed by assessed confidence in projected climate change scenarios. Proactive adaptation must also be informed by projections of how future risks will change over time in response to global mitigation efforts of uncertain efficacy.

Accounting for investment in adaptation of any type is straightforward but important. It is nearly always the case that an adaptive response cannot be 100% effective against all possible potentially damaging future states. Perfect effectiveness would simply cost too much.

Investments in adaptation can take many forms. Changes in behavior and social practices, construction of protective infrastructure, and moving out of harm’s way all come to mind. So does insurance. As noted previously, when individuals are not convinced that the quoted premiums for coverage accurately portray the relative likelihood of an insurable loss, they reveal their attitudes toward risk with the deductibles they select. Deductibles are subtracted from claims up to the total amount of coverage, so they represent potential losses against which individuals are willing to self-insure. That is, deductibles represent what might be termed the individual level of tolerable risk—the product of subjectively perceived “likelihood” times “consequence.”

Figure 6 displays some of the foundations of climate science that determine the constraints and opportunities in designing mitigation and adaptation actions.1 The top panel contrasts a business-as-usual emissions trajectory through the end of the century with a trajectory that caps annual emissions between 15 and 20 gigatons of carbon starting in 2050 and another that produces an 80% decline in emissions during the second half of the century. The lower panel translates these three emissions trajectories into corresponding CO2 concentrations.

It is very important to understand that Figure 6 also reveals that stabilizing emissions at any level does not stabilize the concentration of CO2 in the atmosphere. That is because concentrations depend on cumulative emissions and CO2’s persistence over time. Put another way, net annual emissions that exceed zero continue to contribute to growing CO2 concentrations and rising temperatures, which is a challenge if the goal is to limit temperature rise. However, net emissions could fall to zero if the remaining emissions are balanced by removals of CO2 from the atmosphere, as discussed in the Solutions module.
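
A back-of-the-envelope sketch makes the point; the airborne fraction and the tons-per-ppm conversion used below are rough, assumed values for illustration, not numbers taken from Figure 6.

```python
# Illustrative toy model (not the model behind Figure 6): constant positive net emissions
# keep adding to atmospheric CO2 because concentrations track cumulative emissions.
# Assumptions: roughly 45% of emitted CO2 stays airborne, and 1 ppm of CO2 ~ 2.13 GtC.
AIRBORNE_FRACTION = 0.45       # rough long-run share remaining in the atmosphere (assumed)
GTC_PER_PPM = 2.13             # gigatons of carbon per ppm of atmospheric CO2 (approximate)

def concentration_path(start_ppm, annual_emissions_gtc, years):
    """Return the CO2 concentration path under constant annual net emissions."""
    path = [start_ppm]
    for _ in range(years):
        path.append(path[-1] + AIRBORNE_FRACTION * annual_emissions_gtc / GTC_PER_PPM)
    return path

# Stabilizing emissions at 10 GtC per year from a 420 ppm starting point still adds
# roughly 2.1 ppm every year; only near-zero net emissions stabilize concentrations.
print(concentration_path(420.0, 10.0, 50)[-1])   # roughly 526 ppm after 50 years
print(concentration_path(420.0, 0.0, 50)[-1])    # stays at 420 ppm
```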

Recall that Figure 5 has already presented Reasons for Concern in isolation. Now, Figure 7 places RFCs into the context of future emissions to illustrate harms that could be avoided by stabilizing temperatures below particular levels along specified but artificial socioeconomic scenarios. It places, more specifically, temperature limits achieved along ranges of possible futures against levels of concern for the five metrics.

Line graph showing Three emissions trajectories for carbon over the rest of the 21st century.

Figure 6. Three emissions trajectories for carbon over the rest of the 21st century produce corresponding concentration pathways of carbon dioxide in the atmosphere. Source: Nat’l Acads. Sci., Eng’g & Med., Climate Stabilization Targets (2010) (Figure Syn.4).

Two charts: burning embers and bench-markings for alternative temperature targets with effective mitigation.

Figure 7. The right-hand image shows the most recent version of the “burning embers” representation of five Reasons for Concern (Figure 5) derived from eight key vulnerabilities (Table 2) tracked through 12 different sectors or critical contexts. The left-hand image elaborates uncertainty bounds around the left-hand side of Figure 4. The left axis provides benchmarks for alternative temperature targets with effective mitigation. The ranges in the figure show potential futures for policies whose efficacy is uncertain. Source: IPCC, Climate Change 2022: Impacts, Adaptation and Vulnerability, Summary for Policymakers 16 (2022), https://www.ipcc.ch/report/ar6/wg2/downloads/report/IPCC_AR6_WGII_SummaryForPolicymakers.pdf.

Many mitigation policy discussions speak to limiting temperature rise to 1.5°C to 2.0°C by 2100, though warming had already exceeded 1.0°C by the third decade of the 21st century.

Comparisons of the 1.5°C and 2.0°C temperature targets were the topics of an official IPCC response to an invitation “to provide a Special Report in 2018 on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways.”1 That report concluded, with high confidence, that climate-related risks for natural and human systems will increase with warming of up to 1.5°C above pre-industrial temperatures but will still be lower than at 2°C of warming. It also noted, with high confidence, that the risks will depend on the magnitude and rate of warming, geographic location, levels of development and vulnerability, and the choices and implementation of adaptation and mitigation options. It said, with medium confidence, that risks to global aggregated economic growth due to climate change impacts are projected to be lower at 1.5°C than at 2°C by the end of this century.

Investments in mitigation and adaptation frequently compete for the same scarce financial resources. It is important, therefore, to recognize that these two alternatives are often complements; investments in one frequently benefit the other. Table 3 demonstrates this point with a matrix whose entries always complete sentences that begin with “Advancing ‘this’ . . . will strengthen ‘that’ . . . .” For example, Advancing adaptation will strengthen limiting because “Any given degree of climate change may be associated with less severe impacts and disruptions of human and natural systems.” One entry is particularly hopeful: Advancing informing (efforts) will strengthen advancing science and technology because “Science may be more attuned to decision needs, and public support for advances in science is likely to increase.”

The Complementarities of Mitigation Table

Table 3. The Complementarities of Mitigation (“Limiting” in the vernacular of the National Academy), Adaptation, Advancing Science and Technology, and Informing Decision Makers. Source: Nat’l Academies of Sci., Eng’g & Medicine, America’s Climate Choices 75 (2011), https://www.nationalacademies.org/our-work/americas-climate-choices.

V. Social Cost of Carbon

Current common practice uses one particular aggregate estimate of economic damage—the social cost of carbon dioxide (SC-CO2). This estimate is defined as the value of the damage resulting from the emission of one more ton of CO2. It also can be interpreted as the damages that can be avoided by removing one ton of CO2 from the emission stream (that is, focusing on the benefits rather than the costs). The SC-CO2 is the most widely used way to summarize aggregate estimates of the economic consequences from climate change impacts.1 There are analogous estimates of the social cost of methane (SC-CH4) and the social cost of nitrous oxide (SC-N2O), since those gases are also greenhouse gases that contribute to warming, albeit along different timescales.

Climate models incorporate representations of numerous natural and physical systems. Other assumptions include normative and behavioral parameters such as personal or social consumption impatience, attitudes toward risk, and consumption sensitivities to income and relative prices. Estimates of the SC-CO2 are thus contingent on a long list of parameters and assumptions.

Figure 8 shows that estimates of any of these social costs are contingent on choices about discounting and risk aversion. They are also determined by other sets of assumptions from their underlying climate and economic modeling: policy interventions to reduce emissions (or not) and/or to ameliorate damages (or not), population growth profiles, underlying technological changes, international trading structures, the availability of economic resources, and so on.

Importantly, social cost estimates of anything are also specific to time. Because the calculation is model-based, it is possible to make estimates for future benchmark dates. These estimates increase over time, not because modeling sensitivities or parameter choices have changed, but rather because, for any specific modeling configuration, the growing damages projected into the future lie closer to the later benchmark date and are therefore discounted less.
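
A toy calculation, with an entirely assumed path of marginal damages rather than any official model, illustrates why estimates rise with the benchmark year and fall with the discount rate. The absolute dollar values it produces are meaningless; only the comparative pattern matters.

```python
# Illustrative sketch (hypothetical damage numbers, not an official SC-CO2 model):
# the social cost of one ton of CO2 as the discounted sum of the extra damages that
# ton causes in every future year, evaluated from a chosen benchmark year.

def sc_co2(benchmark_year, discount_rate, horizon_year=2300):
    """Toy SC-CO2: present value (in benchmark-year dollars) of a marginal damage stream."""
    total = 0.0
    for year in range(benchmark_year, horizon_year + 1):
        # Assumed marginal damage path: damages per ton grow as the climate warms.
        marginal_damage = 0.10 * 1.02 ** (year - 2020)   # dollars per ton per year (assumed)
        total += marginal_damage / (1 + discount_rate) ** (year - benchmark_year)
    return total

for rate in (0.025, 0.03, 0.05, 0.07):
    print(f"2020 estimate at {rate:.1%} discount rate: ${sc_co2(2020, rate):,.0f} per ton")

# Later benchmark years raise the estimate even with identical assumptions,
# because larger future damages sit closer to the evaluation date.
print(f"2050 estimate at 3.0% discount rate: ${sc_co2(2050, 0.03):,.0f} per ton")
```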

The time sensitivity of the estimates independent of discounting was displayed forcefully by the U.S. Environmental Protection Agency (EPA) in 2016 when it reported mean estimates for SC-CO2 in five-year increments from 2015 through 2050 for three alternative discount rates—2.5%, 3.0%, and 5.0%.2 The average SC-CO2 ranged from $11 to $56 in 2015 and from $26 to $95 in 2050.

Since the choices of discount rate and geographic coverage are discretionary, it is possible to manipulate social cost estimates for political gain. In 2020, for example, the U.S. Government Accountability Office reported that EPA, in response to a 2017 Executive Order, had begun to use discount rates of 3% and 7% (as opposed to 2.5%, 3%, and 5%) in calculating SC-CO2.3 EPA also had changed the calculation of the impacts to include only U.S. damages rather than considering the worldwide economic consequences. The new modeling produced dramatically lower costs: $6 for 2020 up to only $9 for 2050. Making these two discretionary changes therefore reduced the marginal social cost of CO2 emissions to nearly zero—not because the economics required it, but because key discretionary assumptions were altered.

  • 1The social cost of carbon (SCC) is sometimes cited as a perfectly analogous aggregate that is equal to 3.67 times the equivalent SC-CO2 value (because the ratio of the molecular weight of CO2 to the atomic weight of carbon is 44/12, or 3.67). Care must be taken to recognize the units when any calculation is undertaken, since the error of misreporting can be a multiple of almost 4.
  • 2Interagency Working Grp. on Soc. Cost of Greenhouse Gases, Technical Update of the Social Cost of Carbon for Regulatory Impact Analysis—Under Executive Order 12866 (2016), https://19january2017snapshot.epa.gov/sites/production/files/2016-12/documents/sc_co2_tsd_august_2016.pdf.
  • 3U.S. Gov’t Accountability Off., Social Cost of Carbon, GAO-20-254 17 (2020), https://www.gao.gov/assets/gao-20-254.pdf
Bar chart showing distributions of estimates of SC-CO2.

Figure 8. Distributions of estimates of SC-CO2 are contingent on the discount rate, with the justification that a higher rate is associated with higher aversion to risk. Source: Nat’l Acads. of Sci., Eng’g & Med., Valuing Climate Damages: Updating Estimation of the Social Cost of Carbon 13 (2017), https://nap.nationalacademies.org/resource/24651/dbasse_176580.pdf.

The climate science also changes over time, and social cost estimates need to keep up. For example, a paper published in Nature in the late summer of 2022 reported in its abstract that the authors’ “preferred mean SC-CO2 estimate is $185 per tonne of CO2 ($44-413/t-CO2: 5-95% range, 2020 U.S. dollars) at a near-term risk-free discount rate of 2%, a value 3.6-times higher than the U.S. government’s current value of $51/t-CO2.”1 The “so what” and “why” questions were covered with great care in the Washington Post.2 That coverage is, in fact, a perfect place to start an assessment of how to incorporate this new information into an evaluation of what the new results mean for decisionmaking; significantly different results in one new paper need to be considered, but they do not mean that conventional wisdom needs to be completely overturned—at least, not yet.

The point is that science and social science evolve. As a reviewer of this module pointed out, in the same summer of 2022, many recent articles had unpacked the SCC and detailed major new innovations that lend strong support to substantially adjusting the values used by the U.S. government. In addition to the contribution of Working Group II to the IPCC’s sixth assessment report in 2022 and the National Academies’ review of the foundations of the social cost of carbon in 2017, Joseph Aldy et al. (2021) had contributed a new but not so dramatic analysis one year earlier;3 and climate-related extreme events exploded in the meantime. The bottom line, therefore, is that a distribution of the social cost of carbon, methane, or nitrous oxide is just a snapshot in time. The economics are dynamic because the science is dynamic, and the science is dynamic because climate change is accelerating.

Still, estimates of the social costs are the only vehicle for conveying aggregate damage information to decisionmakers across the United States and the rest of the world. They have begun to play an important role in many climate-related judicial proceedings, and Cass Sunstein has argued convincingly that keeping the distribution of values in line with modern science and economics is essential to its legal durability.4

It is important to emphasize that social cost estimates complement risk management strategies. Using the social cost of carbon enables one to weigh the risks of emissions against the costs of mitigating them by setting, for example, a temperature target or an emissions target.

VI. Conclusions

On both a micro scale and a macro scale, no investment in adaptation or mitigation will ever be sufficient to guarantee complete security, even in a stationary climate. In a dynamic climate, the challenge is so great that decisionmakers, as well as the general public, must admit to that reality. It will always be possible to ameliorate some damage, but it is impossible to preclude some level of residual damage. Therefore, at least two fundamental questions must be explored, perhaps in a courtroom: “How much risk (likelihood times unavoidable damage) is intolerable?” and “How can anybody be assured that any accepted temperature target (e.g., 1.5 or 2.0 degrees Celsius) or atmospheric concentration threshold (e.g., 350 parts per million) or personally articulated intent will continue to be sustained when both the likelihood and consequences of a breach are changing and therefore unknown?” But, what is truly unknown?

John Holdren, as science advisor to President Barack Obama, frequently remarked that people have three choices when they think about responding to climate risks. One is to mitigate (abate or limit) the possibility of climate impacts through actions and investments designed to reduce greenhouse gas emissions. The second is to adapt to climate change through actions and investments designed to reduce the consequences of climate effects. The third is to suffer. Maximally avoiding the third option—though never completely avoiding it—will depend critically upon our abilities to process information about the relative likelihoods of climatic changes and the projected consequences of those changes.