# Notes: ENVECON 176 / Climate Change Economics


These notes are for Summer 2021's Climate Change Economics, a course that introduces global warming problems and solutions through an economic lens.

## Climate Science

The Sun emits energy in the form of visible light, while the Earth emits energy in the form of heat (or infrared light). This is because the Earth is far cooler than the Sun, and hotter bodies emit at shorter wavelengths.

On average, the Earth receives 342 W of energy per square meter. A significant proportion of the outgoing infrared radiation is absorbed and re-emitted back toward the surface - this is the greenhouse effect. It is mediated by trace gases, which comprise just 0.04% of the atmosphere. The biggest is carbon dioxide, but methane also has a strong effect. Water vapor is the strongest contributor to the greenhouse effect, followed by CO2, ozone, methane, and N2O.

Some components cool the climate by reflecting light, including pollutants from coal and oil plants. Aerosols reflect and absorb sunlight, while also seeding smaller cloud droplets that increase reflection and suppress rainfall.

Climate sensitivity is amplified by feedback cycles - water vapor, reduced ice and snow cover, and changes in cloudiness. There are a variety of estimates from tweaking models, but the IPCC puts the equilibrium estimate at around 3 degrees Celsius. Meanwhile, transient climate sensitivity (measured at the moment CO2 doubles) is between 1.5 and 2.5 degrees Celsius.

CO2 emissions from fossil fuels and land use change are absorbed in part by land and the ocean, including the deep ocean. “Peak oil” is a myth - there are huge reserves of oil and coal. Both land and ocean absorb less CO2 as the temperature increases.

Anthropogenic methane emissions come from livestock, rice cultivation, landfills, and fossil fuels. Methane is harder to regulate because these sources are decentralized. The atmospheric lifetime of methane is also much shorter than that of CO2.

We can use ice cores to measure CO2 concentration and temperature change. Tree rings are less reliable. Temperatures and CO2 emissions have risen dramatically in the last 50 years, while ice and snow have retreated. These changes are anthropogenic: models that simulate only natural forcings fail to explain the rises. The increases vary across the globe. Models for precipitation are much less clear. Sea level rise will be driven by thermal expansion, glacier melt, and ice sheet melt.

## Emissions

Fossil fuel emissions are measured both in CO2 and purely in carbon. Even the largest emitter - China, which also has the largest population - accounts for only about 30% of global emissions. On a per capita basis, the United States, Canada, and Australia emit the most. Other developed countries like Germany emit much less per capita.

The fuel mix of countries varies significantly. Fossil fuels include gas, oil, and coal. Biomass is generally carbon neutral. Nuclear/hydro/other renewables are carbon-free. Brazil relies mostly on hydro, South Korea has a ton of nuclear, while the U.S. and China rely on fossil fuels.

The biggest sector is electricity and heating, followed by industrial processes, transportation, and land use change. Within transportation, most GHGs come from cars. Within industry, chemicals, cement, and iron/steel produce the most. Within agriculture, most emissions come from soil (N2O) and enteric fermentation (CH4). Two thirds of building-use emissions are residential. Emissions from waste come mostly from landfills and wastewater treatment, in the form of methane.

Emissions grew at around 2% per year from 1970 to now, with a slowdown in the 1990s following the collapse of the Soviet Union, when the former communist countries - the “economies in transition” - slowed down. The biggest driver of growth is Asia, which grew at 5% per year from 2000 to 2010. Global per capita emissions have been roughly flat. By income, upper-middle-income groups have driven most of the increase. Emissions have increased for all greenhouse gases.

The Kaya equation decomposes emissions into population, per capita income, energy intensity, and carbon intensity: $Emissions = Population \times \frac{Income}{Population} \times \frac{Energy}{Income} \times \frac{Emissions}{Energy}$. While per capita income has grown in all regions, energy intensity has increased in the Middle East and Africa, and carbon intensity has increased in Asia.
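The decomposition can be sketched numerically. All figures below are made-up illustrative values, not real country data:

```python
# Sketch of the Kaya identity with made-up illustrative values.
# Emissions = Population * (Income/Population) * (Energy/Income) * (Emissions/Energy)

def kaya_emissions(population, income_per_capita, energy_intensity, carbon_intensity):
    """Multiply the four Kaya factors to recover total emissions."""
    return population * income_per_capita * energy_intensity * carbon_intensity

# Hypothetical country: 50M people, $20,000 per capita,
# 5 MJ per dollar of GDP, 0.07 kg CO2 per MJ
emissions_kg = kaya_emissions(50e6, 20_000, 5, 0.07)
print(emissions_kg / 1e9, "Mt CO2")  # roughly 350 Mt CO2
```

Holding any three factors fixed shows how growth in the fourth scales emissions proportionally.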

Scenarios are not predictions but storylines of plausible futures; in practice, people mistakenly use them as predictions. The first set, IS92, comprised six scenarios used throughout the 90s. The next set, SRES, included four storylines, evaluating business-as-usual futures along two axes - globalization versus regionalization, and economic versus environmental development. The A2 scenario, with rapid population and emissions growth, is most like the real world.

The third set, RCP, is defined in purely physical terms and is standard today, though it is harder to use in economic models. The RCP scenarios model mitigation, with scenarios featuring reduced radiative forcing. The RCP8.5 scenario is very similar to the A2 scenario.

We can convert between CO2 and C using the ratio of molecular weights - 44 for CO2 and 12 for carbon. We convert non-CO2 concentrations into equivalent CO2 concentrations by expressing them in terms of radiative forcing. Emissions are more difficult to equate because gases differ in their warming efficiency over time - a common solution is to pick a time horizon T and aggregate the cumulative radiative forcing.
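As a quick check of the 44:12 ratio, a minimal conversion helper:

```python
# Convert between mass of CO2 and mass of carbon using the
# molecular weight of CO2 (44) and the atomic weight of C (12).

def co2_to_carbon(mass_co2):
    return mass_co2 * 12 / 44

def carbon_to_co2(mass_c):
    return mass_c * 44 / 12

# 1 Gt of carbon corresponds to roughly 3.67 Gt of CO2
print(round(carbon_to_co2(1.0), 2))  # 3.67
```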

## Reductions and Costs

A technique is a combination of input goods that produces a specified level of output. A firm will choose an efficient technique - one where you cannot reduce the level of one input good while maintaining the same output - minimizing usage of input goods. The efficient techniques form an isoquant, a line that divides inefficient techniques from infeasible ones. Isoquants slope downward, and output strictly increases as isoquants move outward from the origin.

An isocost line depicts sets of inputs that correspond to the same cost. A firm will pick the input combination on an isoquant that is tangent to an isocost line. From this tangency, the total cost of producing a given quantity of output can be inferred.
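A small numeric sketch of the tangency condition, assuming a hypothetical Cobb-Douglas technology $Q = \sqrt{xy}$ (an illustrative functional form, not one from the course):

```python
# Sketch of cost minimization for a hypothetical Cobb-Douglas firm
# Q = sqrt(x * y) with input prices px and py. At the optimum the isocost
# line is tangent to the isoquant: the input ratio y/x equals px/py.
import math

def cost_minimizing_inputs(q, px, py):
    x = q * math.sqrt(py / px)
    y = q * math.sqrt(px / py)
    return x, y

def min_cost(q, px, py):
    x, y = cost_minimizing_inputs(q, px, py)
    return px * x + py * y  # equals 2 * q * sqrt(px * py)

# q = 10 units of output, px = 4, py = 1
print(cost_minimizing_inputs(10, 4, 1))  # (5.0, 20.0)
print(min_cost(10, 4, 1))                # 40.0
```

Note that the cheap input ($y$) is used four times as intensively as the expensive one, exactly offsetting the 4:1 price ratio.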

Each technique produces a certain amount of pollution (in this case, greenhouse gases). An isopollution curve depicts combinations of goods that produce the same level of pollution. Lowering pollution is costly, so firms have an incentive to pollute.

Governments have many tools to reduce pollution. With a technology standard, the government forces a firm to use a certain technology; the disadvantage is that this requires industry knowledge. An effluent standard caps pollution based on a level of input and is enforced by a fine. A pollution tax changes the price of polluting, changing the firm’s cost structure.

Firms can also adjust their level of output. Government regulations shift the cost curve upward. Assuming the firm is a price taker, it produces at the point where marginal cost equals price. In a market, a regulation reduces the quantity and increases the price. A tax, consequently, reduces welfare (setting aside the benefits of reduced pollution).

We can graph marginal abatement costs as price per unit of emissions against emissions reduction. There are many emission reduction options with negative engineering costs. But these cost estimates ignore consumer preferences, principal-agent problems, information problems, transaction costs, and liquidity constraints.

With revenue recycling, a carbon tax can produce a net welfare gain. The revenue can be used to reduce income taxes, increasing economic activity by incentivizing labor.

Abatement costs can change as a result of technological progress via capital stock turnover, R&D, and learning-by-doing. Over time, abatement costs decrease.

A study of emission costs varied the technologies introduced, concentration targets, interpretation of the target, and delayed participation. The 450ppm scenario is no longer plausible. The models disagree significantly on the costs.

## Climate Change Impacts

Climate change has mostly negative impacts on water availability, ecosystems, food, coastlines, and health. There are some positive impacts - water availability in the tropics and cereal productivity in mid/high latitudes. But overall, the impacts appear negative.

For agriculture, an area will see increased yields if it currently has suboptimal temperatures. Increased CO2 also directly increases yields. Process-based models simulate the growth of the plants themselves. They capture the science of crops and are accurate across different climate conditions, but often ignore adaptation options and other important factors. Empirical models fit a regression to predict yield, farmland value, or profits based on changes in climate. They involve fewer assumptions and capture the full range of natural and human reactions. However, they require much more data (which satellites are now providing). Examples of empirical models are cross-sectional models (comparing areas while trying to control for confounding variables) and fixed-effects panel models (comparing differences in outcomes across points in time, although these may be attributable to weather shocks rather than climate).

Studies suggest largely negative impacts on agriculture - for crops like maize, wheat, rice, and soy - with exceptions in particular regions. Crops seem to have a threshold temperature after which yields fall off quickly. Studies also suggest increases in energy consumption, as the need for air conditioning is unbounded and outweighs the reduced need for heating. Water availability is mixed - in many areas models disagree, but generally, areas that are already dry will have even greater water problems. Sea level rise could submerge important urban areas, although some adaptation measures could mitigate the harms. Global warming will multiply the risk of malaria transmission. Additional impacts include forestry, time use, catastrophic impacts, human settlements, water resources, and ecosystems. Impacts not included are productivity, conflict, and crime.

The general strategy for economic valuation is to add up all relevant components. Total economic value is the sum of use value and non-use value. Use value can be either direct (e.g. the land a farmer uses) or indirect (e.g. an ecosystem cleaning the water that reaches the farm). Non-use value comprises existence value (as for polar bears), altruistic value, and option value. Economic value is anthropocentric and consequentialist.

Value can be aggregated or computed sector-by-sector. Aggregate approaches (e.g. regression on GDP) lose out on details but are comprehensive. Sector-by-sector approaches are more common. For market goods like food, price times quantity is a good approximation of the willingness to pay. With climate change, it is more expensive for farmers to produce a given quantity of a good. For a large net exporter in an international market, an increase in price may offset lost surplus for consumers and suppliers.

For non-market goods, revealed preference methods use proxies to infer prices; examples include travel cost and hedonic pricing methods. Stated preference methods ask people to value or rank goods, but these suffer from strategic bias (respondents strategically give misleading answers), information bias (a respondent may be inexperienced in a domain), framing bias (anchoring, ordering, etc. can influence answers), and hypothetical bias (people can give answers without following through). In a benefit transfer study, you transfer values estimated in one location to another. The value of a statistical life may assign more value to lives in rich countries.

Dynamic vulnerability refers to changes in systems that alter damage estimates from climate change, where the changes themselves are neither caused by climate change nor a reaction to it. One example is income: whether a rise in income increases or decreases damage estimates is an empirical question. Examples of adaptation include managed retreat and cooling. Net damages equal gross damages minus the benefit of adaptation plus the cost of adaptation. The optimal level of adaptation is where the marginal cost of adaptation equals its marginal benefit. Assessment models usually assume optimal adaptation decisions.
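The optimality condition can be sketched with hypothetical linear marginal benefit and cost curves (the functional forms and all parameter values below are assumptions for illustration):

```python
# Sketch: optimal adaptation where marginal benefit equals marginal cost,
# using hypothetical linear curves MB(a) = b0 - b1*a and MC(a) = c1*a.
# All parameter values are made up for illustration.

def optimal_adaptation(b0, b1, c1):
    # MB(a) = MC(a)  =>  b0 - b1*a = c1*a  =>  a* = b0 / (b1 + c1)
    return b0 / (b1 + c1)

def net_damages(gross, a, b0, b1, c1):
    benefit = b0 * a - 0.5 * b1 * a ** 2  # area under MB up to a
    cost = 0.5 * c1 * a ** 2              # area under MC up to a
    return gross - benefit + cost         # gross - benefit + cost of adaptation

a_star = optimal_adaptation(b0=10, b1=1, c1=1)
print(a_star)                              # 5.0
print(net_damages(100, a_star, 10, 1, 1))  # 75.0
```

Adapting more or less than $a^*$ raises net damages, which is why assessment models that assume optimal adaptation report a lower bound on residual damages.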

At between 2 and 3 degrees Celsius of warming, most prior studies reach damage estimates of around 3% of GDP. More recent studies based on statistical methods have found much greater damages. Damage estimates project benefits for countries like Canada, but harms for most poor countries. Although some welfare estimates show benefits from modest warming, further increases are harmful. The UNFCCC didn’t specify how much to reduce emissions or how to enforce standards, but it did provide a framework for future decisions.

## Cost Benefit Analysis

The net present value is related to the discount rate $r$ via the discount factor $\frac{1}{1+r}$. A high discount rate implies that future payments are worth less; for climate policy, it determines how much we care about future costs and benefits. The real discount rate is the nominal discount rate adjusted for inflation. The NPV equation is: $NPV = \sum_{t=0}^T \frac{B_t - C_t}{(1+r)^t}$. To calculate the social cost of carbon, we aggregate the difference between damages in the base run and the run with extra emissions: $SCC = \sum_{t=0}^T \Delta D_t \times \frac{1}{(1+r)^t}$. If we consider emitting CO2 in a particular year $t$, we discount back to that year: $SCC_t = \sum_{s=t}^T \Delta D_s \times \frac{1}{(1+r)^{s-t}}$. In a CBA, we evaluate damages under BAU and under a policy, compute the delta, and calculate the NPV. The SCC is an approximation method for CBA.
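The NPV and SCC formulas above can be sketched directly; the discount rate and damage deltas below are illustrative assumptions:

```python
# Sketch: NPV of a benefit/cost stream and a simple SCC calculation.
# The discount rate and damage deltas are illustrative assumptions.

def npv(benefits, costs, r):
    return sum((b - c) / (1 + r) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

def scc(delta_damages, r):
    # extra damages from an emissions pulse at t = 0, discounted back
    return sum(d / (1 + r) ** t for t, d in enumerate(delta_damages))

# pay 100 today for a benefit of 110 next year at r = 10%: NPV is about 0
print(npv([0, 110], [100, 0], 0.10))
# 1 unit of extra damage in each of three years at r = 5%
print(round(scc([1, 1, 1], 0.05), 2))  # 2.86
```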

Two justifications for discounting are consumption smoothing (you’ll be better off tomorrow) and impatience (whether due to risk of death or other reasons). Consumption smoothing is modeled with a concave utility function such as $U(c) = \ln(c)$: $NPV = FuturePayment \times \frac{U'(c_f)}{U'(c_p)} = FuturePayment \times \frac{c_p}{c_f}$. If we let $g_t$ represent the consumption growth rate in year $t$, we have $c_f = c_p \prod_{s=1}^t (1 + g_s)$, so $DF_{CS}^t = \frac{c_p}{c_f} = \frac{1}{\prod_{s=1}^t (1+g_s)}$. We model impatience with $\rho$, the pure rate of time preference; $DF_{IP}^t = \frac{1}{(1+\rho)^t}$ is the time discount factor.

In Ramsey discounting, we apply both discount factors: $DF_t = DF_{CS}^t \times DF_{IP}^t \approx \frac{1}{1 + \rho + g_1} \times \dots \times \frac{1}{1 + \rho + g_t}$. The Ramsey discount rate is approximately $r_t = \rho + \eta g_t$, where $\eta$ is the curvature of the utility function (how quickly marginal utility declines with consumption). Ramsey discounting is interpreted as the preference of a global social planner.
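The Ramsey rate and the resulting cumulative discount factor can be sketched as follows (the values of $\rho$, $\eta$, and the growth path are illustrative assumptions):

```python
# Sketch of Ramsey discounting r_t = rho + eta * g_t.
# rho, eta, and the growth path below are illustrative assumptions.

def ramsey_rate(rho, eta, g):
    return rho + eta * g

def discount_factor(rho, eta, growth_path):
    df = 1.0
    for g in growth_path:  # multiply per-period factors 1 / (1 + r_t)
        df /= 1 + ramsey_rate(rho, eta, g)
    return df

# rho = 1%, eta = 1.5, 2% consumption growth for 10 years
print(ramsey_rate(0.01, 1.5, 0.02))                       # about 0.04
print(round(discount_factor(0.01, 1.5, [0.02] * 10), 3))  # about 0.676
```

Setting $\rho = 0$ isolates the consumption-smoothing motive; setting $\eta = 0$ isolates pure impatience.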

In equity weighting, you use these utility functions to give greater weight to people with lower current consumption. A caveat to equity weighting is that you shouldn’t use it if you can distribute wealth via transfers. The process is: convert all damages into utility, time discount the utilities, sum them up, and convert that into equivalent present consumption.

Two philosophies on the discount factor are the descriptive approach (what is the return on the best alternative?) and the prescriptive approach (how should consumption be spread over time?). The prescriptive approach includes the standard Ramsey approach and equity weighting. For the descriptive approach, you have a circular problem where you need to use the Ramsey discount rate to evaluate the best alternative.

To model changing discount rates, let $r_t$ denote the discount rate between time $t$ and $t+1$, so that $DF_t = \prod_{s=0}^t \frac{1}{1+r_s}$. Over long horizons, the lowest discount rate dominates. Weitzman’s declining discount rate provides a schedule of declining rates.
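A sketch of why the lowest rate dominates: when the long-run discount rate is uncertain, averaging discount factors (not rates) across scenarios yields a certainty-equivalent rate that declines toward the lowest scenario rate over time. The scenario rates and probabilities below are made up:

```python
# Sketch of Weitzman-style declining discounting: average the discount
# *factors* across uncertain rate scenarios, then back out the implied
# certainty-equivalent rate at horizon t. Rates/probabilities are made up.

def certainty_equivalent_rate(rates, probs, t):
    avg_factor = sum(p / (1 + r) ** t for r, p in zip(rates, probs))
    return avg_factor ** (-1 / t) - 1

rates, probs = [0.01, 0.07], [0.5, 0.5]
for t in [1, 50, 200]:
    print(t, round(certainty_equivalent_rate(rates, probs, t), 4))
# the certainty-equivalent rate falls toward the 1% scenario as t grows
```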

## Optimal Climate Policy

Climate policy can change over time, so the optimal dynamic climate policy is selected as: $\max_{A_1,\dots,A_T} \sum_t [B_t(S_t) - C_t(S_t)] \times DF_t, \quad S_{t+1} = g(S_t, A_t)$. Command and control involves imposing standards. Market-based approaches like taxes and cap-and-trade build in much more flexibility. Cost effectiveness is about distributing reductions across firms to minimize total costs. Efficiency analysis also conducts a cost-benefit test, so it is more stringent. The cost-effective distribution is the one where marginal abatement costs are the same for all firms.
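The equal-marginal-cost condition can be sketched for two firms with hypothetical linear marginal abatement cost curves $MAC_i(q) = a_i q$ (the slopes and the total reduction target are assumptions):

```python
# Sketch: cost-effective split of a total reduction between two firms
# with hypothetical linear marginal abatement costs MAC_i(q) = a_i * q.
# Total cost is minimized where both firms face the same marginal cost.

def cost_effective_split(a1, a2, total_reduction):
    # a1*q1 = a2*q2 and q1 + q2 = R  =>  q1 = R * a2 / (a1 + a2)
    q1 = total_reduction * a2 / (a1 + a2)
    q2 = total_reduction - q1
    return q1, q2

q1, q2 = cost_effective_split(a1=2, a2=4, total_reduction=30)
print(q1, q2)          # 20.0 10.0
print(2 * q1, 4 * q2)  # equal marginal costs: 40.0 40.0
```

The low-cost firm (smaller $a_i$) abates more, which is exactly the allocation that permit trading or a uniform tax would decentralize.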

In a cap and trade system, firms need a permit to emit each unit of pollution. The initial allocation determines which firms come out as winners: firms with more permits and lower reduction costs can profit. While cap and trade achieves a specific reduction and is cost effective, a regulator may not know what the actual cost will be. A trading scheme exists in the EU and was attempted in the US.

With a carbon tax, firms must pay a tax per unit of emissions. With a Pigovian tax, the regulator doesn’t know how much reduction will occur. Firms cannot profit from a carbon tax, but the revenue can be used for something beneficial. A carbon tax is cost-effective because each firm will reduce emissions up to the point where its marginal reduction cost equals the tax.

If a regulator knew both abatement costs and damage costs, it could simply implement command and control. Choosing between cap-and-trade and a carbon tax involves comparing expected quantity inefficiency and price inefficiency: a flatter marginal damage curve and a steeper marginal abatement cost curve favor a carbon tax, and vice versa.

In a broader view, carbon pricing turns emissions reductions into a public good. In the EU ETS, the first and second phases failed because too many permits were handed out. The third phase involves auctioning allowances, and in phase IV more measures are built in to stabilize the price. The EU ETS has faced windfall profits and fraud.
