Climate Methods in Brief
Climate Model Evaluation
The World Climate Research Programme’s Coupled Model Intercomparison Project (CMIP) is a worldwide effort to establish a set of standard experimental protocols for the use of general circulation models (also called global climate models), or GCMs, in the development of climate scenarios. Essentially, CMIP is an attempt by the world’s climate modelers to improve the performance of GCMs, standardize methods of model evaluation, and make GCM outputs directly comparable.
Results from the CMIP project’s latest phase (Phase 5, or CMIP5) began to become available around the time WW2100 launched. CMIP5 ushered in a new wave of GCMs and resulting climate data, and the WW2100 climate modeling team saw it as an opportunity both to use the new CMIP5 GCMs in WW2100’s work and to assess the performance of the CMIP5 models for the United States Pacific Northwest as a whole.
To that end, the WW2100 climate team assessed 41 GCMs from CMIP5 for their ability to simulate various aspects of climate in the Pacific Northwest. The team’s results were published in the Journal of Geophysical Research: Atmospheres, where the researchers’ full results and methods can be reviewed. What follows is a summary of the methods employed and how they aided WW2100.
As the WW2100 team stated in their paper, their goal in evaluating the CMIP5 models was “to evaluate model performance in order to make informed recommendations to those who may use these model outputs” (Rupp et al., 2013). The researchers determined that these “downstream” users, including resource managers and other scientists assessing climate impacts, would be best served by the GCMs that gave the best statistical fit to the observed climate of the Pacific Northwest. (The team defined the Pacific Northwest as the area between 124.5°W and 110.5°W longitude and between 41.5°N and 49.5°N latitude, or roughly Oregon, Washington, Idaho, and western Montana.)
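To make the domain concrete, here is a minimal Python sketch, using the xarray library, of subsetting a gridded climate dataset to that bounding box. The file and variable names are hypothetical, not the study’s actual data files.

import xarray as xr

# Hypothetical CMIP5 monthly output file; "tas" is near-surface air temperature.
ds = xr.open_dataset("gcm_monthly_tas.nc")

# CMIP5 grids often store longitude in degrees east (0-360), so
# 124.5 W and 110.5 W become 235.5 and 249.5 in that convention.
pnw = ds.sel(lon=slice(235.5, 249.5), lat=slice(41.5, 49.5))

# Domain-mean temperature time series over the Pacific Northwest box
# (assumes latitude is stored in ascending order).
print(pnw["tas"].mean(dim=("lat", "lon")))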
To find the best fit, the team evaluated the CMIP5 GCMs according to their ability to re-create, in computer simulations, the observed climate of the historical period. This hindcasting ran from 1850 to 2005 and focused on temperature and precipitation. Observed climate data were taken from five gridded datasets of monthly means. A suite of statistics, or metrics, was calculated from both the hindcasts and the observations and then compared. These metrics included mean seasonal values, interannual variability, amplitude of the seasonal cycle, consistency in spatial patterns, and sensitivity to the El Niño Southern Oscillation, among others. The researchers used two methods to rank the performance of the GCMs based on these metrics. The first method assigned equal weight to each metric. The second method excluded metrics that were not considered robust, weighted the remaining metrics on the assumption that some are more important to the assessment than others, and avoided redundancy, since not all metrics are independent of one another.
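The equal-weight ranking idea can be illustrated with a short Python sketch: each model receives an error score per metric, the scores are normalized across models so no single metric dominates, and models are ranked by the average. The model names and numbers below are invented for illustration and are not the study’s values.

import numpy as np

models = ["GCM-A", "GCM-B", "GCM-C"]

# Rows are models; columns are error scores for individual metrics
# (e.g., mean seasonal bias, interannual variability error, ...).
errors = np.array([
    [0.8, 1.2, 0.5],
    [1.1, 0.9, 0.7],
    [0.6, 1.5, 1.0],
])

# Normalize each metric across models so all metrics are comparable.
normalized = (errors - errors.mean(axis=0)) / errors.std(axis=0)

# Equal weighting: a simple average across metrics, lowest error first.
overall = normalized.mean(axis=1)
print([models[i] for i in np.argsort(overall)])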
The result was a ranking of the models according to metrics that led to a subset of GCMs that the team’s methods determined were the best statistical fit for the climate of the Pacific Northwest.
Figure 1. A depiction of the climate model assessment work done for WW2100. Models are listed along the bottom; meteorological measures, including temperature and precipitation, are listed on the left. The graph depicts relative error: how well the models compare with one another when matched against actual historical measures for the Northwest. Warm colors depict greater error, cool colors less. The models are ordered from left (least error) to right (most error). (Image Source: Rupp et al., 2013)
Emissions Scenarios Selection
The biggest certainty in climate science is that increasing greenhouse gas (GHG) concentrations, especially of carbon dioxide, are heating the Earth’s atmosphere. Precisely how much regional temperatures will increase in response to a given rise in GHGs is not known, which is why climate researchers examine more than one GCM. The biggest uncertainty about climate at the end of the 21st century, however, stems from not knowing how much GHG human industry will continue to emit.
With their chosen models, the WW2100 climate team used GCM output (available from the CMIP5 project) for two emissions scenarios to incorporate a range of GHG concentration uncertainty. These emissions scenarios are known as Representative Concentration Pathways, or RCPs, a category created by researchers convened to support the work of the United Nations’ Intergovernmental Panel on Climate Change (IPCC).
RCPs are the new standard for modeling emissions uncertainty. There are four RCPs, representing different concentrations of greenhouse gases, or different possible futures depending on how much GHG human industry emits: RCP 8.5, RCP 6.0, RCP 4.5, and RCP 2.6. The number in each name is the radiative forcing, in watts per square meter (W/m²) above preindustrial levels, that the scenario is expected to produce by 2100 (e.g., RCP 8.5 produces 8.5 W/m²). Higher numbers therefore represent future scenarios with more emissions.
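As a quick illustration of the naming convention, the mapping from scenario name to nominal year-2100 forcing can be written out directly in Python; the values are simply the numbers embedded in the names.

rcp_forcing_wm2 = {"RCP 2.6": 2.6, "RCP 4.5": 4.5, "RCP 6.0": 6.0, "RCP 8.5": 8.5}

# Print scenarios from most to least forcing (and hence emissions).
for name in sorted(rcp_forcing_wm2, key=rcp_forcing_wm2.get, reverse=True):
    print(f"{name}: about {rcp_forcing_wm2[name]} W/m2 of forcing above preindustrial by 2100")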
For WW2100, we employed two RCPs: RCP 8.5 (the high emissions scenario that assumes human industry will continue to emit greenhouse gases at a growing rate) and RCP 4.5 (a middle-of-the-road scenario in which emissions will be curbed starting in the middle of this century).
Selecting Representative Climate Scenarios Using a Sensitivity Approach
Resource managers and other downstream users of GCM data often lack the considerable computing resources needed to process data from multiple climate models and scenarios. This is especially true when their impacts work must also consider other types of future scenarios, such as economics, demographics, and land use, which require still more computing resources. A common solution is to select a subset of representative models and scenarios. For WW2100, we selected a subset of three “representative” scenarios, that is, scenarios representative of the range of GCM projections under the RCPs, whose data could then be fed into WW2100’s later modeling.
To find this subset, the team conducted a sensitivity analysis in the Willamette River Basin that allowed them to select three representative GCMs from the 33 CMIP5 climate models for which future climate scenarios were available (out of the 41 GCMs evaluated in the first step).
The sensitivity analysis used simple perturbation experiments to estimate how changes in temperature and precipitation affect summertime streamflow in the Willamette River Basin. More specifically, using the Variable Infiltration Capacity (VIC) Macroscale Hydrologic Model, we simulated streamflow using historical weather data from 1975 to 2004. Then we ran a perturbation: using the same setup, we ran the hydrologic model again, but with incremental increases in temperature. The percentage by which summertime streamflow changes in these perturbation experiments provides an estimate of how sensitive it will be in a warmer climate. The same type of perturbation experiment was then repeated, this time for incremental precipitation increases and decreases.
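A minimal Python sketch of the perturbation logic follows. The function run_hydrology_model is a hypothetical stand-in for a full VIC simulation, with a made-up response built in purely so the sketch executes; the real experiments rerun VIC itself with perturbed forcings.

def run_hydrology_model(temp_offset_c=0.0, precip_scale=1.0):
    """Hypothetical stand-in for a VIC run over 1975-2004 forcings.
    Returns mean summer streamflow (m^3/s) using a toy response in
    which flow falls with warming and scales with precipitation."""
    base_flow = 250.0
    return base_flow * precip_scale * (1.0 - 0.08 * temp_offset_c)

baseline = run_hydrology_model()  # unperturbed historical run

# Incremental warming experiments.
for dt in (1.0, 2.0, 3.0):
    perturbed = run_hydrology_model(temp_offset_c=dt)
    change = 100.0 * (perturbed - baseline) / baseline
    print(f"+{dt} C -> {change:.1f}% change in summer streamflow")

# The same loop with precip_scale values (e.g., 0.9, 1.0, 1.1) in place
# of the temperature offsets gives the precipitation sensitivity.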
We then used these derived sensitivities to draw contours of constant summertime streamflow change on a scatter plot of the temperature and precipitation changes in the GCM output. With the contours as guides, the team selected GCMs and accompanying RCPs with the objective of spanning a wide range of warming (high, middle, and low) while also spanning a wide range of hydrological impact. Where multiple models were available to choose from in a category (high, middle, low), the team chose one of the better-performing GCMs according to the model ranking discussed above.
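The construction of such a selection plot can be sketched in Python with matplotlib. The sensitivity surface and the GCM change points below are invented for illustration; in the actual analysis both came from the VIC experiments and the CMIP5 output.

import numpy as np
import matplotlib.pyplot as plt

# Grid of hypothetical temperature (C) and precipitation (%) changes.
T, P = np.meshgrid(np.linspace(0, 6, 61), np.linspace(-15, 15, 61))

# Toy linearized sensitivity: ~ -8% flow per degree C, +1.5% per % precip.
flow_change = -8.0 * T + 1.5 * P

fig, ax = plt.subplots()
cs = ax.contour(T, P, flow_change, levels=[-50, -40, -30, -20, -10, 0])
ax.clabel(cs, fmt="%d%%")  # label each constant-streamflow-change contour

# Hypothetical GCM projections plotted over the contours.
ax.scatter([1.8, 2.9, 4.2], [3.0, -1.0, 2.0])

ax.set_xlabel("Temperature change (C)")
ax.set_ylabel("Precipitation change (%)")
plt.show()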
Figure 2. Representative scenarios selection plot for summertime streamflow change in the Willamette River Basin. Contours represent constant change in streamflow calculated from perturbation experiments. GCM precipitation and temperature changes are based on differences between 1970-1999 and projected changes for the period 2041-2070. Climate models are listed on the right. Models were run using the emissions scenarios RCP 4.5 (low scenario) and RCP 8.5 (high scenario), shown in blue and red respectively. (Image Source: Vano et al., 2015)
From this analysis, three scenarios, each a combination of a GCM run with one of the two RCPs, were ultimately selected.
The final representative selections are: the High Climate Change scenario (the HadGEM2-ES climate model run with RCP 8.5); the Reference Case scenario (the MIROC5 climate model run with RCP 8.5); and the Low Climate Change scenario (the GFDL-ESM2M climate model run with RCP 4.5). Note: GFDL-ESM2M was not ranked highly by performance, but it was selected nonetheless to represent the Low Climate Change scenario, as none of the lowest-warming models were ranked highly. Hereafter the scenarios will be referred to as HighClim, Reference, and LowClim.
The LowClim scenario represents a small temperature increase and small decrease in summertime streamflow (lowest impact). The HighClim scenario represents a large temperature increase and large decrease in summertime streamflows (highest projected impacts). The Reference scenario lies between the two extremes.
It is worth noting that an additional constraint was placed on the selection process: all requisite data for a given GCM had to be available for the downscaling procedure (see the section on downscaling below). This constraint ultimately limited the team to choosing from among 20 GCMs.
Downscaling
To feed data from the HighClim, Reference, and LowClim scenarios into WW2100’s later modeling work, the team performed a process called downscaling. Downscaling converts, or translates, the coarse resolution of GCM grids (whose cells can be as large as 375 km, roughly 233 miles, on a side) to a finer resolution, which for WW2100 was about 4 km, roughly 2.5 miles. This is done to account for the details of local topography and local climate, an adjustment especially needed in the Pacific Northwest, whose complex mountainous topography does not appear in detailed form in the GCMs.
For WW2100, we downscaled data from the HighClim, Reference, and LowClim scenarios for the Willamette Valley. The method used was Multivariate Adaptive Constructed Analogs (MACA), a downscaling method developed by University of Idaho researcher John Abatzoglou. The resulting downscaled data were then fed into the Envision model’s component models.
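MACA itself is a sophisticated statistical method, well beyond simple interpolation, so the Python sketch below illustrates only the resolution change involved: regridding a coarse GCM field to a roughly 4 km grid by bilinear interpolation with xarray. File names, variable names, and grid bounds are hypothetical.

import numpy as np
import xarray as xr

# Hypothetical coarse-resolution GCM temperature field.
coarse = xr.open_dataset("gcm_coarse_tas.nc")["tas"]

# Target grid at roughly 4 km (~0.0417 degree) spacing over a
# Willamette Valley-sized box.
fine_lat = np.arange(43.5, 45.8, 0.0417)
fine_lon = np.arange(-123.8, -121.8, 0.0417)

# Bilinear interpolation stands in here for the statistical downscaling.
fine = coarse.interp(lat=fine_lat, lon=fine_lon, method="linear")
fine.to_netcdf("tas_4km_interp.nc")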