1.5.3 Coupled Models: Evolution, Use, Assessment
The first National Academy of Sciences of the USA report on global warming (Charney et al., 1979), on the basis of two models simulating the impact of doubled atmospheric CO2 concentrations, estimated a global mean equilibrium surface temperature increase of between 1.5°C and 4.5°C, a range that has remained part of conventional wisdom at least as recently as the TAR. These climate projections, as well as those of the later three-model comparison by Schlesinger and Mitchell (1987) and most of those presented in the FAR, were the results of atmospheric models coupled with simple ‘slab’ ocean models (i.e., models omitting all changes in ocean dynamics).
The first attempts at coupling atmospheric and oceanic models were carried out during the late 1960s and early 1970s (Manabe and Bryan, 1969; Bryan et al., 1975; Manabe et al., 1975). Replacing ‘slab’ ocean models by fully coupled ocean-atmosphere models may arguably have constituted one of the most significant leaps forward in climate modelling during the last 20 years (Trenberth, 1993), although both the atmospheric and oceanic components themselves have undergone highly significant improvements. This advance has led to significant modifications in the patterns of simulated climate change, particularly in oceanic regions. It has also opened up the possibility of exploring transient climate scenarios, and it constitutes a step toward the development of comprehensive ‘Earth-system models’ that include explicit representations of chemical and biogeochemical cycles.
Throughout their short history, coupled models have faced difficulties that have considerably impeded their development, including: (i) the initial state of the ocean is not precisely known; (ii) a surface flux imbalance (in either energy, momentum or fresh water) much smaller than the observational accuracy is enough to cause coupled GCM simulations to drift into unrealistic states; and (iii) there is no direct stabilising feedback that can compensate for errors in the simulated salinity. The strong emphasis placed on the realism of the simulated base state provided a rationale for introducing ‘flux adjustments’ or ‘flux corrections’ (Manabe and Stouffer, 1988; Sausen et al., 1988) in early simulations. These were essentially empirical corrections that could not be justified on physical principles, and that consisted of arbitrary additions of surface fluxes of heat and salinity in order to prevent the drift of the simulated climate away from a realistic state. The National Center for Atmospheric Research model may have been the first to dispense with flux adjustments systematically, and it achieved simulations of climate change into the 21st century despite a persistent drift that still affected many of its early simulations. Both the FAR and the SAR pointed to the apparent need for flux adjustments as a problematic feature of climate modelling (Cubasch et al., 1990; Gates et al., 1996).
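To make the mechanism concrete, the following sketch (a toy illustration; the grid size, field names and coupling step are assumptions, not any particular model's code) shows the essence of a flux adjustment: a time-invariant, pre-computed correction field is added to the surface fluxes exchanged between the atmosphere and ocean components so that the coupled base state does not drift.

```python
import numpy as np

# Toy coupling step; names and shapes are hypothetical.
# A flux adjustment is a time-invariant 2-D correction field, diagnosed
# beforehand (e.g., from uncoupled spin-up runs), added to the surface
# fluxes at every coupling step to keep the base state from drifting.

nlat, nlon = 64, 128                                          # toy grid

# Pre-computed correction fields (diagnosed, not random, in practice).
rng = np.random.default_rng(0)
heat_adjustment = rng.normal(0.0, 5.0, (nlat, nlon))          # W m-2
freshwater_adjustment = rng.normal(0.0, 1e-6, (nlat, nlon))   # kg m-2 s-1

def couple_step(heat_flux_from_atmos, freshwater_flux_from_atmos):
    """Return the fluxes actually passed to the ocean component.

    The adjustment is independent of the evolving model state: it corrects
    the mean bias but cannot respond to errors in the simulated
    variability, one reason the practice was viewed as problematic.
    """
    heat_to_ocean = heat_flux_from_atmos + heat_adjustment
    freshwater_to_ocean = freshwater_flux_from_atmos + freshwater_adjustment
    return heat_to_ocean, freshwater_to_ocean

# Example call with zero incoming fluxes (purely illustrative):
h, f = couple_step(np.zeros((nlat, nlon)), np.zeros((nlat, nlon)))
```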
By the time of the TAR, however, the situation had evolved, and about half the coupled GCMs assessed in the TAR did not employ flux adjustments. That report noted that ‘some non-flux-adjusted models are now able to maintain stable climatologies of comparable quality to flux-adjusted models’ (McAvaney et al., 2001). Since that time, evolution away from flux correction (or flux adjustment) has continued at some modelling centres, although a number of state-of-the-art models continue to rely on it. The design of the coupled model simulations is also strongly linked with the methods chosen for model initialisation. In flux-adjusted models, the initial ocean state is necessarily the result of preliminary and typically thousand-year-long simulations to bring the ocean model into equilibrium. Non-flux-adjusted models often employ a simpler procedure based on ocean observations, such as those compiled by Levitus et al. (1994), although some spin-up phase is even then necessary. One argument brought forward is that non-adjusted models made use of ad hoc tuning of radiative parameters (i.e., an implicit flux adjustment).
This considerable advance in model design has not eliminated the spread in model results. This is not a surprise, however, because it is known that climate predictions are intrinsically affected by uncertainty (Lorenz, 1963). Two distinct kinds of prediction problems were defined by Lorenz (1975). The first kind was defined as the prediction of the actual properties of the climate system in response to a given initial state. Predictions of the first kind are initial-value problems and, because of the nonlinearity and instability of the governing equations, such systems are not predictable indefinitely into the future. Predictions of the second kind deal with the determination of the response of the climate system to changes in the external forcings. These predictions are not concerned directly with the chronological evolution of the climate state, but rather with the long-term average of the statistical properties of climate. Originally, it was thought that predictions of the second kind do not depend at all on initial conditions. Instead, they are intended to determine how the statistical properties of the climate system (e.g., the average annual global mean temperature, or the expected number of winter storms or hurricanes, or the average monsoon rainfall) change as some external forcing parameter, for example CO2 content, is altered. Estimates of future climate scenarios as a function of the concentration of atmospheric greenhouse gases are typical examples of predictions of the second kind. However, ensemble simulations show that the projections tend to form clusters around a number of attractors as a function of their initial state (see Chapter 10).
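The distinction can be illustrated with the Lorenz (1963) system itself. In the minimal sketch below (standard parameter values; a simple fourth-order Runge-Kutta integrator is written out to keep the example self-contained), two trajectories started from nearly identical states diverge completely, so the chronological forecast (first kind) is lost after a finite time, while long-term statistics such as the time mean are almost insensitive to the initial perturbation (second kind).

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) equations."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def integrate(state0, n_steps, dt=0.01):
    """Integrate with a fourth-order Runge-Kutta scheme."""
    traj = np.empty((n_steps, 3))
    state = np.asarray(state0, dtype=float)
    for i in range(n_steps):
        k1 = lorenz_rhs(state)
        k2 = lorenz_rhs(state + 0.5 * dt * k1)
        k3 = lorenz_rhs(state + 0.5 * dt * k2)
        k4 = lorenz_rhs(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i] = state
    return traj

a = integrate([1.0, 1.0, 1.0], 50_000)
b = integrate([1.0 + 1e-8, 1.0, 1.0], 50_000)  # tiny initial perturbation

# First kind: the pointwise forecast is lost after a finite time.
print("final-state difference:", np.abs(a[-1] - b[-1]))
# Second kind: 'climate' statistics (here, time means) barely change.
print("time-mean difference:  ", np.abs(a.mean(axis=0) - b.mean(axis=0)))
```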
Uncertainties in climate predictions (of the second kind) arise mainly from model uncertainties and errors. To assess and disentangle these effects, the scientific community has organised a series of systematic comparisons of the different existing models, and it has worked to achieve an increase in the number and range of simulations being carried out in order to more fully explore the factors affecting the accuracy of the simulations.
An early example of systematic comparison of models is provided by Cess et al. (1989), who documented differences among model simulations in their representation of cloud feedback and showed how the consequent effects on atmospheric radiation led to different model responses to a doubling of the CO2 concentration. A number of ambitious and comprehensive ‘model intercomparison projects’ (MIPs) were set up in the 1990s under the auspices of the World Climate Research Programme to provide controlled conditions for model evaluation. One of the first was the Atmospheric Model Intercomparison Project (AMIP), which studied atmospheric GCMs. The development of coupled models motivated the Coupled Model Intercomparison Project (CMIP), which studied coupled ocean-atmosphere GCMs and their response to idealised forcings, such as a 1% yearly increase in the atmospheric CO2 concentration. It proved important in carrying out the various MIPs to standardise the model forcing parameters and the model output so that file formats, variable names, units, etc., are easily recognised by data users. The fact that the model results were stored separately and independently of the modelling centres, and that the analysis of the model output was performed mainly by research groups independent of the modellers, has added confidence in the results. Summary diagnostic products such as the Taylor (2001) diagram were developed for the MIPs.
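Two details of this paragraph can be made quantitative. First, a 1% per year compound increase doubles the CO2 concentration after ln 2 / ln 1.01 ≈ 70 years, which is why such idealised runs are commonly analysed around the time of doubling. Second, the statistics summarised on a Taylor (2001) diagram are straightforward to compute; the sketch below (the function name and synthetic example fields are illustrative assumptions) evaluates the correlation, the standard deviations and the centred RMS difference, and checks the geometric relation that lets all three be displayed on a single polar diagram.

```python
import numpy as np

def taylor_statistics(model, reference):
    """Compute the statistics summarised on a Taylor (2001) diagram.

    Returns the correlation R, the standard deviations s_m and s_r of
    the two fields, and the centred RMS difference E', which satisfy
        E'^2 = s_m^2 + s_r^2 - 2 * s_m * s_r * R.
    """
    m = np.asarray(model, dtype=float).ravel()
    r = np.asarray(reference, dtype=float).ravel()
    m_anom = m - m.mean()
    r_anom = r - r.mean()
    s_m, s_r = m_anom.std(), r_anom.std()
    corr = (m_anom * r_anom).mean() / (s_m * s_r)
    centred_rms = np.sqrt(((m_anom - r_anom) ** 2).mean())
    return corr, s_m, s_r, centred_rms

# Hypothetical example: a 'model' field as a noisy, damped version of a
# reference field.
rng = np.random.default_rng(1)
reference = np.sin(np.linspace(0, 4 * np.pi, 500))
model = 0.9 * reference + rng.normal(0.0, 0.3, reference.shape)

corr, s_m, s_r, e = taylor_statistics(model, reference)
print(f"R = {corr:.3f}, sigma_model = {s_m:.3f}, "
      f"sigma_ref = {s_r:.3f}, E' = {e:.3f}")
# Consistency check of the Taylor relation:
assert np.isclose(e**2, s_m**2 + s_r**2 - 2 * s_m * s_r * corr)
```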
The establishment of the AMIP and CMIP projects opened a new era for climate modelling, setting standards of quality control, providing organisational continuity and ensuring that results are generally reproducible. Results from AMIP have provided a number of insights into climate model behaviour (Gates et al., 1999) and quantified improved agreement between simulated and observed atmospheric properties as new versions of models are developed. In general, results of the MIPs suggest that the most problematic areas of coupled model simulations involve cloud-radiation processes, the cryosphere, the deep ocean and ocean-atmosphere interactions.
Comparing different models is not sufficient, however. Using multiple simulations from a single model (the so-called Monte Carlo, or ensemble, approach) has proved a necessary complement, allowing the stochastic nature of the climate system to be assessed. The first ensemble climate change simulations with global GCMs used a set of different initial and boundary conditions (Cubasch et al., 1994; Barnett, 1995). Computational constraints limited early ensembles to a relatively small number of samples (fewer than 10). These ensemble simulations clearly indicated that even with a single model a large spread in the climate projections can be obtained.
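A toy illustration of the single-model ensemble idea is sketched below (all parameter values are arbitrary illustrative assumptions, not taken from any GCM): an AR(1) surrogate climate, in which the stochastic term stands in for internally generated variability, is driven by an identical forcing ramp from perturbed initial states, and the simulated warming at the end of the run differs noticeably across members.

```python
import numpy as np

# Toy single-model ensemble (all parameters are illustrative assumptions).
# In a GCM, different initial states yield different realisations of
# internal variability; in this AR(1) surrogate, the noise term plays
# that role.

n_members, n_years = 8, 100
persistence = 0.8                            # year-to-year memory
noise_std = 0.15                             # internal variability (K)
forcing = np.linspace(0.0, 0.04, n_years)    # ramping forcing term (K)

rng = np.random.default_rng(42)
final_warming = []
for member in range(n_members):
    temp = rng.normal(0.0, 0.1)              # perturbed initial state
    for year in range(n_years):
        temp = (persistence * temp + forcing[year]
                + rng.normal(0.0, noise_std))
    final_warming.append(temp)

final_warming = np.array(final_warming)
print("ensemble mean: %.2f K, spread (std): %.2f K"
      % (final_warming.mean(), final_warming.std()))
```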
Intercomparison of existing models and ensemble model studies (i.e., those involving many integrations of the same model) are still undergoing rapid development. Because such systematic, comprehensive climate model studies are exceptionally demanding of computer resources, running large ensembles became feasible only with recent advances in computing power. Progress in both areas has marked the evolution from the FAR to the TAR, and is likely to continue in the years to come.