8.1.3 How Are Models Constructed?
The fundamental basis on which climate models are constructed has not changed since the TAR, although there have been many specific developments (see Section 8.2). Climate models are derived from fundamental physical laws (such as Newton’s laws of motion), which are then subjected to physical approximations appropriate for the large-scale climate system, and then further approximated through mathematical discretization. Computational constraints restrict the resolution that is possible in the discretized equations, and some representation of the large-scale impacts of unresolved processes is required (the parametrization problem).
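The chain from continuous equations to a discretized, parametrized model can be illustrated with a minimal sketch, not drawn from any particular model. The example below discretizes a one-dimensional advection-diffusion equation with a simple finite-difference scheme; the coefficient K stands in for a parametrization of unresolved sub-grid mixing, and the coarse grid spacing reflects the computational constraint on resolution. All variable names and numerical values are illustrative assumptions.

```python
import numpy as np

# Illustrative 1-D advection-diffusion equation: dT/dt = -u dT/dx + K d2T/dx2
# The grid spacing dx is limited by computational cost; K is a parametrization
# standing in for the large-scale effect of unresolved sub-grid mixing.
nx, dx = 100, 100e3        # coarse grid: 100 points, 100 km spacing (assumed)
u = 10.0                   # advective velocity (m s-1), assumed constant
K = 1.0e4                  # sub-grid mixing coefficient (m2 s-1), a tunable parameter
dt = 0.25 * dx / abs(u)    # time step respecting the advective stability (CFL) limit

x = np.arange(nx) * dx
T = np.exp(-((x - x.mean()) / (10 * dx)) ** 2)   # initial temperature anomaly

def step(T):
    """One explicit time step: upwind advection plus centred diffusion,
    with periodic boundaries (np.roll)."""
    adv = -u * (T - np.roll(T, 1)) / dx
    dif = K * (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    return T + dt * (adv + dif)

for _ in range(500):
    T = step(T)
```

Halving dx in this sketch quadruples the cost of reaching a given simulated time (more points and a shorter stable time step), which is the essence of the computational constraint on resolution; processes acting on scales smaller than dx can only enter through the parametrized term.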
8.1.3.1 Parameter Choices and ‘Tuning’
Parametrizations are typically based in part on simplified physical models of the unresolved processes (e.g., entraining plume models in some convection schemes). The parametrizations also involve numerical parameters that must be specified as input. Some of these parameters can be measured, at least in principle, while others cannot. It is therefore common to adjust parameter values (possibly chosen from some prior distribution) in order to optimise model simulation of particular variables or to improve the global heat balance. This process is often known as ‘tuning’ (a schematic sketch of such a tuning step follows the list below). It is justifiable to the extent that two conditions are met:
1. Observationally based constraints on parameter ranges are not exceeded. Note that in some cases this may not provide a tight constraint on parameter values (e.g., Heymsfield and Donner, 1990).
2. The number of degrees of freedom in the tuneable parameters is less than the number of degrees of freedom in the observational constraints used in model evaluation. This is believed to be true for most GCMs – for example, climate models are not explicitly tuned to give a good representation of North Atlantic Oscillation (NAO) variability – but no studies are available that formally address the question. If the model has been tuned to give a good representation of a particular observed quantity, then agreement with that observation cannot be used to build confidence in that model. However, a model that has been tuned to give a good representation of certain key observations may have a greater likelihood of giving a good prediction than a similar model (perhaps another member of a ‘perturbed physics’ ensemble) that is less closely tuned (as discussed in Section 8.1.2.2 and Chapter 10).
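The tuning step can be illustrated with a minimal sketch under stated assumptions: a single parametrization parameter is varied only within an observationally based range (condition 1 above) to minimise a simple error measure, here the departure of a simulated global-mean top-of-atmosphere (TOA) radiation balance from zero. The function, parameter name and all numerical values are hypothetical stand-ins; a real tuning exercise would run the full model at each candidate value.

```python
import numpy as np

def simulated_toa_imbalance(entrainment_rate):
    """Stand-in for a model run: returns a global-mean TOA radiative
    imbalance (W m-2) as a function of one tunable parameter.
    The quadratic form is purely illustrative."""
    return 3.0 - 40.0 * entrainment_rate + 25.0 * entrainment_rate**2

# Observationally based range for the parameter (illustrative numbers).
lo, hi = 0.02, 0.12

# Simple grid search: choose the value that best closes the heat balance
# without exceeding the observational constraint on the parameter range.
candidates = np.linspace(lo, hi, 101)
errors = np.abs([simulated_toa_imbalance(p) for p in candidates])
best = candidates[np.argmin(errors)]
print(f"tuned parameter: {best:.3f}, residual imbalance: {errors.min():.2f} W m-2")
```

In this sketch the error measure (the TOA imbalance) is a single target, so condition 2 is only respected if the remaining, untuned aspects of the simulation are evaluated independently.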
Given sufficient computer time, the tuning procedure can in principle be automated using various data assimilation procedures. To date, however, this has only been feasible for EMICs (Hargreaves et al., 2004) and low-resolution GCMs (Annan et al., 2005b; Jones et al., 2005; Severijns and Hazeleger, 2005). Ensemble methods (Murphy et al., 2004; Annan et al., 2005a; Stainforth et al., 2005) do not always produce a unique ‘best’ parameter setting for a given error measure.
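A minimal sketch of the ensemble (perturbed physics) approach, under the same illustrative assumptions as above, is given below: many parameter settings are drawn from prior ranges, each ‘model version’ is scored against the error measure, and more than one combination can perform comparably well, so the procedure need not single out a unique ‘best’ setting. The error function and both parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def error_measure(entrainment, mixing):
    """Stand-in for comparing one model version with observations;
    constructed (for illustration) so that the two parameters can
    partly compensate for one another."""
    return abs(3.0 - 40.0 * entrainment + 0.02 * mixing)

# Perturbed-physics ensemble: sample both parameters from prior ranges
# (entrainment in [0.02, 0.12], mixing in [0, 200]; illustrative bounds).
members = rng.uniform([0.02, 0.0], [0.12, 200.0], size=(1000, 2))
scores = np.array([error_measure(e, m) for e, m in members])

# Several distinct parameter settings achieve nearly the same small error,
# i.e., there is no unique 'best' setting for this error measure.
good = members[scores < 0.1]
print(f"{len(good)} of {len(members)} members have an error below 0.1 W m-2")
print("examples of near-optimal settings:", good[:3])
```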