2.6.6. Building Experience with Subjective Methods in a "Science for Policy" Assessment
Although one might be tempted to infer from the foregoing arguments that judgments
of likelihood should be considered only with caution, for some decision analytic
frameworks that often appear in the climate policy literature (e.g., cost-benefit
analysis and IAMs), there often are few viable alternatives. However, as noted
in the decision analysis frameworks guidance paper (Toth, 2000a; see also Section
2.4), several alternative decision analytic methods are less dependent
on subjective probability distributions; virtually all frameworks do require
subjective judgments, however. Although physical properties such as weight,
length, and illumination have objective methods for their measurement, there
are no objective means for assessing in advance the probability of such things
as the value future societies will put on now-endangered species or a collapse
of the North Atlantic Ocean circulation caused by anticipated anthropogenic emissions.
Even a highly developed understanding of probability theory would be of little
avail because no empirical data set exists, and the underlying science is not
fully understood. Some authors have argued that under these circumstances, for
any practical application one ought to abandon any attempt to produce quantitative
forecasts and instead use more qualitative techniques such as scenario planning
(e.g., Schoemaker, 1991; van der Heijden, 1998) or argumentation (Fox, 1994).
On the other hand, others, though noting the cognitive difficulties with
estimating unique events, have argued that quantitative estimates are
essential in environmental policy analyses that use formal and explicit methods
(e.g., Morgan and Henrion, 1990).
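To make the role of such judgments concrete, the following is a minimal sketch of how a subjective probability enters a simple expected-value calculation of the kind used in cost-benefit frameworks. The scenario and all numbers are illustrative assumptions, not values drawn from this chapter.

```python
# Minimal sketch: a subjective probability of a low-probability,
# high-consequence event (e.g., a circulation collapse) drives the
# expected-damage term in a cost-benefit calculation. All numbers
# are hypothetical assumptions for illustration only.

def expected_damage(p_event, damage_event, damage_baseline):
    """Expected damages given a subjective probability of an abrupt event."""
    return p_event * damage_event + (1.0 - p_event) * damage_baseline

# Three hypothetical subjective judgments of the event probability.
for p in (0.01, 0.05, 0.10):
    print(f"p = {p:.2f}: expected damage = {expected_damage(p, 100.0, 5.0):.2f}")
```

The point of the sketch is simply that the answer changes severalfold with the assumed probability, so the subjective judgment cannot be avoided; it can only be made explicit.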
Given its potential utility in applied and conservation ecology, it seems surprising
that Bayesian analysis is relatively uncommon. However, logical and theoretical
virtue is not sufficient to encourage its use by managers and scientists. The
spread of a new idea or practice is an example of cultural evolution (in this
case, within the scientific community). It is best understood as a social and
psychological phenomenon (Anderson, 1998).
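For readers unfamiliar with the mechanics, the sketch below shows the simplest form of a Bayesian update, in which a subjective prior belief is revised by data. The Beta-Binomial pairing is a standard textbook choice, and the numbers are hypothetical, not taken from any study cited here.

```python
# Minimal sketch of a Bayesian update: a subjective prior on an event
# probability is revised by (hypothetical) observed data using the
# standard Beta-Binomial conjugate pairing.

prior_a, prior_b = 2.0, 8.0      # subjective prior Beta(2, 8): mean 0.2
successes, trials = 3, 5         # hypothetical new observations

post_a = prior_a + successes
post_b = prior_b + (trials - successes)

print(f"prior mean     = {prior_a / (prior_a + prior_b):.2f}")
print(f"posterior mean = {post_a / (post_a + post_b):.2f}")
```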
Achieving such penetration of uncertainty analyses into wider awareness will
be a multi-step process that includes "1) consistent methods for producing verbal
summaries from quantitative data, 2) translation of single-event probabilities
into frequencies with careful definition of reference classes, 3) attention
to different cognitive interpretations of probability concepts, and 4) conventions
for graphic displays" (Anderson, 1998). The last of these also is advocated in the
uncertainties guidance paper (Moss and Schneider, 2000), and an example is provided
in Chapter 7 (Figure 7-2).
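As one illustration of step 2, the sketch below turns a single-event probability into a frequency statement with an explicit reference class. The wording and the example reference class are assumptions for illustration, not conventions prescribed by Anderson (1998) or the TAR.

```python
# Minimal sketch of translating a single-event probability into a
# frequency with an explicit reference class (Anderson's step 2).
# Phrasing and the reference class are illustrative assumptions.

def as_frequency(p, reference_class, out_of=100):
    """Render probability p as a frequency over a named reference class."""
    count = round(p * out_of)
    return (f"In about {count} out of {out_of} {reference_class}, "
            f"the outcome occurs.")

print(as_frequency(0.33, "model scenarios consistent with current knowledge"))
```

Making the reference class explicit is the substantive step: the same number reads very differently over "comparable historical decades" than over "model scenarios."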
Although commentators in the literature agree that it is essential to represent
uncertainties in climate assessments, analysts disagree about the preferred
approach. Some simply believe that, until empirical information becomes available,
quantitative estimates of uncertain outcomes should be avoided because "science"
is based on empirical testing, not subjective judgments. It certainly is true
that "science" itself strives for "objective" empirical
information to test, or "falsify," theory and models (caveats in Section
2.5.2 about frequentism as a heuristic notwithstanding). At the same time,
"science for policy" must be recognized as a different enterprise
than "science" itself. Science for policy (e.g., Ravetz, 1986) involves
being responsive to policymakers' needs for expert judgment at a particular
time, given information currently available, even if those judgments involve
a considerable degree of subjectivity. The methods outlined above and in Moss
and Schneider (2000) are designed to make such subjectivity more consistently
expressed across the TAR (linked to quantitative distributions when possible,
as most decision analytic frameworks require) and more explicitly stated, so
that well-established and highly subjective judgments are less likely to be
conflated in media accounts or policy debates. The essential point is that authors
should state their approach explicitly in each case: transparency is the key
to accessible assessments.
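As one illustration of linking a subjective judgment to a quantitative distribution, the sketch below fits a triangular distribution to hypothetical low, best-estimate, and high values elicited from an expert. The distributional choice and the numbers are assumptions for illustration, not a TAR convention.

```python
# Minimal sketch: expressing an elicited expert judgment as a quantitative
# distribution usable in a decision analytic framework. A triangular
# distribution over low/best/high values is one common, simple choice;
# the elicited numbers here are hypothetical.

import random

low, best, high = 1.5, 2.8, 4.5   # hypothetical expert elicitation

# Sample the implied distribution and report summary quantiles.
samples = sorted(random.triangular(low, high, best) for _ in range(100_000))
median = samples[len(samples) // 2]
p90 = samples[int(0.9 * len(samples))]

print(f"median = {median:.2f}, 90th percentile = {p90:.2f}")
```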