Q&A: Discerning Experts: The Practices of Scientific Assessment for Environmental Policy

Governments around the world rely on scientific assessments to guide environmental policy and action. Yet these assessments, like those produced by the Intergovernmental Panel on Climate Change and other organizations, sometimes exhibit limitations, including scientific bias and even errors of omission in their search for the facts.

A new book co-authored by Michael Oppenheimer of Princeton University uncovers the systemic bias and errors of omission within scientific assessments, presenting a roadmap for how the process can be improved. The book, “Discerning Experts: The Practices of Scientific Assessment for Environmental Policy,” was published in March 2019 by the University of Chicago Press.

Oppenheimer is the Albert G. Milbank Professor of Geosciences and International Affairs at the Woodrow Wilson School of Public and International Affairs, the Department of Geosciences, and the Princeton Environmental Institute. He is the director of the Center for Policy Research on Energy and the Environment (C-PREE) at the Woodrow Wilson School and a faculty associate of the Atmospheric and Oceanic Sciences Program and the Princeton Institute for International and Regional Studies.

In addition to Oppenheimer, the book was co-authored by Naomi Oreskes of Harvard University, Dale Jamieson of New York University, Keynyn Brysse of the University of Alberta, Jessica O’Reilly of Indiana University Bloomington, Matthew Shindell of the Smithsonian’s National Air and Space Museum, and Milena Wazeck of the Royal Society/British Academy.

Below, Oppenheimer answers some questions about his new book.

Q. Why did you write this book?

Oppenheimer: After having been an author on many assessments over several decades, particularly Intergovernmental Panel on Climate Change (IPCC) reports, and witnessing the way experts decide what the scientific facts surrounding an environmental problem are, I came to appreciate the efficacy of the assessment process but also its imperfections. Specifically, on key questions characterized by large uncertainty, I felt the answers delivered to governments might vary considerably depending on how the assessment was set up (i.e., institutional factors), who participates (who the expert authors are), and how deep the uncertainty is on particular questions.

So, nine years ago, colleagues now at Harvard and New York University and I established a research project to investigate the nature of environmental assessments with a view toward making recommendations that could improve their performance. Others have examined what makes a particular assessment successful in gaining acceptance by governments and the public. Our study was the first to examine how the experts actually make their decisions as a group and what factors influence those decisions beyond the science itself. The overall method of the project is ethnographic, combining interviews, archival research, and observation of expert deliberations. We studied three sets of assessments from the 1970s up to the past decade: a series of national and international assessments of ozone depletion beginning in the 1970s, the U.S. National Acid Precipitation Assessment Program of the 1980s, and a series of assessments since the 1970s regarding the stability of the West Antarctic ice sheet (including the IPCC's five assessments).

Q. What are the biggest takeaways?

Oppenheimer: These are the biggest takeaways:

  • Assessments are not simply reviews or summaries of the existing scientific literature. Rather, they often create new knowledge and bring out important insights.
     
  • Assessments recognize that individual experts, like anyone else, are often biased in their judgments on what is true and what is uncertain. Accordingly, they are often structured to eliminate this problem by “balance of bias” — choosing authors whose views might cancel out each other’s scientific biases. Scientists with strong, publicly expressed views on public policy sometimes have been excluded entirely.
     
  • Although 95 percent of the “facts” decided by assessors are more or less incontrovertible and independent of institutional factors, institutional arrangements, such as the choice of experts to author the assessment, the way material is organized within the study and the resulting report, and the rules for how the assessment is to be conducted, can have an important influence on some outcomes, that is, on what the facts are reported to be, especially for the questions in each assessment that are most critical for policymakers and where uncertainty looms large.
     
  • Environmental assessments are an ongoing phenomenon, not a series of one-offs, and they have an enormous effect on scientific research agendas. Each assessment also has a large effect on the next within the same subject area. Furthermore, author-experts often wind up assessing their own research and directing their research toward providing material for these assessments.
     
  • Assessments generally rely on a consensus model for their decisions, but this may often be the least effective approach to providing judgments that are useful to policymakers.
     
  • Unlike in earlier times, assessments have steered away from policy advice and stuck much more closely to delivering a consensus about the science. This is unfortunate because it is important for all to recognize that science and policy are not neatly separable. In particular, scientists’ perceptions of what governments and the public might accept or reject may influence the scientific outcome of an assessment. As with everything else in life, there is no such thing as a “pristine” assessment.

Q. What are the policy implications?

Oppenheimer: Those sponsoring assessments, like the IPCC, should take a close look at the way their institutional setups and the demand for consensus limit the value of assessments and sometimes lead to products that are not sufficiently helpful to policymakers. These arrangements truncate the information developed and stifle creativity by expert-authors. In many cases, the problems being assessed are sufficiently important and threatening that sponsors need to be much more creative about the assessment process, more willing to experiment and take risks, and willing to free experts within assessments from unnecessary constraints. At the same time, all involved need to work to ensure that science aimed at influencing assessments, or generated by the needs of assessment, doesn’t crowd out other types of research.