The Observatory

Finding the Right Expert

How reporters should use a controversial new study categorizing scientists’ stances on global warming
June 29, 2010

A controversial new study that categorizes climate scientists as either “convinced” or “unconvinced” by the basic tenets of manmade global warming generated furious debate in the blogosphere last week, with some calling it a reaffirmation of scientific consensus and others calling it a “blacklist” of scientists in the latter group.

The paper found that 97 to 98 percent of “expert” climate scientists (i.e., those “most actively publishing in the field”) agree with “the primary conclusions of the Intergovernmental Panel on Climate Change: anthropogenic greenhouse gases have been responsible for ‘most’ of the ‘unequivocal’ warming of the Earth’s average global temperature over the second half of the 20th century.” It also found that “the relative climate expertise and scientific prominence of the researchers unconvinced of [anthropogenic global warming] are substantially below that of the convinced researchers.”

The study, published in the Proceedings of the National Academy of Sciences, has direct bearing on climate journalism. William Anderegg, the paper’s lead author and a doctoral candidate at Stanford University, told the BBC that his research team “felt that the state of the scientific debate was so far removed from the state of the public discourse and we felt that a good quantitative, rigorous comparison of this would put to rest the notion that the scientists ‘disagree’ about global warming.”

In fact, Anderegg’s study is only the latest effort to remedy this knowledge gap by attempting to quantify the scientific consensus on the fundamentals of climate science, following similar studies in 2009, 2008, and 2004. But to a seemingly greater degree than these earlier efforts, Anderegg et al. implicitly exhort journalists to use their data to exclude “non-experts” from their reporting:

Despite media tendencies to present both sides in [climate] debates, which can contribute to continued public misunderstanding regarding [anthropogenic climate change], not all climate researchers are equal in scientific credibility and expertise in the climate system. This extensive analysis of the mainstream versus skeptical/contrarian researchers suggests a strong role for considering expert credibility in the relative weight of and attention to these groups of researchers in future discussions in media, policy, and public forums regarding anthropogenic climate change.

Many climate campaigners applauded the study for “exposing the lack of credibility and expertise among climate skeptics.” But critics like Roger Pielke Jr.—a professor of environmental studies at the University of Colorado who protested his inclusion on a list of climate skeptics drafted by Foreign Policy earlier this year—said the convinced-unconvinced labeling amounted to a “blacklist.”


“By putting scientists into two categories which do not reflect the subtleties of the debate, … this paper simply reinforces the pathological politicization of climate science in policy debate,” Pielke Jr. told ScienceInsider, the news site of the journal Science:

His father, Roger Pielke Sr., for example, was among the most prominent and cited of the “unconvinced.” But in an e-mail to ScienceInsider, the elder Pielke says that although greenhouse gas emissions are important to consider, so are land-use changes, black carbon and aerosol pollution—a position perhaps more nuanced than the convinced/unconvinced dichotomy the paper postulates.

Regardless of such nuance, influential climate bloggers Joe Romm and Chris Mooney suggest, respectively, that the new study “could theoretically open the eyes of those in the status quo media…” and that “the results mean that journalists who have given a lot of weight to climate ‘skeptics’ have some ’splaining to do. Essentially, this paper seems to be suggesting that they got the wrong ‘experts.’”

Perhaps, but whether or not a journalist has chosen the wrong source depends entirely on which question he or she is trying to answer—be it the credibility of a specific temperature record (tree rings, for instance) or the modeling of future climate scenarios—and even a few scientists who readily accept the basic tenets of global warming have expressed doubts about the usefulness of Anderegg’s study in this regard.

Given his reservations about humans’ relative contribution to global warming, one might still think it fair to say that Roger Pielke Sr. is, in fact, unconvinced by the IPCC’s contention that our greenhouse gases have been responsible for most of the warming. But “even if you accept that the database is accurate and individuals have been accurately categorized,” that does not mean the study is a reliable guide for choosing sources, Georgia Tech climate scientist Judith Curry wrote in the comments section of a terrific roundup of the Anderegg study’s coverage at Keith Kloor’s Collide-a-Scape blog. The reason is this:

The scientific litmus test for the paper is the AR4 statement: “anthropogenic greenhouse gases have been responsible for ‘most’ of the ‘unequivocal’ warming of the Earth’s average global temperature over the second half of the 20th century.”

The climate experts with credibility in evaluating this statement are those scientists who are active in the area of detection and attribution. “Climate” scientists whose research areas are ecosystems, the carbon cycle, economics, etc., speak with no more authority on this subject than, say, Freeman Dyson.

I define the 20th century detection and attribution field to include those who create datasets; climate dynamicists who interpret the variability; and researchers working on radiative forcing, climate modeling, sensitivity analysis, and feedback analysis. With this definition, 75% of the names on the list disappear. If you further eliminate people who create datasets but don’t interpret them, you have less than 20% of the original list.

Such criticism does not mean that Anderegg’s study has no value to journalists, however. The database underpinning the research was created by James Prall, a computer systems programmer at the University of Toronto who is listed as the second author on Anderegg’s paper. Using it is a bit tricky, though. The study links to a Web page that gathers the documents Anderegg et al. used to compile the names of the 1,372 researchers considered in their study (a pool they then winnowed to 908 by requiring that each researcher have authored a minimum of twenty climate publications).

The list of “convinced” researchers included all contributors to the IPCC’s 2007 Working Group I report (which dealt with the science of climate change) as well as all signatories to four prominent scientific statements endorsing the IPCC conclusions. The list of “unconvinced” researchers included all signatories to twelve prominent statements criticizing those conclusions. Note, however, that Prall’s database is quite a bit larger than the subset used for the Anderegg paper, and that the page the paper links to does not include the lists ranking convinced and unconvinced researchers by “expertise” (number of papers published) or “prominence” (number of citations those papers have received). Those lists can be found elsewhere on Prall’s Web site, and they are perhaps the resources journalists would find most useful. (Like the documents used for categorizing scientists as either convinced or unconvinced, however, the database used to compile each researcher’s publication and citation counts—Google Scholar rather than the more traditionally accepted ISI Web of Science—has been criticized.)

Although there are problems with measuring a scientist’s expertise and prominence by the number of papers he or she has published and the number of times those papers have been cited, those metrics are generally considered reliable starting points for appraising a source. And that is where the real value of this database seems to lie—not in identifying who is convinced and unconvinced by the basic tenets of climate science, but in making first approximations of researchers’ overall credibility and contribution to their fields. Journalists still need to conduct more thorough, secondary assessments of their own, however. In particular, although Prall’s database notes each scientist’s particular area of research and expertise, that information should be vetted and fleshed out through a careful evaluation of the scientist’s actual work.

This can be done by actually reading researchers’ papers, by asking other scientists to evaluate a potential source, and by consulting other resources such as ISI Web of Science and EurekAlert!’s guide to science sources. Additionally, on Friday the American Geophysical Union announced that it is “establishing a new service in order to better address journalists’ needs for accurate, timely information about climate science.” So far, more than 115 “climate specialists” have signed up with the AGU to serve as sources for journalists. “The new referral service will receive journalists’ questions and other queries via emails or phone calls to AGU’s press office staff, who will then pass queries along quickly to appropriate scientist-volunteers,” according to the press release.

So how should journalists use Anderegg’s paper and the underlying database (which, it should be mentioned, has been around for over a year and thus predates the Anderegg et al. paper)? The simple answer, in my opinion, is: just as they would use a site like Wikipedia—as a useful starting point, to be treated warily, for a much more thorough evaluation of researchers’ credentials. After all, one thing is absolutely certain: expertise does matter.

Curtis Brainard writes on science and environment reporting. Follow him on Twitter @cbrainard.