One might still think it fair to say, given his reservations about humans’ relative contribution to global warming, that Roger Pielke, Sr. is, in fact, unconvinced by the IPCC’s contention that our greenhouse gases have been responsible for most of the warming. But “even if you accept that the database is accurate and individuals have been accurately categorized,” that does not mean the study is a reliable guide for choosing sources, Georgia Tech climate scientist Judith Curry wrote in the comments section of a terrific roundup of the Anderegg study’s coverage at Keith Kloor’s Collide-a-Scape blog. The reason is this:

The scientific litmus test for the paper is the AR4 statement: “anthropogenic greenhouse gases have been responsible for ‘most’ of the ‘unequivocal’ warming of the Earth’s average global temperature over the second half of the 20th century.”

The climate experts with credibility in evaluating this statement are those scientists who are active in the area of detection and attribution. “Climate” scientists whose research area is ecosystems, the carbon cycle, economics, etc., speak with no more authority on this subject than, say, Freeman Dyson.

I define the 20th century detection and attribution field to include those who create datasets; climate dynamicists who interpret the variability; and researchers in radiative forcing, climate modeling, sensitivity analysis, and feedback analysis. With this definition, 75% of the names on the list disappear. If you further eliminate people who create datasets but don’t interpret them, you have less than 20% of the original list.

Such criticism does not mean that Anderegg’s study has no value to journalists, however. The database underpinning the research was created by James Prall, a computer systems programmer at the University of Toronto, who is listed as the second author on Anderegg’s paper. Using it is a bit tricky, though. The study links to a Web page listing the documents Anderegg et al. used to compile the names of the 1,372 researchers considered in their study (a pool they then winnowed down to 908 by requiring that a researcher have authored a minimum of twenty climate publications).
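For readers who want to see how mechanical that winnowing step is, here is a minimal sketch in Python. It assumes a hypothetical CSV export with “name” and “climate_pubs” columns; Prall’s site does not publish this exact file, but the twenty-publication cutoff works as the authors describe:

```python
import csv

MIN_CLIMATE_PUBS = 20  # the Anderegg et al. inclusion criterion


def winnow(path):
    """Keep only researchers with at least twenty climate publications.

    Assumes a hypothetical CSV file with 'name' and 'climate_pubs'
    columns; the file layout is an illustration, not Prall's format.
    """
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    kept = [r for r in rows if int(r["climate_pubs"]) >= MIN_CLIMATE_PUBS]
    print(f"{len(rows)} researchers considered, {len(kept)} kept")
    return kept

# Applied to the study's own figures, 1,372 names would reduce to 908.
```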

The list of “convinced” researchers included all contributors to the IPCC’s 2007 Working Group I report (which dealt with the science of climate change) as well as all signatories to four prominent scientific statements endorsing the IPCC’s conclusions. The list of “unconvinced” researchers included all signatories to twelve prominent statements criticizing those conclusions. It should be noted, however, that Prall’s database is quite a bit larger than the subset used for the Anderegg paper. It should also be noted that the page to which the paper links does not include the lists that rank convinced and unconvinced researchers by “expertise” (number of papers published) or “prominence” (number of citations those papers have received). Those lists can be found elsewhere on Prall’s Web site, and they are perhaps the resources journalists would find most useful. (Like the documents used to categorize scientists as convinced or unconvinced, the choice of Google Scholar, rather than the more traditionally accepted ISI Web of Science, as the source for each researcher’s publication and citation counts has been criticized.)

Although there are problems with measuring a scientist’s expertise and prominence by the number of papers he or she has published and the number of times those papers have been cited, those metrics are generally considered reliable starting points for appraising a source. And that is where the real value of this database seems to lie: not in identifying who is convinced or unconvinced by the basic tenets of climate science, but in making first approximations of researchers’ overall credibility and contribution to their fields. Journalists still need to conduct more thorough assessments of their own, however. In particular, although Prall’s database notes each scientist’s particular area of research and expertise, that information should be vetted and fleshed out through a careful evaluation of the scientist’s actual work.
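Reproducing those first approximations from a local copy of the data amounts to a simple sort. The sketch below is illustrative only; the record layout and field names are assumptions for the example, not the format of Prall’s actual pages:

```python
def rank(researchers, key):
    """Sort descending by a numeric field, a rough first pass at
    Anderegg et al.'s 'expertise' (paper count) and 'prominence'
    (citation count) measures."""
    return sorted(researchers, key=lambda r: int(r[key]), reverse=True)

# Hypothetical records; the field names are assumptions for illustration.
sample = [
    {"name": "Researcher A", "climate_pubs": "45", "citations": "1200"},
    {"name": "Researcher B", "climate_pubs": "22", "citations": "3400"},
]
by_expertise = rank(sample, "climate_pubs")   # most-published first
by_prominence = rank(sample, "citations")     # most-cited first
```

A sort like this is only a starting point, of course; it says nothing about whether the papers being counted bear on detection and attribution, which is precisely Curry’s objection.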

Curtis Brainard is the editor of The Observatory, CJR's online critique of science and environment reporting. Follow him on Twitter @cbrainard.