The climate experts with credibility in evaluating this statement are those scientists who are active in the area of detection and attribution. “Climate” scientists whose research areas are ecosystems, the carbon cycle, economics, and the like speak with no more authority on this subject than, say, Freeman Dyson.

I define the 20th century detection and attribution field to include those who create datasets; climate dynamicists who interpret the variability; and researchers working on radiative forcing, climate modeling, sensitivity analysis, and feedback analysis. With this definition, 75% of the names on the list disappear. If you further eliminate people who create datasets but don’t interpret them, you have less than 20% of the original list.

Such criticism does not mean that Anderegg’s study has no value to journalists, however. The database underpinning the research was actually created by James Prall, a computer systems programmer at the University of Toronto, who is listed as the second author on Anderegg’s paper. Using it is a bit tricky, though. The study links to a Web page containing links to the documents Anderegg et al. used to compile the names of 1,372 researchers considered in their study (which they then winnowed down to 908 by imposing a criterion that a researcher must have authored a minimum of twenty climate publications to be considered).

The list of “convinced” researchers included all contributors to the IPCC’s 2007 Working Group I report (which dealt with the science of climate change) as well as all signatories to four prominent scientific statements endorsing the IPCC conclusions. The list of “unconvinced” researchers included all signatories to twelve prominent statements criticizing the IPCC’s conclusions. It should be noted, however, that Prall’s database is actually quite a bit larger than the subset used for the Anderegg paper. It should also be noted that the page to which that paper provides a link does not include the lists that rank convinced and unconvinced researchers in terms of their “expertise” (number of papers published) or “prominence” (number of citations those papers have received). Those lists can be found elsewhere on Prall’s Web site, and they are perhaps the resources that journalists would find most useful. (Like the documents used for categorizing scientists as either convinced or unconvinced, however, the database used to compile each researcher’s publication and citation counts—Google Scholar versus the more traditionally accepted ISI Web of Science—has been criticized.)

Although there are problems with measuring a scientist’s expertise and prominence by the number of papers he or she has published and the number of times those papers have been cited, those metrics are generally considered to be reliable starting points for appraising a source. And that is where the real value of this database seems to lie—not for identifying who is convinced and unconvinced by the basic tenets of climate science, but rather for making first approximations of researchers’ overall credibility and contribution to their fields. But journalists need to conduct more thorough, secondary assessments of their own. In particular, although Prall’s database notes each scientist’s particular area of research and expertise, that information should be vetted and fleshed out through a careful evaluation of the scientist’s actual work.

This can be done by actually reading researchers’ papers, by asking other scientists to evaluate a potential source, and by consulting other databases such as the ISI Web of Science and EurekAlert!’s guide to science sources. Additionally, on Friday, the American Geophysical Union announced that it is “establishing a new service in order to better address journalists’ needs for accurate, timely information about climate science.” So far, more than 115 “climate specialists” have signed up with the geophysical union to serve as sources for journalists. “The new referral service will receive journalists’ questions and other queries via emails or phone calls to AGU’s press office staff, who will then pass queries along quickly to appropriate scientist-volunteers,” according to the press release.

So how should journalists use Anderegg’s paper and the underlying database (which, it should be mentioned, has been around for over a year and thus predates the Anderegg et al. paper)? The simple answer, in my opinion, is: just like they would use a site like Wikipedia—as a useful starting point, to be treated warily, for a much more thorough evaluation of researchers’ credentials. After all, one thing is absolutely certain. Expertise does matter.


Curtis Brainard is the editor of The Observatory, CJR's online critique of science and environment reporting. Follow him on Twitter @cbrainard.