That question was the topic of The Scientist’s cover story in the July issue. The article, by Kerry Grens, focuses exclusively on investments in the life sciences (biology, biotech, health, etc.) at universities and public labs, such as the National Institutes of Health. She cites a recent Congressional study that found high economic returns from investments in the institutes’ research. “In other words,” she writes, “besides the obvious health benefits of NIH funding for biomedical research, it also saves Americans money by lengthening their lifespan and improving healthcare.”

Grens finds a few other examples and studies that support the idea that science is a good investment. But if that is so, she asks, why not throw even larger sums of money into research and development? Grens cites an interesting 1991 study by Edwin Mansfield, a now-deceased economist at the University of Pennsylvania. Mansfield found that 28 percent was the magic number, “meaning each dollar put into research would yield $1.28 in social and economic benefits within about a decade.” In reality, however, that number seems almost arbitrary. “For every estimate of the returns on scientific investment, there are many reasons why that estimate could be wrong,” Grens writes. “The bottom line: No one knows what the actual returns of science are.”

Two of the “reasons” that Grens refers to are the long lag time between academic research and the commercialization of a product based on that research, and the difficulty of identifying the economic benefits of research (which can be myriad and subtle). Grens quotes a couple of sources, including John Marburger, President Bush’s science adviser, saying that there are simply no metrics for calculating the expected return on scientific investment, so policymakers rarely consider economics when making decisions about funding. She also has a very good quote from another source saying that it might harm the scientific process (which tends to be long, circuitous, and fraught with many setbacks) if researchers’ ability to obtain grants depended on demonstrating “short-term, easily measurable impact.”

With so many caveats to scientific inquiry, it is no wonder that how to maximize the output and efficiency of scientific innovation remains a mystery. Still, readers must give credit to Wulf and Grens for attempting to flesh out such difficult questions about the nature of research, public policy, and commercialization. The media’s work, however, is not done. Wulf’s arguments need elaboration, and The Scientist’s article does not address any non-life-sciences products, such as software, that Wulf would like to promote. Following Dean’s and Grens’s lead, it would be nice to see more of these wide-angle articles about the scientific process.


Curtis Brainard is the editor of The Observatory, CJR's online critique of science and environment reporting. Follow him on Twitter @cbrainard.