Disks of never-before-released data from the Department of Education landed with a befuddling thud in New York City’s newsrooms at the end of February. The swarm of spreadsheets had promised to provide a single ranking of 18,000 teachers (by name!) from zero to 99 based on students’ standardized test scores.

A bonanza for education reporters, right? Time to celebrate? Well, not exactly; not for me, anyway.

My intrepid journalism students wondered why I didn’t seem to share their enthusiasm for the data. Wasn’t I the same teacher who became semi-deranged when they turned in stories without any quantitative evidence? Think of the stories to be done, the fun graphics to design.

Here were not only reams of data, but hot data—from the center of a national controversy over how teachers should be evaluated. Adding to the buildup, the reports had been locked away for more than a year while a city judge refereed a high-octane legal fracas between the teachers union and the city over whether to release them. Nearly a dozen news organizations had become either witting or unwitting pawns in this dispute when they filed Freedom of Information requests for the data’s release.

“Isn’t it our job to bring information into the light, and let the public judge for themselves?” one student asked me. She had learned her lessons well.

Last year, I was certain what my killjoy answer would be: Just because you have data doesn’t mean it is always right to publish it—especially if you know the numbers are no good. And these numbers do have huge problems. Everyone from economists to educators to knowledgeable city education reporters knows that the arcane algorithms that generated the teacher-rating numbers are as statistically flawed as they are politically fraught.

The complex formulas are meant to measure how much value a teacher contributes to a student’s learning growth (or lack of growth) over time. It would be useful if they actually did. But the data are riddled with mistakes, useless sample sizes, flawed measuring tools, and cavernous margins of error. The Department of Education says that a math teacher’s ranking could be off by 35 percentage points; an English teacher’s by 53. That means a reading teacher with a ho-hum 35 could either be as horrid as a 1 or as awesome as an 86—take your pick. What election survey with these kinds of gaping margins would be published in the papers?

Most damning—and most often ignored in the coverage—is that the sole basis for these ratings is a set of old student tests that have since been discredited by the New York State Board of Regents. The 2007-2010 scores used for these teacher rankings were inflated, the Regents determined: the Department of Education had lowered the passing score so far that the tests had become far too easy. So not only were the algorithms suspect, but the numbers fed into them were flawed. News organizations that publish them next to teachers’ names risk not only knowingly misleading the public but also becoming entangled in the political web surrounding teacher evaluations, which extends from the mayor’s office to the state house, to unions, to philanthropy boardrooms, and to the White House.

And yet, nearly every city news organization went ahead and printed them anyway.

To my mind, all the reasons not to publish still stand. But in the last month, I’ve come around to an opposite, perhaps more cynical, conclusion about the virtues of making the data public. Publishing them, it seems to me, has had an odd, clarifying effect. Releasing the data to public scrutiny, alongside context and caveats, has exposed just how flawed they really are.

Apparently, the public has received that message. A Quinnipiac poll released in mid-March showed that 58 percent of the respondents approved of releasing the teacher data reports, while at the same time 46 percent believed they were flawed. The more the public sees, the less enamored they are (Go, Journalism!).

Perhaps that was what philanthropist Bill Gates feared in February when he lectured the media days before the data were released. In a February 23 op-ed in The New York Times, “Shame Is Not the Solution,” Gates warned news organizations not to humiliate teachers by publishing their names next to their value-added rankings.

What was up? Gates is usually bullish on the use of test scores to evaluate teachers, and teachers’ feelings had rarely been a priority of his. Yet he argued, correctly, that test scores, all by themselves, were not “a sensitive enough measure to gauge effective teaching.”

LynNell Hancock is the H. Gordon Garbedian Professor of Journalism at Columbia, and director of the school's Spencer Fellowship in Education Journalism.