Here’s a good example of how corrections can fail to fix misimpressions created by the original error. In this case, the misimpression is the premise of the story, which ran on the front of The Wall Street Journal’s Money & Investing section on Friday.
Here’s the headline:
Raters Fail to See Defaults Coming
History Shows Firms Rarely Anticipate Sovereign Failures
What’s the Journal’s main piece of evidence (emphasis mine)?
Out of the 15 government defaults S&P has tracked since 1975, for instance, the firm rated 12 of the countries single-B or higher one year before the event. Yet S&P says a single-B rating has just a 2% average chance of default within a year. Put another way, S&P drastically underestimated one-year default risk in 80% of those cases.
First, it’s strange that the Journal says raters fail to see sovereign defaults coming when its evidence centers on single-B ratings, which are well into junk territory. Junk ratings by definition imply a much higher risk of default than investment-grade ones.
But the Journal erred by misconstruing what S&P actually said and basing its whole story on it. Here’s its correction from Monday’s paper (emphasis mine):
Standard & Poor’s Corp. says that historically, a single-B rating on sovereign debt has had just a 2% average default rate within a year. A Friday Money & Investing article about sovereign-debt ratings incorrectly stated that S&P says single-B ratings have a 2% chance of defaulting within a year.
It took me a few minutes to wrap my head around the difference here, and that’s after a very alert reader pointed it out to me (thanks, very alert reader!). The problem is that S&P doesn’t predict that individual single-B countries have a 2 percent chance of defaulting within a year. It instead says that just 2 percent of countries rated single-B have defaulted within a year. In other words, the Journal’s entire story is negated by this one subtle change of meaning.
In fact, the real numbers show the opposite of what the Journal told us. The 2 percent figure shows that countries rated single-B rarely default in the near term.
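The distinction the correction turns on is easy to miss, so here’s a minimal sketch of the arithmetic. The single-B pool size below is made up for illustration; only the 2 percent rate and the 12-of-15 count come from the story.

```python
# A backward-looking default rate: of all the country-years that carried
# a single-B rating, what fraction ended in default within a year?
# (Pool size of 100 is a hypothetical; the 2% rate is from the story.)
rated_single_b = 100
defaulted_within_year = 2
historical_rate = defaulted_within_year / rated_single_b  # 0.02, i.e. 2%

# The Journal's original reading treated that frequency as a forward-looking
# forecast: "each single-B country has a 2% chance of defaulting this year."
# A historical frequency and a per-country probability are not the same claim;
# they coincide only under strong assumptions the statistic doesn't make.

# The Journal's 80% figure: of 15 tracked defaults, 12 of the countries
# were rated single-B or higher a year beforehand.
defaults_tracked = 15
rated_b_or_higher = 12
share_of_defaulters = rated_b_or_higher / defaults_tracked  # 0.8, i.e. 80%

print(f"historical single-B default rate: {historical_rate:.0%}")
print(f"defaulters rated B or higher a year out: {share_of_defaulters:.0%}")
```

Note that the two percentages answer different questions, which is why they can’t be combined into “S&P underestimated risk in 80% of cases”: the 2 percent describes all single-B countries, while the 80 percent describes only the countries that went on to default.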
Worse, the paper just wrote through its story online, dropping the correction at the bottom of the page. Here’s how it reads now:
Out of the 15 government defaults S&P has tracked since 1975, for instance, the firm rated 12 of the countries single-B or higher one year before the event. Yet S&P says historically, a single-B rating on sovereign debt has had just a 2% average default rate within a year. Put another way, S&P drastically underestimated one-year default risk in 80% of those cases.
But this is still not right. The S&P numbers, if the Journal’s correction is itself correct, were historical statistics, not forecasts of a 2 percent likelihood of default. And even if they had been predictions, a 2 percent forecast would have been borne out by definition, since 2 percent of countries so rated did default.
The Journal’s accompanying graphic is also misleading. It reads:
Missing the Boat
A year before the following government-debt defaults, credit ratings indicated a low chance of such an event.
Junk-bond ratings indicate an elevated risk of default. I’m no fan of the credit raters, and you could make a case that these soon-to-default countries should have been rated even lower than single-B. But that’s not the case the Journal made.
Unless you happened to read the corrections page three days after the story ran and scratched your head long enough to figure out what the correction meant, you wouldn’t know that the Journal got the whole story wrong. The correction implies that the paper merely parsed some words incorrectly.
You can see how easy it would have been to make this mistake. We all make embarrassing ones, but the Journal’s correction isn’t good enough. Readers need to know when errors compromise the basic thesis of the story.