JR: As we discussed, these measures are coarse. But to lose a lot of money, hospitals will have to do badly in multiple domains. If they’re just a laggard in one area, such as patient safety, but above average in outcomes or patient surveys, it will balance out.
TL: Have consumers been using any of these measures—patient safety measures, satisfaction scores, and the so-called process measures like making sure a patient gets an antibiotic one hour before surgery?
JR: Most evaluations show that consumers don’t use these data in selecting hospitals.
TL: Why don’t they use them?
JR: A lot of things that put you in the hospital are immediate problems that don’t lend themselves to comparison shopping, and consumers are directed to hospitals by their doctors’ preferences, their insurance coverage, geographic convenience, or their general sense of a hospital’s reputation.
TL: Are there any measures being used by consumers?
JR: That’s a great question. I don’t think a majority of consumers use any of them.
TL: That brings us around to our fellow journalists. How should they use these measures, if at all?
JR: Journalists can use them, but they should do so carefully. No single measure captures overall hospital quality, and you should be careful about your comparisons. With the right context, I don’t think there’s anything wrong with using the patient safety indicators in stories.
TL: Can you give me an example where a news organization used them properly?
JR: The Dallas Morning News is a prime example of how to responsibly use the measures in its coverage of Parkland Memorial Hospital. Texas has good discharge data, and the paper did its own analysis of patient safety measures, comparing Parkland to other large Texas hospitals. Patient safety measures were just one piece of the coverage; the paper also drew on lawsuits and government inspection reports. So, bottom line: patient safety measures can be a good piece of a larger mix, but they are not definitive on their own.
TL: What should journalists not do?
JR: They should not use them without talking to the hospitals, and they should be very cautious in comparing teaching hospitals to community hospitals. They should also make clear that the safety ratings do not cover all the patients in a hospital; they cover only certain types of cases and accidents. And they should not assume that a hospital rated better than average by Medicare is in fact superior or trouble-free; that hospital could simply be underreporting problems.
TL: So should they construct any rankings of hospitals based on the measures?
JR: It depends on the context. I wouldn’t do the ten worst or the ten best hospitals.
TL: How should reporters use the mortality measures, considering that the latest research reported in Health Affairs shows they haven’t reduced mortality?
JR: Just because publication of data hasn’t led to improvements doesn’t mean the data isn’t accurate. The mortality data—rates of people dying within 30 days of discharge—is pretty good.
TL: Do you have any other advice for journalists?
JR: As complex as these measures are, they are great ways to get a conversation started with a hospital executive you are interviewing. Also, when hospital executives say Medicare data is not accurate, press them to produce their own internal data and show that it is more accurate than Medicare’s.
TL: Think for a moment about reporters just starting to cover hospital metrics. What should they do?
JR: One thing they should do is drill down into the spreadsheets from Hospital Compare, because there’s a lot more data than CMS puts on its website. For example, CMS publishes only the percentage of patients who rave about their hospitals, but the full data include the percentage of patients who panned their experience.
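For reporters comfortable with a little scripting, the drill-down described above amounts to reading the downloaded Hospital Compare CSV and pulling out the low-rating ("bottom-box") column that CMS does not feature on its website. The sketch below is a minimal illustration, not a definitive tool: the file layout and the column labels ("Hospital Name" and "Patients who gave a rating of 6 or lower (low)") are assumptions, so check them against the header row of the file you actually download.

```python
# Minimal sketch: extract bottom-box patient ratings from a downloaded
# Hospital Compare HCAHPS spreadsheet saved as CSV.
# NOTE: the column names below are hypothetical placeholders; inspect the
# real file's header row and adjust them before using this.
import csv

def bottom_box_scores(path):
    """Return {hospital name: percent of patients giving a low rating}.

    Rows whose value isn't a number (e.g. "Not Available") are skipped,
    since small hospitals often have suppressed data.
    """
    scores = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            value = row.get(
                "Patients who gave a rating of 6 or lower (low)", ""
            ).strip().rstrip("%")
            if value.isdigit():
                scores[row["Hospital Name"]] = int(value)
    return scores
```

A reporter could then sort the resulting dictionary by value to see which local hospitals drew the highest share of dissatisfied patients, keeping in mind the regional caveats discussed below.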
TL: Anything else?
JR: When you’re working with patient experience data, you want to be very careful in comparing patient satisfaction across very different geographic areas. Patients in some areas, like New York, Miami, and New Jersey, tend to voice their complaints more freely, and hospitals in those areas are more likely to have lower ratings than hospitals in South Dakota.
TL: Should we be using the so-called process measures—like the portion of pneumonia patients receiving a flu vaccine, or the portion of heart attack patients receiving discharge instructions?
JR: I don’t think the process measures make for great stories because they represent the minimal expectation for basic care. Most hospitals score 93, 94, or even 99 percent. It’s not compelling to write a story saying a hospital’s compliance on a measure is three percent below average. Very few places show up as outlying poor performers.
TL: Then they may not be compelling for consumers either? Would you agree or not?