I was optimistic that the publication of Peggy Orenstein’s fabulous New York Times Magazine piece on pinkwashing and the dysfunction of breast cancer culture in April would change the conversation and diminish misleading coverage of the disease.
Apparently not. Wednesday morning, Slate urged its readers to “Reconsider the Mammogram” (with no apparent apologies to David Foster Wallace). The piece, about a new algorithm that uses risk factors to calculate when a woman should get a mammogram, perpetuates faulty logic and oft-repeated misconceptions about breast cancer screening. It also claims, in the subhead, that this new modeling technique will “fix” the problems with mammography. The assumption underlying the piece is that, despite its imperfections, mammography is a “life-saving screening method” that just needs some math to be even better.
In fact, changing how often women are screened won’t solve any of the issues that make mammography a problematic diagnostic tool.
Let’s begin at the beginning. Emily Herrmann, an undergraduate at the University of Wisconsin-Madison who works in the lab of one of the algorithm’s developers, writes:
A recent cover story in the New York Times Magazine made a convincing case against the mammogram. The author’s main criticism was that mammograms result in many false-positives, which other research has confirmed. Women get treated for cancers they don’t have, or cancers that are noninvasive but which doctors at the moment can’t distinguish from the malignant ones. All of this leads to a lot of wasted money, stress, and distrust of the system as a whole.
She’s referencing Orenstein as well as talking about ductal carcinoma in situ, or cancer cells that haven’t—and may never—spread beyond a woman’s milk ducts. Currently, DCIS is treated as aggressively as invasive cancer, as the writer says. But just last week, the National Cancer Institute recommended changing the name of DCIS to remove the word “carcinoma,” in an effort to discourage overtreatment and encourage a watch-and-wait approach. The recommendations propose rethinking how doctors and patients make decisions about what screenings show. Herrmann’s piece makes no mention of these widely covered recommendations.
Instead, she writes that the risk-factor algorithm would suggest that women with a high cancer risk get screened while decreasing mammograms for lower-risk patients:
By personalizing mammography decisions, we can improve the quality and length of life, yet reduce the overall number of mammograms. The model could potentially help save numerous high-risk women while preventing undue harm to the rest of the public.
But the issue isn’t overuse of mammograms; it’s the inadequacy of the technology, and the fact that, as Orenstein and others have pointed out, early detection doesn’t decrease breast cancer mortality. Survival depends more on the characteristics of a woman’s cancer than on how early it was discovered. That is, “personalizing mammography decisions” is less likely to “improve the quality and length of life” than it is to continue subjecting women to all the false positives and overtreatment already associated with the imaging. Women with aggressive cancers will continue to die from them. And mammograms will continue to miss inflammatory breast cancer and tumors in women with dense breast tissue.
Mammography is the best broadly accessible screening tool currently available, and making it better isn’t a terrible idea. But none of the caveats or complexities of the ongoing debates over its efficacy enter Slate’s discussion about the modeling technique. Slate’s tendency toward counterintuitive analyses has long made it a must-read, but contrarianism only works if the argument addresses the entire story. In this piece, it didn’t.