Miami has deep ties to the Caribbean. So when a devastating earthquake struck Haiti on January 12, The Miami Herald mobilized for one of its biggest stories of the year. Reporters were on a flight to the Dominican Republic that night and filing from Haiti the next day. The sense of mission extended to the paper’s Web site, where a special Haiti channel pulled together print coverage as well as video pieces, photo archives, and Twitter feeds from correspondents. Multimedia editor Rick Hirsch thought his site could open a window onto the tragedy for audiences around the world. “Haiti really is a local story for us,” he explains.
According to the Herald’s server logs, his hunch was right: traffic leapt by more than a third in January, to 35 million page views, the only time it broke 30 million in the six months before or after. Nearly 5.9 million different people visited the site that month, another high-water mark for the year.
But not according to comScore, the media measurement firm that, along with rival Nielsen, purports to be the objective word on what Americans do online. ComScore recorded fewer than 9 million page views for the Herald, and barely 1.6 million “unique visitors.” Even more distressing, comScore—whose clients include major advertisers and ad agencies—had the paper’s page views actually declining by 40 percent the month of the earthquake. “Those trends just don’t make sense,” insists Hirsch, whose newspaper subscribes to comScore as well. “We know our traffic went through the roof.”
The open secret of online publishing is that such wild discrepancies are routine. Whether you ask The Washington Post or a stand-alone site like Talking Points Memo (TPM), you’ll hear the same refrain: publishers looking at their own server data (via software like Omniture or Google Analytics) always see much more traffic than is reported by Nielsen and comScore, both of which extrapolate a site’s audience by tracking a small “panel” of Web users, just as Nielsen does for its famous TV ratings.
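The extrapolation those panels perform can be illustrated with a toy calculation. The real firms use far more elaborate weighting schemes; everything below, including the numbers, is hypothetical:

```python
# Panel-based audience projection: a simplified, hypothetical sketch.
# A measurement firm recruits a panel meant to represent the online
# population, observes which panelists visit a site, then scales up.

def project_audience(panel_size, panel_visitors, online_population):
    """Estimate a site's unique visitors from a panel sample."""
    visit_rate = panel_visitors / panel_size
    return visit_rate * online_population

# Hypothetical numbers: 200 of 50,000 panelists visited the site,
# projected onto a 220-million-person online population.
estimate = project_audience(50_000, 200, 220_000_000)
print(f"Projected unique visitors: {estimate:,.0f}")
# → Projected unique visitors: 880,000
```

The sketch also shows why small and niche sites suffer: when only a handful of panelists ever visit, the visit rate rests on a tiny sample, and the projected audience swings wildly from month to month.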
“The panel-based numbers are atrocious,” says Kourosh Karimkhany, TPM’s chief operating officer, pointing out that Nielsen and comScore have a hard time measuring workplace Web surfing. “But as long as they’re equally inaccurate for our competitors, it’s okay. It’s something we live with.”
For that matter, the two ratings firms frequently disagree with each other. In May, for example, Gannett’s various properties commanded 37.5 million unique visitors according to comScore, but only 25.6 million according to Nielsen. ComScore gave Washingtonpost.com an audience of 17 million people that month, but Nielsen recorded fewer than 10 million. And so on.
It’s fair to ask how business gets done amid such uncertainty. Who should a site’s sponsors—or for that matter, its journalists—believe?
Publishers say the cacophony scares away advertisers, a conclusion supported by a 2009 McKinsey & Company study commissioned by the Interactive Advertising Bureau. Executives from Newser and MLB.com told The Wall Street Journal’s “Numbers Guy” columnist last February that undercounting by Nielsen and comScore keeps them off the radar of major advertisers and hurts their bottom lines.
This messy situation has yielded any number of white papers and task forces; reform efforts are currently under way at the IAB, the Media Rating Council, and the Newspaper Association of America, among others. Last year CBS, NBC, and Disney led the formation of a “Coalition for Innovative Media Measurement” that seeks to establish a cross-platform standard to gauge total media usage.
In response, comScore has unveiled a new “hybrid” approach that claims to mash up panel results with server-side data for a more accurate count. This is a little ironic, since the raison d’être for the user panels is that server data can’t be trusted because it counts computers, not people, who may visit a site from more than one machine. Whatever the technical merits, one comparison found the “hybrid” counts boost audiences by 30 percent on average; some sites, like The Onion, saw traffic nearly triple. Nielsen has a similar system in the works.
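comScore has not published the details of its hybrid formula, but one plausible sketch of the general idea, with hypothetical numbers, is to start from server-side counts and deflate them by a panel-derived cookies-per-person ratio:

```python
# A hypothetical hybrid calculation: begin with the server-side
# cookie count, then correct for the fact that cookies count
# machines/browsers rather than people, using a ratio estimated
# from the panel (where machines CAN be tied to real people).

def hybrid_unique_visitors(server_cookies, cookies_per_person):
    """Deflate a raw cookie count into an estimate of people.

    cookies_per_person: panel-derived average number of distinct
    cookies (home PC, work PC, cleared cookies) per real person.
    """
    return server_cookies / cookies_per_person

# Hypothetical: 5.9M distinct cookies, panel suggests ~1.8 cookies/person.
people = hybrid_unique_visitors(5_900_000, 1.8)
print(f"Hybrid estimate: {people:,.0f} people")
# → Hybrid estimate: 3,277,778 people
```

Note that this toy version only shrinks the server number; in practice a hybrid can also raise a panel-based figure, which is consistent with the 30 percent average boost the comparison found.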
This article makes me think that some open cookie standard would help. For one thing, the cookie would be regulated, which would be good for users. But it could be a source of clout for the social networks as well. A news viewer may access the same site from several computers, but one thing those computers have in common is the cookie they get from Twitter or Facebook or Google.
Cookies.
#1 Posted by Http://mostmodernist.com, CJR on Wed 8 Sep 2010 at 11:58 AM
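The commenter’s idea can be illustrated with a toy example: if each visit carries a stable login identity (say, a social-network account) in addition to a per-machine cookie, counting by login collapses several machines into one person. All field names and values here are hypothetical:

```python
# De-duplicating visitors by login identity instead of machine cookie.
# Same person on a home PC and a work PC = two cookies, one login.

visits = [
    {"machine_cookie": "aaa", "login_id": "user1"},  # home PC
    {"machine_cookie": "bbb", "login_id": "user1"},  # same person, work PC
    {"machine_cookie": "ccc", "login_id": "user2"},
]

uniques_by_cookie = len({v["machine_cookie"] for v in visits})  # counts machines
uniques_by_login = len({v["login_id"] for v in visits})         # counts people
print(uniques_by_cookie, uniques_by_login)
# → 3 2
```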
Ratings and measurement systems generally give me an upset stomach.
As the article implies, Nielsen's monopoly over television ratings has raised questions for years about how numbers are generated, tracked, manipulated, and published. I've spent more than a few nights looking at TV ratings, week-by-week or year-over-year, and my view is that complacency with a crooked system is just as bad as promoting it. Set-top box data? Some people have legitimate concerns about it, but it's a shame others refuse it just because the new technology would force market researchers to be a little better at their jobs.
#3 Posted by Aaron B., CJR on Wed 8 Sep 2010 at 12:26 PM
"But Nielsen’s numbers are better than nothing at all, and that’s what radio or TV broadcasting offers: no way to detect whether 5,000 people tuned in, or 5 million."
In the age of digital TV this makes little sense to me. If the cable and sat TV companies got together, they could tally an actual count of viewers. They could also indicate how many people actually watch commercials... (not many)... which is probably why they don't report this stuff. I imagine they are doing this kind of research anyway - for their own internal optimization purposes. But the idea that Nielsen is the only option out there seems wrong to me. There are plenty of ways to "detect," either by tallying actual numbers or using a statistically significant sample size. It just seems that there's no interest in doing it.... at least not for public consumption.
#4 Posted by ms, CJR on Wed 8 Sep 2010 at 02:25 PM
Another factor that's totally disregarded by the measurement services: the percentage of hits generated by non-humans (bots, crawlers, and the like). According to TownNews.com, which hosts more than 1,000 newspaper websites, almost 70 percent of the traffic it tracked in January 2010 was generated by spiders, bots, and other web-crawling creatures. Not a pair of eyes among 'em.
#5 Posted by Chucolo, CJR on Wed 8 Sep 2010 at 03:38 PM
Yes, some services may count non-human (robot and spider) traffic. But many publishers using server- or client-side counting apply (1) the IAB robots/spiders list and (2) dynamic filtering to detect non-human traffic, so they get a clean look at their real audience. A CEO running a digital company has a material incentive to understand this difference.
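The filtering described here can be sketched in a few lines. The real IAB spiders-and-bots list is licensed and far longer; the patterns below are illustrative only:

```python
# Minimal sketch of robot filtering on server logs: drop any hit
# whose User-Agent matches a known robot/spider pattern.
# ROBOT_SUBSTRINGS is a stand-in for the licensed IAB list.

ROBOT_SUBSTRINGS = ["googlebot", "bingbot", "slurp", "crawler", "spider"]

def is_robot(user_agent: str) -> bool:
    """Case-insensitive substring match against the robot list."""
    ua = user_agent.lower()
    return any(s in ua for s in ROBOT_SUBSTRINGS)

def filter_human_hits(log_entries):
    """Keep only hits whose User-Agent does not look automated."""
    return [e for e in log_entries if not is_robot(e["user_agent"])]

hits = [
    {"path": "/news", "user_agent": "Mozilla/5.0 (Windows NT 6.1)"},
    {"path": "/news", "user_agent": "Googlebot/2.1 (+http://www.google.com/bot.html)"},
    {"path": "/sports", "user_agent": "Mozilla/5.0 (Macintosh)"},
]
human = filter_human_hits(hits)
print(len(human))
# → 2
```

Dynamic filtering, the second technique mentioned, goes further by flagging behavioral signals (impossible click rates, no JavaScript execution) rather than relying on the User-Agent alone.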
The Council for Research Excellence recently published a paper on Set Top Data and you should be able to find that at http://researchexcellence.com/committees/settopbox_committee.php
Nielsen, Arbitron, and others are audited by the Media Rating Council to provide the transparency and accountability spoken of here. You are absolutely correct: the state of digital measurement is not what we need it to be today. And of more concern should be the challenge of keeping pace in a world of proliferating mobile apps.
#6 Posted by Dan Murphy, CJR on Tue 14 Sep 2010 at 07:52 AM
If editors are looking for useful data on which to base editorial decisions, I'd recommend our Newstogram platform. It goes beyond telling you what stories are popular to showing you the topics and entities that are trending across multiple stories.
#7 Posted by Neil Budde, CJR on Mon 20 Sep 2010 at 01:03 PM
I haven't read all the comments, so this may have already been pointed out, but third-party companies like Scarborough and Gallup determine the readership numbers, and ABC certifies the number of copies sold.
#8 Posted by Peter Sullivan, CJR on Tue 5 Oct 2010 at 03:33 PM
Interesting report. I imagine it is useful for journalists and other media professionals working in large news-based media companies.
The point made toward the end of the report, about the relative obscurity of small media compared with big media in most third-party metrics, needs to be explained more fully.
Another tricky problem is reconciling the focus on journalism with the reality of the non-journalistic or quasi-journalistic media of the internet. That is, advertisers are interested in what consumers are interested in, and consumers are not always interested in news.
Lastly, a minor point concerns the niche media that serve smaller audiences and may integrate other features ('brand extensions') into their sites beyond content-based 'news' (forums, video, classifieds, etc.).
#9 Posted by Glen Fuller, CJR on Wed 26 Jan 2011 at 08:00 PM