Does this mean that finally, after fifteen years of mounting chaos in online metrics, a single standard will take hold? That something like the relative clarity of TV ratings will be achieved? Don’t bet on it. No trade group or task force can address the fundamental problem—if it is a problem—of counting online audiences: too much information.
The “banner ad” was standardized by the site HotWired in late 1994. The next step was obvious: HotWired began to report what share of people clicked on each banner, i.e. the “click-through rate,” giving advertisers a new way to think about the impact of their campaigns.
That origin story goes a long way toward explaining the informational mayhem that afflicts online media today. Every visit to, say, Salon or Nytimes.com yields a blizzard of things to measure and count—not just “click-throughs” but “usage intensity,” “engagement time,” “interaction rates,” and of course “page views” and “unique visitors,” to name a few. How deep into the site do visitors go? How long do they stay? Match any numerator to any denominator to make a new metric.
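The arithmetic is trivially easy, which is part of the problem. A minimal sketch, with entirely made-up figures, of how a handful of raw counts multiplies into a menu of metrics:

```python
# Hypothetical raw counts one ad server might log in a day
# (all figures invented for illustration).
impressions = 250_000      # times the banner was served
clicks = 1_200             # times it was clicked
visits = 40_000            # sessions on the site
page_views = 130_000       # pages served across those sessions
total_seconds = 9_600_000  # cumulative time spent on the site

# Match any numerator to any denominator to make a new metric.
click_through_rate = clicks / impressions     # the original 1994 metric
pages_per_visit = page_views / visits         # "usage intensity"
seconds_per_visit = total_seconds / visits    # "engagement time"

print(f"CTR: {click_through_rate:.2%}")            # CTR: 0.48%
print(f"Pages per visit: {pages_per_visit:.2f}")   # Pages per visit: 3.25
print(f"Engagement: {seconds_per_visit:.0f}s")     # Engagement: 240s
```

Five counts yield three metrics here; with dozens of counts, the combinations run into the hundreds.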
The statistics accumulate not only at the sites you visit, but also in the servers of every advertiser or “content partner” whose material loads on the same Web page. Any of these servers can attach a “cookie” to your browser to recognize when you visit other sites in the same editorial or advertising networks. Data at each tier can be collected and analyzed (thus, measurement firms like Quantcast and Hitwise pull traffic figures from ISPs to come up with their own audience figures).
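The cookie mechanics are simple enough to sketch with Python’s standard library (the network domain and visitor ID below are invented):

```python
from http.cookies import SimpleCookie

# Hypothetical: what a third-party ad server attaches the first time
# your browser fetches a banner from it.
cookie = SimpleCookie()
cookie["visitor_id"] = "a1b2c3d4"  # a random ID, not your name
cookie["visitor_id"]["domain"] = ".adnetwork.example"
cookie["visitor_id"]["max-age"] = str(60 * 60 * 24 * 365)  # persists a year

# The header sent back alongside the banner image:
print("Set-Cookie:", cookie["visitor_id"].OutputString())
```

From then on, any page that loads anything from `.adnetwork.example` prompts the browser to send the ID back, letting the network link one visitor’s visits across otherwise unrelated sites.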
The Web has been hailed as the most measurable medium ever, and it lives up to the hype. The mistake was to assume that everyone measuring everything would produce clarity. On the contrary, clear media standards emerge where there’s a shortage of real data about audiences.
Nothing illustrates this better than Nielsen’s TV ratings system, which has enjoyed a sixty-year reign despite persistent doubts about its methodology. The company has responded to some critics over the years, for instance by increasing the number of Nielsen households and relying less on error-prone viewer “diaries.” It can’t do much about the most serious charge, that the panel is not a truly random sample and thus fails a basic statistical requirement.
But Nielsen’s numbers are better than nothing at all, and nothing at all is what a radio or TV broadcast signal offers on its own: no way to detect whether 5,000 people tuned in or 5 million. With nothing to go on, accuracy matters less than consensus—having an agreed-upon count, however flawed, as long as it skews all networks equally.
Print publications have more hard data—a newspaper knows how many copies it distributes, though not how many people actually read them. So publishers rely on third-party auditors like the Audit Bureau of Circulations to certify the squishy “pass-along” multiples that magically transform a circulation of 192,000 at The Miami Herald, for instance, into a total “readership” of 534,000.
By comparison, computer networks are a paradise of audience surveillance. Why expect media outlets, agencies, and advertisers to abide by the gospel of one ratings firm, to talk about only one number, with so much lovely data pouring in from so many sources? “People use whatever numbers look good that month. It gives publishers some flexibility,” says Kate Downey, director of “audience analytics” at The Wall Street Journal, which subscribes to Nielsen, comScore, Omniture, and Hitwise. “I think if everybody had the same numbers, we would hate that even more.”
There’s another reason for the lack of consensus about audiences on the Web: the numbers don’t matter as much to advertisers. As any Mad Men fan knows, Nielsen’s TV ratings are a kind of currency on Madison Avenue. An extra point or two of penetration translates into millions of dollars over a season. That’s why plot lines peak and the news gets trashier during “Sweeps Week,” when local ad rates are set.