Radio Ratings are in a Heap of Trouble
If you can’t trust the Nielsen ratings in America’s largest radio market, then where can you trust them?
A few days ago, Nielsen delayed the release of May ratings for Los Angeles when “inconsistencies” resulted in the need for more “quality control reviews.”
Since then, the problem has only deepened. From the LA Times:
Nielsen has been attempting to verify that individuals who participated in its sample audience panel were truly independent and did not have ties to any radio stations or radio personalities that were being measured. The ratings agency said Wednesday that it needed to “remove a household from the panel for not meeting our quality standards.” Radio industry insiders said such actions were unprecedented. One person said the move marked the first time in more than a decade that the release of radio ratings in Los Angeles had been postponed due to such concerns.
One single household has this scale of impact? In LA?!
So what’s really going on here?
The rumor in the radio industry is that this particular household had media ties and the usage recorded was significant enough to skew the overall ratings for the market.
Indeed, how could it be otherwise? If one household mattered so little to the overall results, why would Nielsen bother to hold back the ratings for the entire market? The very act of withholding the data implies that one household can skew the whole in a significant way.
Ponder that for a moment:
One household in Los Angeles has the potential to meaningfully skew the ratings for America’s largest market!
And if this one household can do that in LA, then can’t any one household in any market do it any time?
What does this say about the validity of the metrics? Why is one household with outlandish listening any more suspect than any other household with outlandish listening, simply because it happens to be affiliated with a local media company?
Do we see this same impact for online radio metrics?
I set out to find the answer, and the results will surprise you.
With Pandora’s permission, I gathered their Webcast Metrics listenership data for April in the LA MSA, among all persons, M-F 6A – 7P.
Pandora’s total AAS (Average Active Sessions) in LA during this period was 119,111.
What happens when we pull out the single heaviest listener in the audience? AAS barely declines at all, to 119,102.
What happens when we pull out the heaviest 10,000 listeners from the audience? AAS declines to 109,072.
In other words, you barely dent the overall online radio listening averages when you remove the heaviest-listening 1, 10, 100, or even 10,000 listeners!
What does it mean when one household corrupts the PPM data for America’s largest market but 10,000 people can’t do it for online radio?
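The asymmetry comes down to sample size. A metered panel asks a few thousand households to stand in for millions of listeners, so each meter carries enormous statistical weight; server-side streaming logs count every listener directly, so no single outlier moves the average. A minimal simulation of that contrast (the panel size, listening levels, and the outlier household here are illustrative assumptions, not Nielsen’s or Triton’s actual figures or methodology):

```python
import random

random.seed(1)  # reproducible illustration

def average_listening(minutes):
    """Mean listening minutes across a group of listeners."""
    return sum(minutes) / len(minutes)

# Panel-style measurement: ~2,000 metered households represent the market.
panel = [random.gauss(60, 20) for _ in range(2_000)]   # minutes per day
outlier = 2_000.0                                      # one extreme household

# Census-style measurement: every streaming listener is logged directly.
census = [random.gauss(60, 20) for _ in range(119_000)]

# How much does adding the same extreme household shift each average?
panel_shift = average_listening(panel + [outlier]) / average_listening(panel) - 1
census_shift = average_listening(census + [outlier]) / average_listening(census) - 1

print(f"Panel estimate shifts by  {panel_shift:.2%}")
print(f"Census estimate shifts by {census_shift:.4%}")
```

Under these assumptions the same outlier household moves the panel estimate by a couple of percentage points but moves the census estimate by only a few hundredths of a percent, which is why removing even 10,000 of Pandora’s 119,111 sessions changes the totals roughly in proportion to their share, and no more.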
Update: Apparently there are two households involved here. Not that this changes my point in the least. Indeed, if not for the first problem household, it’s unlikely Nielsen would ever have found the second. By the way, do you suppose these are all of them? Do you suppose that in all the PPM markets in America there aren’t media-related squatters hiding their PPM devices right this very minute? But even if there aren’t, the lesson is now clear: One or two households in America’s largest market can quite unfairly shape millions of dollars in buying decisions, all built on a house of cards paid for by broadcasters and brought to you by Nielsen. Well done, Nielsen. Well done, radio. Well done, advertisers. Let’s make your clients proud!
#losangeles #pandora #aas #radioratings #radio #markramsey #Media #measurement #error #radioindustry #ratings #nielsen #triton #markramseymedia #ppm