It’s all the rage, the “granular” data made possible by Arbitron’s PPM technology. In practice, it enables the programmer (and the agency) to understand more and more about less and less.
To be sure, this data has a role. But the zeal with which it’s being promoted and, I would argue, over-used is worse than bad for radio’s future: It’s downright dangerous.
You need to fundamentally understand, as I explained in my PPM presentation at the NAB last year, that there are two essential kinds of listener behaviors that your stations must grapple with:
1. Retention, where the goal is to keep listeners listening, to plug the so-called “leaky bucket.” And…
2. Acquisition, where the goal is to invite listeners back to the station, or to the station in the first place.
“Granular” or moment-by-moment analysis of your station’s content will help you keep listeners listening, to be sure. It will help you remove distractions to the core reason listeners are using you. It will help you arrange your necessary evils (i.e., spots) in such a way that you minimize audience loss. It will, conceivably, allow you to evaluate the performance of individual songs, if you assume that listeners are tuning out your station because they hate that song rather than because they just turned off their car (a big assumption, by the way). It will help us understand which spots listeners will tolerate and which spots they won’t (which is completely different from which spots WORK and which don’t, but let’s leave that aside for now).
However, “granular” data will do nothing to help you with listener acquisition. You can plug the leaks all you want, but eventually you need to refill the bucket. Keeping me tuned in does nothing to attract me back. I will come back not because you have eliminated all negatives but because you possess special and unique positives. I will come back because the “brand” is bigger than the momentary experience I have with it when I’m with you.
God knows that Howard Stern’s show presents oceans of lulls punctuated by moments of sharp hilarity and sheer genius. Now it’s not possible to eliminate – or even dramatically reduce – those lulls. Yin needs yang and genius needs lulls. We listen for the peaks and we tolerate the lulls. Or we tune out during the lulls but tune back because we know a peak is on the way.
It’s vastly easier for “granular” analysis to find lulls than to find peaks. But anyone who thinks that eliminating lulls is the same as creating peaks has never created radio worth listening to. You see, peaks require innovation and risk-taking – by definition peaks are hit-and-miss. In other words, the very act of creating peaks also creates lulls.
How else to explain the dismal record of new TV pilots, all of which are thoroughly pre-tested with the same dial technology that many of you may use for your music? The producers are looking to maximize the highs, and in so doing they push the shows towards constant laugh lines. And laugh lines without realistic and complicated characters supporting them yield sameness and mediocrity, and leave the viewer with a sense of innocuous indifference.
Take the pretesting on the US version of “The Office,” which reportedly received the lowest scores in NBC TV history. I asked program creator Ricky Gervais about this a couple years ago. “Good,” he said. “Because when we tested the British version of ‘The Office’ it also received the lowest scores in the history of the BBC.” Both versions of the program went on to become hits, of course. Because their formula could not be fairly evaluated based on twisting a dial to the right like you do for all the other crap on TV.
“Granular” ratings evaluation is a form of reductionism. You can take something apart and evaluate all its components, but that doesn’t mean you can reassemble it better than it was. Ask any meth-head.
I’m in the research business, so I know more than a little about splitting things into their tiny bits in order to learn more about them. However, in my perceptual work, my goal is always to read between the lines of this reductionist data so that the whole is bigger and better than the sum of its parts. It’s an issue I’m keenly attuned to, and it’s probably a big difference between me and others who do what I do.
So before you go off the deep end (and spend off the deep end) for new and ever-more reductionist ways to split your station’s brand into the minutes that supposedly make it up, ponder these words from writer Jonah Lehrer:
Not everything benefits from being broken down into tiny pieces. Look at a Beethoven symphony. If the music is reduced to wavelengths of vibrating air, we actually understand less about the music. The intangible beauty, the visceral emotion, the entire reason we listen in the first place: all is lost when the sound is reduced to its most basic elemental details. In other words, reductionism can leave out a lot of reality.
Reductionism can leave out a lot of reality.
Don’t take my word for it, or Jonah’s. Listen to what this guy has to say:
Of what value is he who, in order to abbreviate the parts of those things which he professes to give complete knowledge, leaves out the greater part of the things of which the whole is composed? Oh, human stupidity! You don’t see that you are falling into the same error as one who strips a tree of its adornment of branches full of leaves, intermingled with fragrant flowers or fruit, in order to demonstrate that the tree is good for making planks.
Not my words. The words of a famed engineer, artist, sculptor, architect, athlete, scientist, anatomist, and radio industry consultant.
The words of Leonardo da Vinci.