Radio ratings - why is there such a problem?
Over recent weeks expressions of dissatisfaction
with Arbitron's Portable People Meter (PPM) ratings have grown
to the extent of four broadcasters writing to the company
to ask for immediate action over what they perceive as sampling
problems that have particularly affected the figures for black
and Spanish-language stations.
Had the inaccuracies been known but general in their effect,
we suspect the broadcasters and advertisers would have lived
with them, but now big money is at stake for some broadcasters.
The system, however, has gained Media Rating Council (MRC)
accreditation in Houston, where it first became currency,
and Arbitron has been insisting that its system is up to scratch.
So, why the fuss? And, more to the point, are there any insuperable
problems or just ones of finance?
The first thing to be clear about is what
ratings measure: they are a quantitative
rather than a qualitative tool.
For qualitative information, people can be asked detailed questions,
but this is time-consuming, costly and not of particular
interest to advertisers. Insofar as it is of interest for
services not funded directly by adverts - such as subscription
services or public radio - the "market", to use
the term in a broad sense, gives some clues. For public radio,
shows that attract devotion, even if not the largest audience,
draw support in terms of pledges; for subscription services
on satellite radio, setting up an easy-to-use feedback system
can give an idea.
So what we are talking about in the end is ears in the presence
of audio, whether those ears are paying attention, half-listening
or indeed not listening at all.
Diaries will probably over-estimate to a degree those programmes
where some of the diary keepers have a keen interest and under-estimate
the programming they jump in and out of.
Electronic metering will correct this - but not flag the fact
that some programmes are actively listened to and others are just on
in the background - and we would expect it to increase the
ratings for the kinds of shows that people often dip in
and out of, such as news and sports (except at times when there
is a particular game on).
How can the ratings be made as accurate as possible?
If we are talking about diaries, the essentials
are a well-designed system to elicit responses that are correct
without prompting people to note specific programmes or stations,
allied with a suitably large number of diary keepers and a proper
demographic balance within that number.
There will always, however, be a reliance on human memory -
it is not believable that all diary keepers conscientiously
note all their listening all the time - and this will introduce
distortions. That has always been known, but there was
no alternative, so the distortions were accepted; other humans
in advertising agencies made their own mental allowances for
them, and business was done. There was, however, always pressure
to minimize the human element using technology - hence the move
to electronic metering.
If we are talking about electronic metering, the first priority
is technology that accurately picks up what audio is being listened
to: whether it does this through embedded codes or audio matching
is irrelevant so long as the record is accurate. Once the technology
has been tried and tested, this should never be an issue.
There are significant issues, however, in terms of the people
who carry the devices: The human element may have been reduced
but it hasn't been eliminated.
If the device carriers switch them off for any reason (or fail
to keep them charged, which has the same effect), their information
is degraded, although we would expect the devices themselves to
keep a record of when they were last switched "On" and when they went
"Off", which, with suitable software, would allow automated
allowance to be made for any such periods.
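By way of a rough illustration of what we mean - and this is our own sketch in Python, not a description of Arbitron's actual processing, with the log format and the eight-hour threshold invented for the example - such an on/off record could be turned into a simple compliance check along these lines:

```python
from datetime import datetime, timedelta

# Hypothetical on/off log for one meter carrier over one ratings day:
# each entry is (timestamp, state). The format and the threshold below
# are our own assumptions, not Arbitron's actual rules.
log = [
    (datetime(2009, 5, 1, 6, 30), "on"),
    (datetime(2009, 5, 1, 9, 15), "off"),   # switched off or battery flat
    (datetime(2009, 5, 1, 12, 0), "on"),
    (datetime(2009, 5, 1, 23, 0), "off"),
]

def off_minutes(log, day_start, day_end):
    """Total minutes within the day for which the device was not recording."""
    total = timedelta(0)
    state, since = "off", day_start
    for ts, new_state in log:
        if state == "off":
            total += ts - since
        state, since = new_state, ts
    if state == "off":
        total += day_end - since
    return total.total_seconds() / 60

day_start = datetime(2009, 5, 1, 5, 0)   # ratings day taken as 5am-5am here
day_end = datetime(2009, 5, 2, 5, 0)
gap = off_minutes(log, day_start, day_end)

# A simple compliance rule: discard or down-weight the day if the
# device was off for more than, say, eight of the twenty-four hours.
usable = gap <= 8 * 60
print(f"Device off for {gap:.0f} minutes; day usable: {usable}")
```

The point is simply that the gaps are knowable: whether a panellist's day is kept, down-weighted or thrown out then becomes a matter of stated policy rather than guesswork.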
Then there is the question of numbers. If you can't get enough
people to carry the devices, the results will again be degraded.
And finally there is the issue of demographic mix: this requires
an analysis of the demographics of the market and then matching
the demographics of those who carry the devices to it.
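To illustrate what that matching involves - again our own sketch, with invented groups and figures rather than anything Arbitron publishes - the standard approach is to compare the panel's make-up with the market's and weight accordingly:

```python
# Rough sketch of comparing a panel's demographic mix with the market
# and correcting by weighting. Groups and numbers are invented for
# illustration; real ratings weighting is considerably more elaborate.

market_share = {        # share of the market population in each group
    "18-34": 0.30,
    "35-54": 0.40,
    "55+":   0.30,
}

panel_counts = {        # people actually recruited to carry meters
    "18-34": 90,        # young adults under-represented in this example
    "35-54": 220,
    "55+":   190,
}

panel_total = sum(panel_counts.values())

for group, share in market_share.items():
    panel_share = panel_counts[group] / panel_total
    weight = share / panel_share   # >1 means the group is under-represented
    print(f"{group}: panel {panel_share:.0%} vs market {share:.0%}, weight {weight:.2f}")
```

A weight well above one means a group is badly under-represented in the panel, which is essentially the complaint being made about black and Spanish-language audiences; the larger the weights needed, the shakier the resulting figures for stations serving those groups.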
Demographic mix
Demographic mix, it seems to us, is likely to be the
main area where a problem can develop, since some groups
can be expected to be more willing to carry the devices
than others, and this will affect results much more than
a general but evenly distributed problem in recruiting
people to carry metering devices.
Some of the problems can presumably be overcome using
incentives, provided the cost of these is outweighed
by the demographic benefits that accrue - not forgetting
that if one group gets more incentives than another
there may well be knock-on dissatisfaction amongst
the less rewarded groups that affects their willingness
to get involved.
Ultimately, therefore, it's a bottom-line matter. Getting
accurate results means recruiting enough people with a
suitable demographic make-up within the group. Fewer
people are likely to mean inaccuracies, but it
will cost more to increase the numbers.
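The arithmetic behind that trade-off is the familiar one for sampling error: halving the error requires roughly four times the panel. As a rough illustration - assuming a simple random sample, which real weighted panels only approximate - the numbers look like this:

```python
import math

# Rough illustration of how sampling error shrinks with panel size.
# For a simple random sample, the margin of error on an audience share p
# at roughly 95% confidence is about 1.96 * sqrt(p * (1 - p) / n).
# Weighted panels have a smaller effective n, but the trade-off is the same.

p = 0.05  # a station with a 5% share of listening
for n in (500, 1000, 2000, 4000):
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:5d}: share 5.0% +/- {moe * 100:.1f} points")
```

For a station with a five per cent share, a 500-person panel gives an uncertainty of nearly two share points either way - a swing worth a lot of advertising money - while quadrupling the panel only halves it.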
As for stations or formats that come out better or worse
with electronic metering: if the results can be shown
to be accurate then it's hard lines, because this is what
the advertisers want. If they can be shown to be inaccurate,
there is a problem for both the broadcasters and the ratings
supplier, because the figures are not much use to anybody and
not worth paying for or having.
If there is a monopoly then the
ways forward are fairly limited - in the last analysis they boil
down to not being rated or paying up, and there is no competition
to set the rate.
For that reason alone, Clear Channel's initiative in calling
for competitive tenders was a sound move and we would suggest
the broadcasters would probably have been in a better business
position had they committed development money and some contracts
to a competing system - maybe using both systems in a limited
number of markets and bearing the costs of double ratings in
one or two of them in the knowledge that the market would then
give both ratings suppliers a reason to provide the best value
service.
That, however, would seem to be ruled out by the decisions made
by all the large broadcasters to go with Arbitron's PPM, unless
of course some of those complaining are preparing to jump ship
in some markets. If they aren't, they can be as unhappy as they
like, but Arbitron has them by the genitals when it comes to
squeezing.
We therefore think that the idea of supporting a second supplier
in some markets - it already happens with the diary system with
Eastlan competing with Arbitron in the US for small market ratings
- is worth some very serious thought by both broadcasters and
advertisers.
If a second supplier exists using different technology, we would
expect ratings costs potentially to increase a little in the
short term but to be kept down in the longer term.
What we would envisage is market-by-market competition
for ratings. For network ratings, the advertisers and broadcasters
should jointly insist that they are purchasing local ratings
that they can aggregate themselves if they wish, thus allowing
both suppliers either to provide network figures that can then be
combined for an overall picture or simply to supply the required
local numbers.
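Aggregating the local figures is not onerous. Assuming each supplier delivers a rating per market and the market populations are known - the markets and numbers below are invented for illustration - the sums run along these lines:

```python
# Minimal sketch of rolling local ratings up into a network figure.
# Each tuple is (market, population 12+, AQH rating as a fraction);
# the values are invented, not real Arbitron figures.

local = [
    ("Houston",      4_800_000, 0.042),
    ("Philadelphia", 4_400_000, 0.038),
    ("New York",    15_800_000, 0.035),
]

total_pop = sum(pop for _, pop, _ in local)
aqh_persons = sum(pop * rating for _, pop, rating in local)

print(f"Network AQH persons: {aqh_persons:,.0f}")
print(f"Network AQH rating:  {aqh_persons / total_pop:.3%}")
```

Weighting by population is the only subtlety; everything else is addition, which is why there is no good reason for broadcasters to be sold network numbers they could not reproduce themselves.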
In addition, we would see it as very valuable if in a few markets
contracts - not necessarily the same ones all the time: the
secondary contracts could be for a limited period - are given
to both, thus allowing analysis of any weaknesses and subsequent
improvements for all the other markets. Again this would be a
matter of cost, but it should certainly increase confidence in
the system if both contractors are providing broadly the same
results, and enable weaknesses to be investigated where there
is a significant difference.
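A sketch of the sort of comparison we have in mind - with invented station shares and panel sizes, and using nothing more sophisticated than a check against sampling error - might look like this:

```python
import math

# Rough sketch of checking whether two suppliers' figures for the same
# stations differ by more than sampling error would explain. Station
# names, shares and panel sizes are invented for illustration.

n_a, n_b = 1500, 1200   # effective panel sizes for suppliers A and B

stations = [
    # (station, share from A, share from B)
    ("WAAA", 0.061, 0.058),
    ("WBBB", 0.044, 0.029),
    ("WCCC", 0.012, 0.014),
]

for name, a, b in stations:
    # standard error of the difference between two independent estimates
    se = math.sqrt(a * (1 - a) / n_a + b * (1 - b) / n_b)
    z = abs(a - b) / se
    flag = "investigate" if z > 1.96 else "within sampling error"
    print(f"{name}: A {a:.1%} vs B {b:.1%} -> {flag}")
```

Differences that sampling error cannot explain are exactly the places where the methodology of one supplier or the other deserves a closer look.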
What do you think? Please E-mail your comments.