“Statistical Sirens Drowning Our Media Budgets (& My Will To Live)”

In this guest post, Duncan Jones (pictured below), digital strategy director at MediaCom Australia, says we’re an industry obsessed with numbers. But, he argues, the numbers can lie…

We bloody love a statistic in media.

As we guide clients through the sometimes murky waters of media, stats are like beacons reassuring everyone that we are on a sure course towards a ‘consumer connection’ – or so it would seem. As an industry, I think we need to understand exactly what these figures are telling us and how they have been reached; otherwise, we’re in danger of falling victim to sirens who would lead our media dollars astray!

[Pictured: Duncan Jones]

Only last week, I was sitting in a fascinating research presentation on how we should understand the true impact of video advertising. With stats being bandied around like, “Targeting increases ROI by up to 27 per cent” and “11 per cent increase in viewer attention when creative is concise”, I couldn’t help but find myself shuffling media dollars in my mind and preparing to preach to clients. That was, until I discovered the research was based on just 60 people.

While there are undoubtedly lessons to be learnt from what the research says, the stats that come with them create an expectation: if I make this change, I will see that result too. I wish that were the case, but unfortunately, most of the time it just won’t be.

To continue to see past the percentages and stay on track, I would say there are three key questions we need to keep asking ourselves.

Who did the research?

Or perhaps more importantly – who’s funding it? There are essentially two ways research funds itself: (1) someone pays for it as “sponsored research”, or (2) people pay to gain access to research reports completed by third parties.

Now call me a cynic, but if someone is publishing the results of sponsored research, it’s likely to tell a story they can profit from – yeah, you know who you are! Where there are no independent third parties such as Nielsen and Millward Brown involved to ensure an unbiased perspective, we’re at risk of accepting the inflated results of people who have effectively marked their own homework.

Who were they asking?

In this instance, size isn’t everything, but for me it’s important!

For results to be viable, we need data sets that allow for statistical certainty – typically a confidence level of 95 per cent and a margin of error of 5 per cent. The sample therefore needs to match the size and distribution of the large buying audiences it is often claimed to represent.
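To make that concrete, here is a minimal sketch of the standard sample-size arithmetic (textbook formulas for a simple proportion, not tied to any particular study), showing how many respondents a 95 per cent confidence level with a 5 per cent margin of error actually demands – and what a 60-person sample really buys you.

```python
import math

def required_sample_size(z=1.96, margin=0.05, p=0.5):
    """Minimum sample for estimating a proportion: n = z^2 * p(1-p) / e^2.

    z = 1.96 corresponds to 95 per cent confidence; p = 0.5 is the most
    conservative (largest-sample) assumption about the true proportion.
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

def achieved_margin(n, z=1.96, p=0.5):
    """Margin of error a sample of size n actually delivers."""
    return z * math.sqrt(p * (1 - p) / n)

print(required_sample_size())        # 385 respondents for +/-5% at 95%
print(f"{achieved_margin(60):.1%}")  # ~12.7% margin of error from 60 people
```

Under those standard assumptions, 60 respondents deliver a margin of error closer to plus-or-minus 13 points than five – which is one reason headline figures from tiny samples so rarely replicate.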

Beyond the geeky stuff, size intuitively tells me that the sample is inclusive, has depth and a diversity of opinion. It gives confidence that the results produced are not just a publisher’s mates telling them they’re great down the pub, but rather that they encapsulate some kind of ‘realness’ of the audience they are prophesying for.

Too often, stats based on a publisher’s specific or niche audience are blown out to represent all Australians of a certain demographic, which in reality will not be the case. If inclusivity at scale can be proved to exist, then I am ready to believe.
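As a toy illustration of why that extrapolation fails (hypothetical numbers, not drawn from any real panel): suppose a publisher’s panel is 80 per cent 18-34, while the national demographic it claims to speak for is only 35 per cent 18-34. A raw panel average badly overweights the young cohort; post-stratification weighting, a standard survey correction, rescales each group to its population share.

```python
# Toy post-stratification example with hypothetical numbers: a publisher's
# panel skews young, so we reweight each age group to the population mix.

sample = {          # panel composition and avg. agreement with some claim
    "18-34": {"panel_share": 0.80, "agree": 0.70},
    "35+":   {"panel_share": 0.20, "agree": 0.30},
}
population_share = {"18-34": 0.35, "35+": 0.65}  # assumed population mix

raw = sum(g["panel_share"] * g["agree"] for g in sample.values())
weighted = sum(population_share[k] * g["agree"] for k, g in sample.items())

print(f"raw panel estimate: {raw:.0%}")       # 62% - what the deck reports
print(f"weighted estimate:  {weighted:.0%}")  # 44% - the population picture
```

Weighting only helps, of course, if every cell is large enough to estimate in the first place – which brings us back to sample size.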

How were they asking?

‘Does my bum look big in this?’ will invariably lead to zero in 10 people agreeing, unless you want a slap!

There are too many instances where measurement is made too easy by self-fulfilling questioning or conditions. Take the trusty brand uplift study, which we all use to define the success of our campaigns.

Asking, “Do you remember seeing an ad from brand X?” is, to me, the same as saying, “Don’t think of an elephant”. By framing the question in a different way, the innocent, formulaic nature of a brand effect study can quite easily be twisted to produce a snappy ‘10 per cent uplift’ for a funder’s case study.

We need to check that the method and conditions of the research don’t hamstring the respondent. Only when respondents are free to reveal their true ‘media-selves’ can we take the results as valid.
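For what an unbiased readout looks like, here is a minimal sketch assuming a randomised control group and the same unprompted question asked of both groups; the group sizes and responses are illustrative only, not from any real study.

```python
import math

def brand_uplift(exposed_yes, exposed_n, control_yes, control_n):
    """Relative brand uplift against a randomised control group.

    Because exposed and control respondents answer the *same* question,
    any 'don't think of an elephant' framing effect nudges both groups
    equally - the difference isolates the ad's actual contribution.
    """
    p_exp = exposed_yes / exposed_n
    p_ctl = control_yes / control_n
    uplift = (p_exp - p_ctl) / p_ctl  # relative lift over the baseline

    # Two-proportion z-test: is the lift distinguishable from noise?
    p_pool = (exposed_yes + control_yes) / (exposed_n + control_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / exposed_n + 1 / control_n))
    return uplift, (p_exp - p_ctl) / se

# Illustrative numbers only: 1,000 respondents per cell.
uplift, z = brand_uplift(exposed_yes=240, exposed_n=1000,
                         control_yes=200, control_n=1000)
print(f"uplift = {uplift:.0%}, z = {z:.2f}")  # uplift = 20%, z = 2.16
```

A z-score above 1.96 clears the same 95 per cent bar mentioned earlier; the point is that the uplift is earned against a control baseline rather than manufactured by a leading question.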

Three simple questions, but unfortunately these probably won’t give you the answer – sorry. What asking them will do, however, is give us an understanding of the validity of the statistics plonked in front of us on a daily basis.

At MediaCom, our specialist Business Science and Data Insights teams are our first port of call and sit at the heart of our client and consumer understanding. By leveraging the first-party data understanding they create and overlaying it with robust industry statistics, we can triangulate a truer course toward the ‘consumer’.




Please login with linkedin to comment

agency Criteo Designworks Duncan Jones einsights

Latest News

The Mars Agency Announces Latest Findings Of Retail Media Report Card
  • Advertising

The Mars Agency Announces Latest Findings Of Retail Media Report Card

The Mars Agency has developed a scorecard that assesses the capabilities of leading platforms across key criteria required to optimally plan, execute, and measure effective retail media programs. The scorecard aims To help brands efficiently evaluate their spending options across retail media networks in Australia (and New Zealand). With spending on retail media advertising in […]

TV Ratings (27/03/2024): Jungle Members At War Over Concealed Lipstick
  • TV Ratings

TV Ratings (27/03/2024): Jungle Members At War Over Concealed Lipstick

A heated argument between two jungle members did the numbers for Ten last night, with I’m A Celeb obtaining a total national reach of 1,282,000. Fans were delighted as Candice Warner and influencer Skye Wheatley got into it over a stick of lipstick, leading Warner to dub the Instagram star “selfish.” Wheatley, best known for […]