In Defence Of The Humble Ad Pre-Test

In this guest post, Visit Victoria’s head of research and insights, Julian Major (main photo), weighs in on the oft-loathed ad pre-test and says the practice doesn’t deserve the negative press it often receives…

Creatives don’t like ad-testing. A big generalisation, I know. But I can empathise. Someone works hard on a solution to a brief, only for it to be met with a poor response from consumers in an artificial and contrived environment. Recently, I’ve seen pre-testing described as ‘snake oil’. Is this actually the case?

Many criticisms of ad-testing are valid. The big question, whether performance in ad-testing is predictive of in-market success, is incredibly difficult to answer. Many brands will not pre-test at all. Those who do may not launch an ad if it tests poorly, or may significantly change it before launch. So, the final sample of ads in-market is biased. And that is before getting to the question of what defines in-market effectiveness. To prove that ad-testing works, you first have to prove that an ad works in-market, which, while not impossible, will be difficult for many brands.

Research is an artificial task, a criticism often levelled at focus groups. Being in a room with eight people, discussing in depth something you would pay scant attention to in real life, is unusual. So is answering questions in an online survey about an ad that someone has forced you to watch, probably more than once. Some will also strap machines to you to capture biometric responses, which is just a tad unusual.

So, I think we are right to be sceptical of any grand claims made by research agencies about the power of ad-testing, and it is not wrong to point out the weaknesses of a methodology.

But does that mean there is no value in pre-testing? I do not think so. After all, we know the alternative is not great. Nicole Hartnett and others from the Ehrenberg-Bass Institute found that marketers’ predictions of the sales effectiveness of advertisements ‘were correct no more often than random chance’. Marketers are good at what they do, but they are not advertising clairvoyants, and we should be sceptical of anyone professing that expertise trumps all else.

Instead, we need to think more humbly about the value pre-testing provides.

While we know there is not a magic formula to create the perfect sales-effective ad, there are some basics that most would agree with. And pre-testing can help with these.

  1. An advertisement cannot be effective if we do not know who it is for. Yet correct brand attribution for individual ads can be abysmal. This is easy to measure.
  2. Advertising needs to cut through. There are lots of ways to measure this. Some will espouse biometrics, but it is also broadly accepted that advertising likeability is a valid measure. Sometimes, likeability will also help brand attribution by drawing more attention to the ad (though it could have the opposite effect in some cases, all the more reason to test!).

Obviously, the actual measure of success is whether the ad changes behaviour, but that is more difficult to understand from a pre-test. You can take a brand funnel measure pre- and post-exposure, or simply ask directly, but this is where the criticisms of pre-testing become very legitimate. A single ad cannot be expected to change brand preference (rather, it nudges propensities), and a persuasion model of advertising is flawed.

But testing can help us understand whether the ad is liked, whether it is correctly attributed and, if not, why. Testing should not be a pass/fail mechanism but a tool to help us build better ads. Does no one know the ad was for your brand after watching it? If so, are there distinctive assets or more direct branding that can be used at key points? If humour is used, is it only funny to the people who wrote the joke? Do people simply not understand the message or attribute you are trying to push?

There is also a time and place, and a right way and a wrong way, to do ad-testing. Qualitative and quantitative methods serve different purposes at different times, and the answer to which to use is always ‘it depends’. There is also the reality that financial constraints mean testing will not always be possible. If a test will cost a lot relative to your working media costs, then maybe leave it. But many brands pump a lot of money into expensive and important ads. These are often part of a broader platform or campaign that adds up over time, and a test may be a small cost to help improve something substantial. While the industry has rightly moved away from running continual Link tests until a magic number is hit, we need to be wary of moving too far in the opposite direction. We can make research better, but it will never be flawless, and that is OK.
