How To Ace the Creative Test

Many advertisers run their ads through a feedback process before the ads go live, but in my experience, creatives whose work is to be pre-tested rarely know how the tests work.

The opaqueness makes the testing process unnecessarily adversarial: instead of providing a feedback loop, testing is seen as a black box that doles out electric shocks.

Whether intentional or not — and some research companies, such as Ameritest, are remarkably open about their methodology — the opaqueness benefits nobody. Assuming pre-test results are a reliable predictor of ads’ future marketplace success, every party will benefit from agencies being able to “study for the test”: pre-pretesting the work on their own ahead of the big day, and making iterative improvements in an attempt to boost the scores.

So if your work is about to be pre-tested, here’s what will be on the test.

There are two ways to gather pre-launch feedback on an ad: either by asking a few people deep, open-ended questions and summarizing and interpreting their responses (“qualitative”), or by having many people do something that can be easily tabulated and combined into a score (“quantitative”). Usually, clients rely on quantitative research to make “go or no-go” decisions, especially if the media budgets are substantial.
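
To make the “quantitative” part concrete, here’s a minimal sketch of how answers get tabulated into a score. The 5-point liking question and the “top-two-box” summary are my assumptions for illustration; every vendor has its own scoring formulas.

```python
# A sketch of turning quantitative answers into a score, assuming a
# hypothetical 5-point liking question (1 = disliked a lot, 5 = liked a lot).
# "Top-two-box" is a common survey summary: the share of respondents who
# chose one of the two most favorable options.

def top_two_box(responses, scale_max=5):
    """Percentage of respondents in the top two answer categories."""
    top = sum(1 for r in responses if r >= scale_max - 1)
    return 100.0 * top / len(responses)

# Made-up answers; a real test would collect a few hundred.
answers = [5, 4, 3, 2, 4, 5, 1, 4]
print(f"Top-two-box liking: {top_two_box(answers):.1f}%")
```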

Different research companies have their own quantitative techniques, ways to calculate the scores, and opinions about what a good score is. Generally, quantitative testing techniques can be:

  1. Direct. Show the ads to respondents in a survey and ask them questions about the ad.

  2. Experimental. Show the ads to some survey respondents and not to others, ask all of them questions about the brand, and then compare the answers of those who were shown the ads with those who weren’t. Some techniques measure the differences in participants’ performance on different tasks: choose something, remember something, notice something. (A sketch of the exposed-versus-control comparison follows after this list.)

  3. Bio-feedback. Show the ads to survey respondents who are hooked up to devices that measure their heartbeat, brain waves, skin conductivity, and eye movements.

Some tests combine several techniques, but many use only one.
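
Here’s a minimal sketch of the experimental technique, assuming a hypothetical yes/no brand question asked of an exposed cell and a control cell. The counts are invented, and real vendors apply their own cell sizes, questions, and significance rules.

```python
import math

# Hypothetical exposed-vs-control comparison: did seeing the ad lift the
# share of people answering "yes" to a brand question? All numbers below
# are placeholders, not data from any real test.

def exposure_lift(exposed_yes, exposed_n, control_yes, control_n):
    """Lift in 'yes' share among the exposed, plus a two-proportion z statistic."""
    p1, p2 = exposed_yes / exposed_n, control_yes / control_n
    pooled = (exposed_yes + control_yes) / (exposed_n + control_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / exposed_n + 1 / control_n))
    return p1 - p2, (p1 - p2) / se

lift, z = exposure_lift(exposed_yes=168, exposed_n=300,
                        control_yes=141, control_n=300)
print(f"Lift: {lift:+.1%}, z = {z:.2f}")  # |z| > 1.96 ~ significant at 95%
```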

Nielsen, Kantar Millward Brown, Ipsos, and Ameritest are the four big research companies in the US that do pre-testing, but there are at least two dozen smaller ones. Many of them offer multiple types of testing, but the formats that rely on asking a few hundred people questions in an online survey are the cheapest, fastest, and most common.

Research companies keep hundreds of questions in reserve, but only eight types of questions matter.

Comprehension
These questions measure whether participants understand the ad correctly and can play back the key idea, benefit, or message you are trying to communicate:

  • What was the company trying to tell you?

  • What was the main message or idea of the ad?

  • Which of these themes was the ad about? (followed by a list of options)

  • Was the ad clear or confusing?

Likeability
There's a popular theory that the degree to which the ad is liked predicts how effective it is (see papers), and many tests ask participants a “liking” question:

  • How much will you enjoy seeing this ad on TV?

  • Did you like or dislike this ad?

  • Was the ad appealing?

Emotion
It’s become common to ask respondents how the ad made them feel, usually by having them pick from a list of words such as “happy”, “surprised”, “confident”, etc. There’s not much consensus on which emotions should be on the list, or which indicate a superior ad: envy, for example, is a negative emotion but a powerful purchase motivator. Here’s a solid list of 26 emotions along with an explanation of why each was selected (pdf).

Breakthrough and Attention
There are a lot of ads out there, and advertisers want to know if theirs is going to stand out:

  • Did the ad get your attention?

  • Is the ad interesting or involving, or boring?

  • Would you notice it if it were on TV?

  • Would you watch it again if you saw it on TV?

Some tests ask whether the ad was relevant or relatable, which is intended as a way to measure both attention and retention:

  • Is the ad/product relevant to you?

  • Have you learned something new from the ad?

  • I can relate to the people or situations in the ad

Memorability
It’s not clear whether people can predict what they will remember, but that doesn’t stop the tests from trying:

  • Does the ad stick in your mind?

  • Was the ad unique?

  • Was the ad different from ads for other similar products?

Brand Linkage / Branding
There are several different ways to measure whether people will remember the product or the brand after seeing the ad:

  • What brand was the ad for?

  • A question with answer options that range from “The ad could have been for any brand” to “You can’t help but remember the brand”

  • Ask respondents to describe the ad and then count the number of times the brand is mentioned (a scoring sketch follows below).
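
That last format can be scored automatically. Here’s a minimal sketch, with “Acme” as a placeholder brand and invented responses; real vendors have their own rules for what counts as a mention (misspellings, sub-brands, and so on).

```python
import re

# Score open-ended ad descriptions by how many of them mention the brand.
# "Acme" and the sample responses are placeholders for illustration.

def brand_mention_rate(descriptions, brand):
    """Share of open-ended descriptions that mention the brand at all."""
    pattern = re.compile(re.escape(brand), re.IGNORECASE)
    mentioned = sum(1 for text in descriptions if pattern.search(text))
    return 100.0 * mentioned / len(descriptions)

responses = [
    "A family road-trips in an Acme car and sings along to the radio",
    "Some people driving through the mountains, not sure what it was for",
]
print(f"{brand_mention_rate(responses, 'Acme'):.0f}% mentioned the brand")
```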

Persuasion and Motivation
These measure whether the ad does what it needs to do: change people’s opinion of the brand or product, or make them want to act. The questions are usually straightforward:

  • This ad made the company seem more/less appealing

  • The ad makes me want to buy the advertised product / visit website / look for more info

  • The ad makes me more/less interested in…

Negatives and Disaster Check
These types of questions look out for people who have strongly negative feelings about the ad:

  • Irritating

  • Boring

  • Misleading

  • Confusing

“Offensive” is not something a lot of systems check for, although they probably should.
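
Here’s a minimal sketch of a disaster check, with “offensive” added to the word list. The words, the sample picks, and the 15% cutoff are all placeholders rather than anyone’s actual norms.

```python
# Flag the ad if too many respondents pick any strongly negative word.
# Word list, sample picks, and the 15% cutoff are invented placeholders.

NEGATIVE_WORDS = {"irritating", "boring", "misleading", "confusing", "offensive"}

def disaster_check(picks, threshold=0.15):
    """True if the share of respondents choosing a negative word exceeds the cutoff."""
    negative = sum(1 for chosen in picks if chosen & NEGATIVE_WORDS)
    return negative / len(picks) > threshold

respondents = [{"boring"}, set(), {"clever", "funny"}, {"irritating", "confusing"}]
print("Disaster:", disaster_check(respondents))  # 2 of 4 = 50% -> True
```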

Even if you know nothing else about the research company or the test itself, you can improve the ad’s results by “studying” for these eight types of questions. Use them to roll your own questionnaire, and pre-test your ads on one of the many inexpensive DIY survey platforms (I recommend my friends at AYTM).
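
If you want a starting point, here’s a sketch of a questionnaire skeleton built from the eight types. The wording, order, and answer formats are my own placeholders; adapt them to whatever platform you use.

```python
# A DIY pre-test questionnaire skeleton covering the eight question types.
# Every question and answer format below is a placeholder, not any vendor's
# actual instrument.

QUESTIONNAIRE = [
    ("Comprehension", "What was the main message or idea of the ad?", "open-ended"),
    ("Likeability",   "Did you like or dislike this ad?", "5-point scale"),
    ("Emotion",       "Which words describe how the ad made you feel?", "multi-select"),
    ("Breakthrough",  "Would you notice this ad if it were on TV?", "yes/no"),
    ("Memorability",  "Does the ad stick in your mind?", "5-point scale"),
    ("Branding",      "What brand was the ad for?", "open-ended"),
    ("Persuasion",    "Does the ad make you more or less interested in the product?", "more/same/less"),
    ("Negatives",     "Was the ad irritating, boring, misleading, or confusing?", "multi-select"),
]

for topic, question, fmt in QUESTIONNAIRE:
    print(f"{topic:>13}: {question} ({fmt})")
```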

While your survey is fielding, catch up on the debate about pre-testing with the famous 1974 book Testing to Destruction (pdf).