How to test your programmatic creatives like a professional

According to Ipsos, over 75 percent of advertising impact is determined by creative quality.

And yet, creative messaging seems to be an afterthought next to ad placement and targeting.

I get it. Making the perfect creative requires some patience. But if you have the budget for it, testing programmatic display creatives is worth your time. I’m talking consistency, a dramatic increase in performance and insights that can be used in other channels. For the best results, you’ll need to test thoroughly.

Creative testing

The first method is to test lots of creatives at once with the aim of gathering insights. It’s typically used for short campaign periods (two months or less) with larger budgets. It’s great for quick insights, but it won’t tell you with certainty what performed well or why.

The second method, which I recommend for longer campaigns (e.g., always-on accounts), is good old A/B testing. This lets you really find out what’s working and what isn’t, even with smaller budgets; just extend the test’s duration to match the size of the budget!
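How long is long enough? A rough way to gauge it is to estimate how many impressions each variant needs before a CTR difference becomes statistically meaningful. Here’s a minimal sketch in Python (standard library only); the baseline CTR and target lift are hypothetical:

```python
from statistics import NormalDist

def impressions_per_variant(base_ctr, lift, alpha=0.05, power=0.8):
    """Rough two-proportion sample-size estimate for an A/B CTR test.

    base_ctr: current click-through rate (e.g., 0.002 = 0.2%)
    lift:     relative improvement you want to detect (e.g., 0.2 = +20%)
    """
    p1 = base_ctr
    p2 = base_ctr * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return int(n) + 1

# Hypothetical numbers: 0.2% baseline CTR, hoping for a 20% relative lift.
n = impressions_per_variant(0.002, 0.2)
print(f"~{n:,} impressions per creative")
# Divide by your daily impression volume per creative for a rough test length.
```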

The real trick is to know how and when to combine these two methods.

Creative optimization

The standard method of creative optimization is a one-size-fits-all approach: analyze creative performance across the whole account, keep the top performers and eliminate the rest. I recommend a more granular strategy, where the analysis and optimization are both carried out at the placement level and for each targeting type.

For one client account, we set up a test of five creatives (one generic, the rest with more focused messaging) in five different sizes.

Splitting the DoubleClick Campaign Manager account out at a granular level, we also duplicated those creatives across six targeting types. Five creatives in five sizes, duplicated for six targeting types, meant we were testing 150 creatives in batches of five at a time. Using proprietary in-house creative optimization and reporting, we identified the two best-performing creatives from each batch and eliminated the others.


This data-driven system led to a 24 percent increase in click-through rates (CTRs) and an 87 percent decrease in cost per action (CPA) week on week.
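Our own selection step ran on proprietary tooling, so what follows is not that system, just a minimal sketch of the same idea in Python with pandas. The data, creative names and numbers are made up:

```python
import pandas as pd

# Hypothetical slice of a performance export: one row per
# creative / size / targeting-type combination.
df = pd.DataFrame({
    "creative":       ["generic", "message_a", "message_b"] * 2,
    "size":           ["300x250"] * 3 + ["728x90"] * 3,
    "targeting_type": ["retargeting"] * 6,
    "impressions":    [52_000, 48_500, 50_200, 31_000, 29_800, 30_500],
    "clicks":         [104, 145, 88, 47, 92, 61],
})

df["ctr"] = df["clicks"] / df["impressions"]

# Keep the two best performers per size x targeting-type batch; pause the rest.
keepers = (
    df.sort_values("ctr", ascending=False)
      .groupby(["targeting_type", "size"])
      .head(2)
)
to_pause = df.loc[~df.index.isin(keepers.index), ["creative", "size", "ctr"]]
print(to_pause)
```

In practice, you would also want a minimum-impression threshold per creative before trusting any CTR ranking.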

But testing isn’t finished at this stage (Is it ever?). When you are left with two creatives, it’s the perfect time to implement an A/B test to find out exactly which features perform best for different audiences.

What should you be testing?

A good place to start is Google’s best practice advice, which recommends:

  • A clear design and call to action (CTA).
  • Updated creatives to avoid creative fatigue.
  • A sharp and noticeable brand.

The two main things to think about are aesthetics and messaging.

For aesthetics, you could be looking into animated versus static creatives, background and text colors, and logo placement.

For messaging, the CTA is the main element to focus on, as it has a huge impact. The focus of the messaging itself should also go through a rigorous testing process.

For example, working with a company in the finance sector, we categorized ads as “benefits” or “incentives” and found that “benefits” ads performed better. We could then use the results to test whether the benefits should be quantitative or qualitative and, if quantitative, which kind of quantitative benefit performed best.
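The call on which category wins at each stage should be statistical rather than eyeballed. Here’s a minimal two-proportion z-test sketch in Python (standard library only), with hypothetical click and impression counts:

```python
from math import sqrt
from statistics import NormalDist

def ctr_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test: is variant B's CTR different from A's?"""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# Hypothetical counts for "incentives" (A) vs. "benefits" (B) ads.
z, p = ctr_z_test(clicks_a=380, imps_a=160_000,
                  clicks_b=460, imps_b=158_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 -> treat the difference as real
```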

Picture a Gantt chart laying these tests out in sequence, with each stage scheduled to build on the winner of the one before.

This allows for a very flexible strategy while delivering a lot of great insights.

Another plus point is that preparing a chart like this lets everyone know where the creatives are headed and allows designers to prepare for all eventualities. It also helps you plan creatives in advance and tie display activity to the overall content strategy.

It’s important to continually think about the value your content adds for customers at specific stages of their journeys, and to keep experimenting so that data drives creative decisions and optimization.

To close

One final thing to be aware of with A/B tests is false plateaus.

An A/B test will help you find the nearest peak in performance, but not necessarily the highest peak.

It’s important to occasionally throw in a random creative and see if it performs better than your current best-performing creative. Think of it as an A/B/C test, if you will.

If it does beat your other creatives, you can work out what performed well and optimize for that. You’ll still have useful insights from the original A/B test regarding messaging and aesthetic features, which can be used to design the next test.
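If you want to formalize that wildcard habit, it’s essentially what the bandit literature calls epsilon-greedy exploration. Here’s a minimal sketch, with hypothetical creative names and a 10 percent exploration share:

```python
import random
from collections import Counter

CHAMPIONS = ["winner_a", "winner_b"]        # survivors of the A/B test
CHALLENGERS = ["wildcard_1", "wildcard_2"]  # fresh, untested ideas
EPSILON = 0.1  # share of impressions reserved for exploration

def pick_creative():
    """Serve a proven champion most of the time; occasionally try a wildcard."""
    if random.random() < EPSILON:
        return random.choice(CHALLENGERS)  # explore: hunt for a higher peak
    return random.choice(CHAMPIONS)        # exploit: stick with what works

served = Counter(pick_creative() for _ in range(10_000))
print(served)  # the challengers get roughly 10% of traffic between them
```

Ten percent is just an illustrative split; tune it to however much budget you can afford to spend exploring.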
