Testing, testing, can you hear me?

How often do you test? Never? Once in a blue moon? All the time? The right answer is the last of these, but for too many publishers, testing is something they never quite get round to doing. Which is a shame, because it will be costing them money. Jenny Moseley has some dos and don’ts for effective testing.

By Jenny Moseley

When our esteemed publisher asked me to write an article on testing, I rubbed my hands with glee. As a long-time direct marketer, I fret considerably about how decisions on future strategies are being made by subscriptions departments. True, we live in a time-poor world, but for me there is no excuse for cutting the time you spend analysing your results and planning your future. Corners are being cut a bit too much these days because the next ten deadlines are pressing.

You are testing, aren’t you? Testing, validating, back-testing, tweaking, improving, looking at figures every which way? And you’re not testing on a whim, are you? A good idea from your publisher may not be a good idea in reality. But try not to run a test just to convince someone else that their idea wasn’t a good one.

So I’d like to make my first plea. Take some time out for sensible analysis (and I don’t just mean taking for granted what you see on a flat sheet of analysis).

With today’s modelling technology you should (and I say should, because not everyone uses the tools as well as they might) be able to predict what’s going to happen with a test. I could go into the mighty and magic formulae which predict test results, but today I want to concentrate on common-sense ‘reality-check’ thinking.

Targeting is twice as important as the copy and creative combined.

I’d suggest that you start with an audit of what you’ve done and what you are currently doing, before you start thinking about what you want to do in the future (which is, of course, always more exciting).

Why do I say this? Because over the past umpty-ump years, I’ve seen so much money wasted on tests that will never deliver the right results.

I’ve been a judge at the major awards, both within the publishing arena and out there in the broader world of direct marketing. These days, many facets of a campaign’s planning and execution are considered by the judges, including the results. It’s when I start to get into the detail of how a test beat a control and how a subscriptions manager is going to roll out the test that, sometimes, I get cold chills running up and down my spine, and have visions, nay nightmares, of money going down the drain.

Be disciplined about recording at least these five variable elements: target audience, offer, timing, creative and copy.

Let’s look at some of the most common mistakes and how to correct them.

  1. No control communication.

    Whether it is a mailing pack, an email message, a page on a website, a radio ad or a space ad, you must have a control against which you measure future activity. Well, you have to start somewhere, so the first time you do anything, that’s the control, and trying to improve on its performance kicks off the testing programme. But you need to document what the control was, because three changes down the line you might not remember the clever decisions you made. Start to be disciplined about recording at least these five variable elements: target audience, offer, timing, creative and copy.

    And here is my second plea: design a template in advance that you complete religiously for every communication, with as many variables as possible noted in more detail than the three words that usually fit in the room allowed for the ‘description’. Make it searchable by every element, so that you can refer back to it and establish exactly what you were doing when a test succeeds or fails.
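
    By way of illustration only, here is a minimal sketch in Python of what such a searchable record might look like; the field names and example values are my own invention, not a standard template:

        from dataclasses import dataclass, asdict

        @dataclass
        class CommunicationRecord:
            """One row per communication: the five variables, recorded in detail."""
            campaign_id: str
            target_audience: str   # e.g. "lapsed subscribers, 6-12 months, UK only"
            offer: str             # e.g. "3 issues free, then 20% off"
            timing: str            # e.g. "mailed second week of January"
            creative: str          # e.g. "A5 pack, blue envelope, brochure enclosed"
            copy: str              # e.g. "letter v3, urgency-led headline"

        def search(records, **criteria):
            """Return every record whose named fields contain the given text."""
            return [r for r in records
                    if all(value.lower() in asdict(r)[field].lower()
                           for field, value in criteria.items())]

    Searching by any element is then trivial: search(records, offer="20%") pulls back every campaign whose offer mentioned that discount.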

  2. Too many variables.

    It is probably time, budget and downright impatience (or, dare I say it, lack of skill) that govern what is tested, but what matters is that you establish in advance how you are going to read that test. If you’ve changed the pack size, the letter and whether or not a brochure goes in, that’s three tests at least, and if the pack beats your control you’ll have no idea what worked and what didn’t. Test one thing at a time, or change everything, but keep the tests pure and measure against the trusted control.
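
    One way to keep yourself honest, sketched in Python (the cell descriptions are invented examples, not recommendations), is to describe every cell as a variation on the control and check that each one changes exactly one element:

        CONTROL = {"audience": "expiries 0-3 months", "offer": "20% off",
                   "timing": "week 1", "creative": "pack A", "copy": "letter v1"}

        test_cells = {
            "T1": {**CONTROL, "copy": "letter v2"},        # pure: copy only
            "T2": {**CONTROL, "offer": "3 issues free"},   # pure: offer only
            "T3": {**CONTROL, "copy": "letter v2",
                   "creative": "pack B"},                  # impure: two changes
        }

        for name, cell in test_cells.items():
            changed = [k for k in CONTROL if cell[k] != CONTROL[k]]
            if len(changed) == 1:
                print(f"{name} cleanly tests {changed[0]}")
            else:
                print(f"{name} is not a pure test: it changes {changed}")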

  3. Too small a sample size.

    This comes up a lot when I’m looking at awards entries. Most commonly, it’s the gross volume of a test that people think about when selecting a test cell. 1,000 pieces of direct mail, for instance, might sound like a reasonable test, but not when you are only expecting a 1% response. That means you’ll be basing decisions on 10 people’s responses, and that’s not statistically valid, because I bet there are multiple and unknown variables in there already. So, think about the number of responses that would be statistically valid in your circumstances and gross it up.
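
    To put numbers on that, here is a rough sketch in Python; the 200-response target and the +/- 0.2-point tolerance below are illustrative assumptions, not rules:

        import math

        def gross_up(expected_response_rate, target_responses):
            """How many pieces you must send to expect a given number of responses."""
            return math.ceil(target_responses / expected_response_rate)

        # 1,000 pieces at a 1% expected response is only ~10 responses.
        # To see, say, 200 responses at that same 1% rate:
        print(gross_up(0.01, 200))        # 20000 pieces

        def sample_size(p, margin, z=1.96):
            """Textbook sample size for estimating a proportion p to within
            +/- margin at 95% confidence (z = 1.96)."""
            return math.ceil(z * z * p * (1 - p) / (margin * margin))

        # To pin down a roughly 1% response rate to within +/- 0.2 points:
        print(sample_size(0.01, 0.002))   # 9508 pieces per cell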

  4. No validation of tests.

    If a test has beaten your control by 30%, that might lead you to make it a new control, but beware of those multiple influences that might have uplifted or even depressed that response: a nice sunny day when they read your communication, or the threat of flood water through their front door; a tax rebate landing on the mat, or a tax bill; or the mother-in-law coming to stay. Good and bad news can affect how a person reacts to your communication. My technique was always to validate a test in the next campaign, doubling or tripling the audience perhaps, but never to gamble that the test would work as well the second time around. The law of diminishing response will undoubtedly catch you out.
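
    One common way to sanity-check such an uplift before you trust it is a two-proportion z-test; a sketch in Python follows, with invented volumes:

        from math import sqrt

        def two_proportion_z(resp_control, n_control, resp_test, n_test):
            """z-statistic for the difference between two response rates."""
            p_c = resp_control / n_control
            p_t = resp_test / n_test
            pooled = (resp_control + resp_test) / (n_control + n_test)
            se = sqrt(pooled * (1 - pooled) * (1 / n_control + 1 / n_test))
            return (p_t - p_c) / se

        # Control: 200 responses from 10,000 (2.0%); test: 260 from 10,000
        # (2.6%, a 30% uplift on the control).
        z = two_proportion_z(200, 10_000, 260, 10_000)
        print(f"z = {z:.2f}")  # about 2.83; |z| > 1.96 suggests a real effect

    Even a significant result deserves the revalidation described above: no z-test can see the sunny day or the tax bill.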

  5. No roll-out potential.

    Let’s say a test you have completed has outperformed your control by more than 100% (yes, it does happen). You get all excited and tell your boss, and they start thinking that you’ve cracked it and that they can put their feet up and relax, until you tell them that you can’t find any more prospects or customers in the universe to roll out to. Say goodbye to that salary increase.

    You must have a control against which you measure future activity.

  6. No written briefs for production and execution.

    If you have ever picked up a job part way through, you’ll know what I mean. You have decisions to make, budgets to submit, and you really don’t know what has gone on before. How are you going to do your job without knowing what your predecessor intended? (Remember, try and be kind to your successor.)

    Subscription marketers make a hundred decisions a day, but not knowing a month later why you made them is a sin. It’s an even greater sin if you work with outside suppliers. Here’s another scenario. You didn’t write a concise brief, but what you wanted is contained in a string of emails to your usual account handler, who then leaves. The campaign works, so you want to repeat it and you ask the new account handler to do it again. Do what again? A string of email instructions full of nuances is not the right way for the new account handler to pick up the work, and there is so much room for error, which you may not discover until months down the line.

    So, what happens? You can’t repeat the test communication, roll out with it, change it, or bin it until you can pull together all the bits and pieces of information you need and trust what you find.

  7. Testing the wrong element.

    Much as I will stand up and fight for great creative and fabulous copy, I know that, added together, they will not make half the improvement that good targeting of quality data will.

    Years ago, a direct marketing friend wanted to prove this point and persuaded a client to do a test. (The offer and timing stayed the same.)

    Test 1 was a fabulous piece of creative and copywriting, but it was sent to an unrepresentative audience. Test 2 was a workmanlike piece, not brilliantly creative but not reputation-damaging either, that was sent to a truly targeted audience. Test 2 outperformed Test 1 by almost 100%.

    Then, when the great creative and copywriting that had been put together for Test 1 was combined with the clever targeting of Test 2, the response was uplifted again. I can’t recall by how much, but the important thing to remember is that the targeting is twice as important as the copy and creative combined, so changing a blue heading to a red heading is not going to hack it.

  8. Mixing up the audience.

    In subscription marketing, the renewals series is where I would want to put extensive testing time, energy and budget, as good efforts bring rich rewards, but oft-times it’s the hardest to control.

    Sounds simple, doesn’t it? We have a control and we have one test cell. There are to be 1,000 names in each and five notices in the series. The control audience has a response of 10% to the first notice, leaving 900 to receive renewal notice 2. The test audience delivers a 15% response to the first notice, which pushes 850 records into the second notice. Then, when the selection of the 1,750 records is made for the second notice, you can bet your bottom dollar that 900 will go into the control audience for renewal 2 and 850 will go into the test cell, but will they be the same people as received the control or test for renewal notice number 1? As soon as data is switched, the test becomes meaningless, usually wastes a fortune, and you have to start again. It’s even more horrible if the test was a 50% discount (which now seems to have been withdrawn) – customer service can easily get a bit nasty!
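
    The discipline that avoids this is to fix each subscriber’s cell once, at the start of the series, and to draw every later notice from the same cell, minus its responders. A sketch in Python, with invented volumes:

        import random

        def assign_cells(subscriber_ids, seed=2024):
            """Assign each subscriber to control or test once; cell
            membership never changes for later notices in the series."""
            rng = random.Random(seed)
            ids = list(subscriber_ids)
            rng.shuffle(ids)
            half = len(ids) // 2
            return {sid: ("control" if i < half else "test")
                    for i, sid in enumerate(ids)}

        def next_notice_audience(cells, cell_name, renewed):
            """Notice N audience: original cell members who have not yet renewed."""
            return [sid for sid, cell in cells.items()
                    if cell == cell_name and sid not in renewed]

        cells = assign_cells(range(2000))               # 1,000 names per cell
        renewed = set(random.sample(range(2000), 250))  # responders to notice 1
        control_2 = next_notice_audience(cells, "control", renewed)
        test_2 = next_notice_audience(cells, "test", renewed)
        # Everyone in control_2 received the control for notice 1; nobody switches.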

  9. Using data protection wording that seems more like a threat than an invitation to part with personal data.

    Now, here is my third plea of this piece: test the wording of your data protection statements, and test the layout of your coupon or registration screen. Opt-4 has been working with a number of publishers to redesign their coupons and to reposition and rewrite the data protection statements. We’ve got rid of the legal jargon and replaced it with customer-friendly statements, written with the same inspiration and effort that goes into the sales copy. The results show a significant increase in the number of permissions that individuals have given for the future use of their personal data.

    Alongside that, we’ve seen an increase in response (and yes, we did have a validated control and test matrix). Consumers are getting wise to the value of their data, so being transparent and honest, displaying best practice, and using the benefits of future offers to induce them to part with those precious personal details really works. A correctly gathered permission from a ‘warm’ individual has got to be worth at least twice as much as the cost of a rented name, hasn’t it?

  10. Well, there had to be a number 10, didn’t there?

    And all I want to say is: for heaven’s sake, get your control packages based on firm foundations, and TEST, TEST, TEST. Can you hear me?

    Design a template in advance that you complete religiously for every communication.