How To Avoid Messing Up Your Email A/B Testing
Oct 8, 2019
Testing only works if you know exactly what you’re looking for.
“Take a risk and keep testing, because what works today won’t work tomorrow, but what worked yesterday may work again.” – Amrita Sahasrabudhe
Here are our top split testing tips on how to design and tweak your A/B testing to generate meaningful, actionable results.
In this article, we’ll cover the following top split testing tips for your email campaigns:
- Developing a useful hypothesis
- Isolating variables
- Segmenting traffic
- Getting a big enough sample size
1. Be Clear on What You’re Testing (and Why)
Before you start a split test, you need to establish a hypothesis. In other words, you need to pinpoint what you think the problem is before you test it.
Let’s say you want to A/B test your email subject lines. You need to have an inkling of what you think does or doesn’t work in order to construct your test.
For example, you might have read that email subject lines with emojis get 29% more opens than those without, and you want to test whether that holds for your campaigns. This gives you a clear variable to measure in your split testing. It means you can interpret the results and use your findings to improve future emails.
Compare that to simply testing two subject line variations that you think sound good, without paying attention to what in particular makes them different or hypothesizing how that difference will affect open rates. Sure, your split test will tell you which one was more popular, but you won’t know why. You won’t learn anything useful or replicable for the future.
2. Test One Variable at a Time
You could decide to A/B test any element of your emails: the length of the subject line, the preheader text, the positioning and colour of the CTA button, the layout of the email body, the inclusion of GIFs, or any other factor.
The important thing is that you choose one of these at a time for any given test. Otherwise, how will you figure out which change made the difference? Your final stats will be meaningless.
3. Be Smart About Segmenting Your Traffic
You need to track a single variable at a time, but be aware that other factors beyond your control will affect your results. For example, people might behave differently depending on the device or browser they open your email on, the day or time of day, their geolocation, or whether they’re new or returning visitors.
Ideally, you would only ever compare like for like. So, if you are A/B testing two versions of a CTA button in an email, you would also exclusively compare results within the same segment: for example, the activity of people who opened the email on the same kind of device, on the same day of the week, and so on.
In reality, though, this is impossible. If you segment your results so drastically that you only compare the behaviours of Canada-based women in their 40s who opened the email in Safari on their iPad on a Wednesday afternoon, your sample size will be too small to yield useful insights and you’ll discount valuable user analytics from other segments.
How to deal with this problem? The answer is simple: be realistic.
Choose one or two factors that you believe have the most significant impact on this variable and segment using those. For instance, in the CTA example, whether the person opens the email on desktop or mobile is probably more relevant than the browser they used to do so. Comparing the results of your split testing within these two broad categories will give you meaningful insights, without getting so bogged down in the details that you can’t see the wood for the trees.
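The approach above can be sketched in a few lines of code. This is a minimal illustration, assuming a simple send log where the field names (`variant`, `device`, `clicked`) are hypothetical: it groups results by one broad segment (device type) and reports each variant’s click rate within that segment, rather than across the whole list.

```python
from collections import defaultdict

# Hypothetical send log: one record per recipient (field names are assumptions).
sends = [
    {"variant": "A", "device": "mobile", "clicked": True},
    {"variant": "A", "device": "desktop", "clicked": False},
    {"variant": "B", "device": "mobile", "clicked": False},
    {"variant": "B", "device": "desktop", "clicked": True},
    # ...the rest of the campaign's records
]

# Tally clicks and sends per (segment, variant) pair so variants are
# only ever compared within the same broad segment.
totals = defaultdict(lambda: [0, 0])  # (device, variant) -> [clicks, sends]
for s in sends:
    key = (s["device"], s["variant"])
    totals[key][0] += s["clicked"]
    totals[key][1] += 1

for (device, variant), (clicks, n) in sorted(totals.items()):
    print(f"{device:8} {variant}: {clicks}/{n} clicked ({clicks / n:.0%})")
```

With real campaign data you would swap in whichever one or two segmentation factors you judged most relevant, then read off the variant comparison separately within each group.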
4. Get a Big Enough Sample Size
While we’re on the subject, it’s vital that your overall sample size is big enough to return meaningful results. Many people aren’t sure exactly how to work this out, especially if they’re not confident statisticians. Free tools like this one are really helpful for calculating the sample size you need to be confident in your results.
Simply enter the baseline conversion rate, which is the current or expected conversion rate you’re working with for that variable.
Then, enter the minimum change in conversion rate that you want to detect. For example, if you are only interested in tracking increases of 10% or more, enter that figure.
Finally, if you want to, you can adjust the “statistical significance” figure at the bottom. 95% is the industry standard, but if you want more confidence in the results, raise it higher. If you’re willing to accept a lesser degree of certainty, you can bring it down. You may need to do this if you’re working with a smaller potential sample size.
The figure at the bottom will now tell you how many conversions you’ll need to track in order to get reliable results. Is it higher than expected? Don’t curtail your A/B testing until you get there.
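For readers curious about what such calculators do under the hood, here is a rough, standard-library-only sketch of the usual two-proportion sample-size formula. The function name and defaults (80% power, a relative lift) are our own assumptions; the calculator you actually use may apply slightly different conventions, so treat this as illustrative rather than a drop-in replacement.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, min_effect, significance=0.95, power=0.80):
    """Approximate recipients needed per variant for a two-proportion test.

    baseline:   current conversion rate, e.g. 0.20 for 20%
    min_effect: minimum relative lift to detect, e.g. 0.10 for a 10% lift
    (Hypothetical helper; an illustration of the standard formula, not
    the exact method any particular calculator uses.)
    """
    p1 = baseline
    p2 = baseline * (1 + min_effect)  # conversion rate after the minimum lift
    z_alpha = NormalDist().inv_cdf(1 - (1 - significance) / 2)  # two-sided
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / (p2 - p1) ** 2
    return math.ceil(n)

# A 20% baseline open rate, detecting a 10% relative lift at 95% significance,
# needs several thousand recipients per variant.
print(sample_size_per_variant(0.20, 0.10))
```

Note how the required sample size shrinks as the minimum effect you care about grows, which is why chasing tiny lifts demands very large lists.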
Split testing is complicated enough without having to stitch together multiple technologies to manage your experiments. Make life easier for yourself by opting for an email marketing platform with A/B testing features built in. Then, as useful as our top split testing tips may be, you might not even need them.
Ready to get started with your first A/B test campaign?
This article was published on 3 October by EmailOut and can be found here.
Open your Unlimited Sends one-month free trial today – after your first month with us you can switch to our FreeForever account giving you 12,500 sends to 2,500 contacts each and every month for free, forever. Corporate email marketing? Contact us.