
Conversion Rate A/B Testing: A Practical Guide

Unless you test, you’re operating based on assumptions that may well be false.

But when you begin to test those assumptions, you start to operate based on real, scientifically proven data instead, and the impact on conversion rates (and overall growth) can be substantial.

So how do you start to A/B test your conversion rates?

That’s what we’re about to find out.

Unfortunately, approaching A/B testing in the wrong way can lead to some very costly mistakes. But follow this guide, and you’ll be on the right track.

First of all...

What Is A/B Testing?

You may be more familiar with the term “split testing”; it means the same thing.

Essentially, A/B testing means testing a particular aspect of your marketing to discover how your market actually responds rather than how you assume they’ll respond.

So you’ll have the original version of something—this is known as your control—and then you’ll set up a different version where you’ve changed a particular element.

You might for example be testing your conversion rates on a particular landing page, and want to know whether a proposed new headline outperforms your existing one, or not.

So you run an A/B test to find out. Whether or not the new version wins, you learn something new about your market, strengthening your future marketing efforts.

Once the test has finished (which you’ll find out how to determine below), if the new version “won”, that becomes your new control. And you start to test something new.

For best results, aim to be continually testing at least something—and watch as your conversion rates grow over time.

Where Can You Use Conversion Rate A/B Testing?

You can run conversion rate A/B tests on any aspect of your marketing where your goal is to get someone to take a specific action. In other words, to convert them in some way.

This might be on your website, on landing pages, in your advertising, in the emails you send out, in social media posts, and so on.

Effective testing isn’t necessarily confined to the exact moment when a visitor becomes a new lead or a prospect becomes a new customer. There can often be a number of steps leading up to that point—often referred to as a funnel—and each of those steps can be tested independently.

For example, for an ecommerce store, think about the number of steps that are often required for someone to put something into their basket and then go through the checkout process.

And thanks to the effect of compounding, a modest improvement in your conversion rate for each step can lead to a much larger overall conversion rate increase.

More on compounding below.
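
To make that concrete, here’s a minimal sketch in Python. All the funnel rates are made-up illustration numbers, not benchmarks:

```python
# A minimal sketch of how per-step gains compound across a funnel.
# Every rate here is a hypothetical illustration number.
steps = {
    "add_to_basket": 0.05,      # 5% of visitors add an item
    "start_checkout": 0.40,     # 40% of those start checkout
    "complete_purchase": 0.50,  # 50% of those complete the purchase
}

before = 1.0
after = 1.0
for rate in steps.values():
    before *= rate
    after *= rate * 1.20  # suppose testing lifts each step by a modest 20%

print(f"Overall conversion before: {before:.2%}")  # 1.00%
print(f"Overall conversion after:  {after:.2%}")   # 1.73%
print(f"Overall lift: {after / before - 1:.0%}")   # about 73%
```

Lifting each step by a modest 20% lifts the funnel as a whole by around 73%, because the gains multiply together rather than add.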

What Kind of Things Can You A/B Test?

The list of what you can test is virtually endless. Start by testing things that are likely to have the greatest impact on your conversion rates.

These include:

  • The headline (and any sub-headlines) on your page
  • Similarly, the subject line in an email
  • The layout of a web page
  • The positioning of key elements
  • The image used in advertising
  • The call to action, including the language used and the color of any button
  • The font, including the font family and size
  • The offer—what does someone get when they take the required action?
  • For sales offers, the price and any guarantee
  • The first paragraph or two of text on a page, email or in an ad

How Do You Start A/B Testing?

Start by deciding on a hypothesis you want to test.

A different word for hypothesis is assumption.

You might assume that changing that headline to a new one you’ve crafted will boost your conversion rate. But will it actually?

You might assume that changing your price will damage your conversion rate to the extent you become less profitable. But will it actually?

You might assume that putting an opt-in box to the right of your video rather than to the left will improve conversion rates. But will it actually?

These are all assumptions or hypotheses that need testing to check their validity, rather than relying on your gut, advice you’ve received, or past experience (that may have been in a different market or had a different source of traffic).

Got your hypothesis?

Great! Now it’s time to test it and see if you’re right or not.

(And by the way, if it turns out you’re not right, great, you’ve learned something new about your market. On the other hand, if your hypothesis was correct, great, you’ve just improved your conversion rate. Either way, you win!)

To test it, you need two different versions of your marketing “piece”—whether that’s a web page, landing page, email, or whatever.

Make just one change at a time to test, or you can end up with misleading results (more on this in the Common Mistakes section below).

Then you want to send a roughly equal amount of traffic to each version so you can compare the data.

For A/B testing on your website, you’ll usually do this using purpose-built software that makes the whole process easy and allows you to monitor the results of your test as it runs.
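
Under the hood, these tools typically split traffic with something like a deterministic hash, so the split stays roughly 50/50 and a returning visitor always sees the same version. Here’s a minimal sketch in Python (the visitor ID and experiment name are just placeholders):

```python
# A minimal sketch of deterministic traffic splitting: hash each visitor
# into a bucket so the split is roughly 50/50 and a returning visitor
# always sees the same version. IDs and names here are hypothetical.
import hashlib

def assign_version(visitor_id: str, experiment: str = "headline-test") -> str:
    """Deterministically assign a visitor to version 'A' or 'B'."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # map the hash onto 0-99
    return "A" if bucket < 50 else "B"

# The same visitor always gets the same answer on repeat visits:
print(assign_version("visitor-123"))
print(assign_version("visitor-123"))
```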

Click here for different options you can use—the first two can be used with any website you control, the others are specifically for sites built on WordPress.

When you’re testing online ads such as on Facebook or Google, testing functionality is usually an integral part of the relevant platform.

Some email campaign services also have some level of testing functionality built in—here’s a list of several of them.

For other types of tests where you can’t split the traffic between different versions within the same time frame—such as if you’re testing organic social media posts—run the different versions at different times, and aim to keep all the other variables the same as much as you can.

For example, post at the same time on the same day of the week, but in a different week.

What Kind of Results Can You Achieve?

In terms of improving your conversion rates, A/B testing is one of the highest-value activities you can do.

It’s not unheard of—in fact, it’s relatively common—for conversion rates to at least double as a result of A/B testing.

Let’s imagine you’re trying to improve your sales conversion rate. With the same amount of traffic coming to a sales page, you go from converting 2 out of every 100 visitors to 4 out of every 100.

Nothing else has changed—but you’ve just doubled your revenue.

Now let’s imagine you’re also testing the source of that traffic. Say you run some tests on your advertising and manage to double the click-through rate (i.e. the conversion rate of the ad itself).

So now you’ve not only doubled the conversion rate on the page, but you’re also sending twice the amount of traffic. That means you’ve now quadrupled your revenue.
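
If you’d like to sanity-check that arithmetic, here’s a quick sketch in Python with hypothetical numbers:

```python
# A quick check of the arithmetic above. All figures are hypothetical.
impressions = 50_000     # people who see the ad
ad_ctr = 0.02            # 2% click through to the sales page
page_rate = 0.02         # 2 out of every 100 visitors buy
revenue_per_sale = 50    # dollars (made-up figure)

before = impressions * ad_ctr * page_rate * revenue_per_sale
# Double the ad's click-through rate AND the page's conversion rate:
after = impressions * (ad_ctr * 2) * (page_rate * 2) * revenue_per_sale

print(f"Revenue before: ${before:,.0f}")  # $1,000
print(f"Revenue after:  ${after:,.0f}")   # $4,000 (four times as much)
```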

This is why conversion rate A/B testing is such a powerful tool.

But how realistic is it to double your conversion rates?

Sometimes a single test can double (or more) your conversion rate.

But, more commonly, it takes a series of tests to achieve it.

And this is where the value of compounding comes into play.

This works in a similar way to compounding interest on a savings account.

In fact, to double your conversion rate, all you need is a series of three tests where one of them gives you a 20% lift, and two of them give you a 30% lift.

Of course, that’s not the only combination of test results that will give you a 100% increase overall. But it’s a helpful illustration of how compounding works in A/B testing.
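
Here’s that math as a quick Python check:

```python
# A quick check of the compounding math: lifts multiply rather than add,
# much like compound interest on a savings account.
lifts = [0.20, 0.30, 0.30]  # one 20% lift and two 30% lifts

multiplier = 1.0
for lift in lifts:
    multiplier *= 1 + lift

print(f"Overall multiplier: {multiplier:.3f}")  # 1.2 * 1.3 * 1.3 = 2.028
print(f"Overall lift: {multiplier - 1:.1%}")    # about 103%, just over double
```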

How Do You Know When an A/B Test Is Conclusive?

An A/B test should only be ended once the results are conclusive.

For newcomers to A/B testing, it can be easy to give in to temptation and believe that the early success of one version, perhaps with results well ahead of the other, means the test should be ended early and the “winner” chosen as the new control.

Those more experienced recognize that those early results can quickly turn around.

Over time the “underdog” can begin to catch up and even overtake the early leader, ending with a clear result the other way.

In fact, it’s an all-too-common experience.

It means that the consequences of ending the test too early can be very damaging.

If the result is not scientifically conclusive, by choosing the wrong “winner” you are potentially decreasing your conversion rates and hurting your bottom line.

So how do you know when a result is scientifically conclusive?

It’s all about reaching what’s known as statistical significance, or a statistically significant result.

Once reached, it’s then safe to end the test and pick the winning version as the new control.

In other words, the chances of the result changing and the “winner” and “loser” switching places are remote.

The phrase “statistical significance” can sound a little intimidating, but the good news is that it’s rarely if ever something you have to work out yourself.

Most of the tools that help with A/B testing do it all for you, and will let you know as soon as a “winner” has been confidently found.

The main point to remember is not to pick a winner yourself based on early, apparently promising results; they’re likely to be illusory. Instead, wait until the tool you’re using has declared a result, one way or the other.
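
If you’re curious what those tools are doing behind the scenes, one common approach is a two-proportion z-test. Here’s a minimal sketch in Python using the statsmodels library, with made-up visitor and conversion counts:

```python
# A minimal sketch of a significance check via a two-proportion z-test.
# The visitor and conversion counts are made-up illustration numbers.
from statsmodels.stats.proportion import proportions_ztest

conversions = [40, 62]    # conversions for version A and version B
visitors = [2000, 2000]   # visitors who saw each version

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

# A common convention: treat the result as conclusive once p < 0.05,
# i.e. under a 5% chance the observed difference is just random noise.
if p_value < 0.05:
    print(f"Conclusive (p = {p_value:.4f}): pick the winner as the new control.")
else:
    print(f"Not conclusive yet (p = {p_value:.4f}): keep the test running.")
```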

What Are Some Common Mistakes With A/B Testing?

1. Testing with Insufficient Traffic

To test successfully, you need enough traffic seeing your different versions to enable you to reach a statistically significant result within a reasonable time frame.

If you have too little traffic, there simply won’t be enough data to be able to draw a meaningful (i.e. scientific) conclusion.
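
Before starting a test, you can get a rough idea of how much traffic you’ll need. Here’s a minimal sketch in Python using statsmodels’ power calculations, with hypothetical conversion rates (a 2% baseline and a hoped-for lift to 2.5%):

```python
# A minimal sketch of estimating how many visitors a test needs.
# The conversion rates are hypothetical illustration numbers.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.02, 0.025)  # baseline vs. target rate

n = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,  # accept a 5% false-positive risk
    power=0.8,   # an 80% chance of detecting a real lift
)
print(f"Roughly {n:,.0f} visitors needed per version.")
```

With small rates like these, the answer typically runs into many thousands of visitors per version, which is why low-traffic pages struggle to reach a conclusive result in a reasonable time.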

2. Testing More Than Two Versions at a Time

This is similar in nature to the above.

If you’re testing more than two versions of something, reaching a statistically significant result is going to take a lot longer, simply because you need a lot more data.

And while you’re waiting for a result to eventually come through, you could have run some other tests in the meantime and be benefiting from the results of those.

So unless you have such a high level of traffic that it justifies running more than a two-version test, stick to A vs B.

3. Stopping Before a Statistically Significant Result Has Been Reached

We covered this in the section above, but it’s worth repeating.

Don’t assume an early result that looks promising is conclusive. It most likely isn’t, and until it reaches statistical significance, it’s not a scientifically proven result.

Hold your nerve, and wait for your tool of choice to declare a proper winner that you can have confidence in.

4. Assuming the Result Will Hold With a Different Traffic Source

The result of a conversion rate A/B test is only valid based on the variables in play at the time of the test.

One of those variables is where the visitors to the page come from.

So if you tested two headlines, the winning headline may in fact be different based on the traffic source.

For example, visitors from Facebook and visitors from LinkedIn can behave and react to elements on your page quite differently.

The same can apply across different ads, even when the only difference between them is the wording. Each version may attract a different type of visitor with different motivations, who needs to be treated accordingly.

This is why it’s best to have separate landing pages for each traffic source, and test them independently of each other to maximize conversions from each one.

5. Testing More Than One Thing at a Time

It’s often tempting to try to test more than one change at a time. For example, testing a new headline on a page alongside a different color button.

But doing so can seriously undermine the impact your test could otherwise have on your conversion rates, and give you some very misleading results.

If, say, your new headline lifted conversions by 20% but the different color button depressed them by 30%, the combined effect is roughly 1.2 × 0.7 ≈ 0.84, a net 16% conversion rate decrease.

And you completely miss the impact that testing the new headline on its own could have had on your conversion rate.

To Conclude

When looking to improve your conversion rates, A/B testing is one of the highest value, most reliable activities you can undertake. Yet surprisingly, few businesses do much if any testing, preferring to rely on their own (often inaccurate) assumptions.

The good news is that, as we’ve seen, A/B testing is easy to start doing. And for improving conversion rates, the sooner you start to collect data on what your market best responds to, the better.


Steve Shaw is the founder of EverywhereMarketer, and has run online businesses for over 20 years, serving over 13,000 customers in 137 countries. EverywhereMarketer helps you grow online visibility, attract more customers and grow your business across multiple channels.
