Essentially a method for user experience research, A/B testing is steadily evolving and finding its place in marketers' toolkits. Design, code, and copy are things you create from scratch, but how do you keep improving them? Run A/B tests. With A/B testing, you learn more about user experience, design, color preferences, the content that resonates with your visitors, page layouts, and a lot more.

What is A/B testing?  

A/B testing is the technique of comparing two different versions (A and B) of the same webpage to determine which one performs better. The versions are shown randomly to different users over the same timeframe. An A/B test is not necessarily limited to two variations: you can also create multiple variations, and these experiments are called A/B/n tests.

By splitting the traffic between the two versions in the A/B testing process, you can analyze how your visitors interact and engage with these versions. This lets you determine the version that records higher visitor engagement or conversions. 

Applying the winning version optimizes the webpage for better performance. In simple terms, you can create different versions of your landing pages and compare elements like headings, banners, and CTAs to understand what your audience prefers and deliver a better website experience.

Components of A/B Testing 

When it comes to A/B testing, it’s important to know the basic terminology. When you run an A/B test, there are three components involved.

The Control: The control is the current page that you are looking to replace. 

The Challenger: The challenger is a different version of the control with all the changes you want to test. The challenger is tested against the control to determine if the changes impact the engagement or conversions.

The Winner: The winner is the version that records better conversions and engagement during your test. If your challenger, version B, records more conversions than the control, the challenger is the winner, and you can update your control to include the changes from the winning variant.

If your challenger, version B, fails to yield better conversions or engagement than the control, then the control is the winner.
 

Benefits of A/B testing

As a marketer, you want to make the most of every user interaction. When it comes to conversion rate optimization (CRO), a lot of things don't go as planned because your marketing funnel is broken at different points. You face form drop-offs, cart abandonment, high bounce rates, and so on. These leakages cost you quality traffic, lower your conversion rates, and, ultimately, cut into revenue.

Fixing these gaps is not a single-shot exercise. You need to understand user behavior to know why visitors aren't taking the desired action, and your changes cannot be based on gut feeling or assumptions. You need data-backed insights to justify the changes you’re willing to implement on an existing system. An analytics tool like Google Analytics gives you only limited information; the data from running these experiments forms the basis of your improvement cycle.


Here are some benefits of A/B testing:

Reduce your website bounce rate

The bounce rate is a good indicator of your website's performance since it measures the share of visitors who leave after viewing a single page. A high bounce rate could be due to unclear CTAs, too much or too little information on the page, content that isn't aligned with the visitor's persona, and so on. With A/B testing, you can make iterative changes to elements on your webpage, test them to find the better-performing variant, and keep visitors on the page for longer.

Validate your website changes easily. 

You'll never know how well a website redesign or change will land until your audience sees it. With real user data at hand, you can make website changes in a controlled manner. A/B testing shows you the impact of changes before you roll them out to your entire audience.

Improve website user experience. 

Every page on your website exists to serve a purpose, and if it fails to do so, it makes for a bad user experience. Think of all those times you visited a website to find something or take an action, and it didn't let you do that easily. Every such interaction takes your visitor away from meaningful engagement or conversion. With A/B tests, you can drive improvements in your user's journey: start by setting hypotheses for changes and validate them with data from A/B testing.

Convert more of your recurring visitors. 

As a marketer, you know the effort and time that goes into getting consistent, quality traffic. With A/B testing, you can maximize the ROI from your existing traffic by making small incremental changes over time.

Drive continuous website improvement. 

A/B testing allows you to think in terms of regular, small, incremental changes, whether it's a change in headline text, new CTA copy, or a button placement change. The micro gains from every such iteration add up to more significant improvements.

How to run an A/B test? 

Just like any other marketing exercise, A/B testing requires a strategy. Having a systematic framework in place makes testing manageable and efficient. Here’s how you can go about it:

Do your research.

Before you start, look into the current state of your website. Use website analytics to learn the current number of visitors, most visited pages, bounce rate, conversion rates, and so on. On the website front, check what an average visitor session looks like. The easiest way to do this is with heatmaps, which show you where visitors spend the most time, their scrolling behavior, their interaction with other page elements, clicks, and more. You can also use tools like session replay to see how a visitor interacts with your page. All this data helps you make better decisions.

Formulate a hypothesis.

The data you gathered will help you identify the problem(s) and leaks on your website. For instance, say one of your pages has a lower-than-average conversion rate. You look into the data to speculate on the reason and suspect that the value proposition is not compelling enough. Although purely conjecture at this point, this becomes the hypothesis for your test, and the A/B test results will either prove or disprove it.

Set a goal (baseline conversion rate).

Choose the primary metric that will determine the success of your hypothesis. The baseline conversion rate is the current conversion rate of the page you’re testing, and the end goal of your experiments is to increase this value.

You should also set a minimum detectable effect: the smallest relative improvement you want to be able to detect in your winning variation. Set this before you start your experiments so you can estimate how long your tests should run and how much traffic you might have to allocate.
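To make this concrete, here is a minimal sketch in Python of how a baseline conversion rate, minimum detectable effect, and daily traffic translate into a rough sample size and test duration. The numbers (4% baseline, 10% relative lift, 2,000 visitors a day) are hypothetical, the formula is the standard two-proportion sample size approximation, and most A/B testing tools perform this calculation for you.

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(baseline_rate, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation for a two-proportion test.

    baseline_rate: current conversion rate of the control page (e.g. 0.04)
    mde: minimum detectable effect, relative (e.g. 0.10 for a 10% lift)
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde)                   # rate you hope to detect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)    # two-sided significance
    z_beta = NormalDist().inv_cdf(power)             # statistical power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Hypothetical inputs: 4% baseline conversion, 10% relative lift, 2,000 visitors/day.
per_variation = required_sample_size(0.04, 0.10)
daily_traffic = 2000
print(f"Visitors needed per variation: {per_variation}")
print(f"Estimated test duration: {ceil(2 * per_variation / daily_traffic)} days")
```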

Create a challenger to challenge the control page.

Once you have defined the hypothesis, you'd know what you want to change on the current webpage and the outcomes you expect. Use this information to set up an alternate version of the webpage. This version is called the challenger. The challenger will compete against the original version of your page, called the control.

Split your audience.

A/B testing requires splitting your traffic between the control and challenger variations. A random, even 50:50 split ensures that neither version's results are skewed by how visitors were assigned. You are now prepared to run the test.
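As an illustration of how a 50:50 split can be done, here is a minimal sketch that hashes a stable visitor identifier so every visitor is consistently assigned to the control or the challenger. The experiment name and visitor IDs are hypothetical; in practice, your A/B testing tool handles this assignment for you.

```python
import hashlib

def assign_variation(visitor_id: str, experiment: str = "homepage-headline") -> str:
    """Deterministically bucket a visitor so they always see the same version."""
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100                       # a number from 0 to 99
    return "control" if bucket < 50 else "challenger"    # roughly 50:50 split

# Hypothetical visitor IDs (e.g. values from a first-party cookie)
for vid in ["a1b2c3", "d4e5f6", "g7h8i9"]:
    print(vid, "->", assign_variation(vid))
```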

Run the test.

Now that you’ve set everything up, start the test. Use an A/B testing tool to split your audience and schedule your test.

Monitor the data.

Keep a close eye on the test after setting things in motion. If you have a clear winner, you can end the test; if you don't yet have enough data to come to a conclusion, keep it going. A winner cannot be declared based on your biases: the decision should be driven by the statistical significance achieved, the amount of data collected, and the time elapsed.

End the test.

Before you end a test, ensure you've reached statistical significance, typically at a 95% confidence level. This means there is at most a 5% chance that the observed difference is due to random variation rather than the change you made. You should ideally run a test for two to four weeks, but the exact duration depends on your traffic.
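For illustration, here is a minimal sketch of the kind of calculation behind that 95% threshold: a two-proportion z-test on hypothetical results. Your A/B testing tool will normally report significance for you.

```python
from math import sqrt
from statistics import NormalDist

def significance(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """Two-proportion z-test: is the challenger's lift statistically significant?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value
    return p_a, p_b, p_value, p_value < (1 - confidence)

# Hypothetical results: control 400/10,000 conversions, challenger 480/10,000.
p_a, p_b, p_value, significant = significance(400, 10_000, 480, 10_000)
print(f"Control: {p_a:.2%}, Challenger: {p_b:.2%}, p-value: {p_value:.4f}")
print("Statistically significant at 95%?", significant)
```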

Analyze the results and see who’s the winner.

Now you have performance data for both variations. See which variation performed better, and try to understand why: the answers help you understand your visitors and their behavior a little better. You can then choose to deploy the challenger variation or stick with the control page.

A/B testing is a continuous process. One test will not answer all your questions, so keep looking for ways to improve visitor behavior and keep running more tests. Conversion rate optimization tools let you run several types of tests, such as split URL, multivariate, and multi-page tests, which might sound similar but are not.

Types of A/B tests

Having so many testing options available may add to your confusion, so let’s look at what each of these is and what it does:

Split URL testing

With split URL testing, you compare different versions of the same page hosted on different URLs. Incoming traffic gets split between the two versions and the performance of each is tracked to determine the winner. As opposed to basic A/B testing, where you experiment with small elements or minor front-end changes, split URL testing is used when the versions differ significantly in design or code.

Multivariate testing

A multivariate test is an extension of the typical A/B test. While a standard A/B test lets you test one element or variable at a time, a multivariate test lets you test more than one element on a web page at once. Where basic A/B testing might compare two versions of a CTA button copy, a multivariate test lets you test different combinations of headlines, subheadings, and CTAs. It’s important to note that multivariate testing works well only for websites with reasonably high traffic.
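As a quick illustration of why multivariate tests demand more traffic, here is a small sketch using hypothetical headlines, subheadings, and CTA labels: your traffic gets divided across every combination, and the number of combinations grows quickly.

```python
from itertools import product

# Hypothetical page elements under test
headlines = ["Grow faster", "Convert more visitors"]
subheadings = ["Built for marketers", "No code required", "Free 21-day trial"]
cta_labels = ["Start free trial", "Get a demo"]

combinations = list(product(headlines, subheadings, cta_labels))
print(f"Variations to test: {len(combinations)}")   # 2 x 3 x 2 = 12
for headline, subheading, cta in combinations[:3]:
    print(headline, "|", subheading, "|", cta)
```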

Multi-page testing

Multi-page testing, also known as funnel testing, is another form of A/B testing where instead of making changes to a single page, the changes are implemented on a sequence of pages. Basically, you can test elements across various pages and see the impact of these changes across the buyers’ journey on your website. Multi-page testing is commonly used to test different content tonalities, design theories, or sales and support strategies.

What should you A/B test? 

A/B testing opens up a lot of optimization possibilities for marketers. There's more to it than experimenting with button colors: you can test anything on your website that can affect visitor behavior. Here are some commonly tested elements known to drive results.

Headlines, subheadings, and text

If you're deciding on page headlines and subheadlines without putting together four or five other options, you're not doing it right. Headlines greet your visitors as they land on the page and go a long way in deciding how well they'll engage and convert. Eight out of ten visitors read just the headline before deciding whether they're interested in reading further. Your page copy also impacts conversion, and you can put various aspects to the test here: long vs. short copy, a casual vs. formal tone, and so on.

Call-to-action buttons

Your call-to-action could make or break a deal: it clearly defines the purpose of the web page. Even in emails, a single CTA has been shown to increase clicks by 371%. CTAs are the starting point of conversion, so they should be compelling enough for the visitor to take action. Factors like the copy, button style, color, and placement on the page all have an impact. You have a lot to test, and a lot more to gain from it.

Page layouts

Should a particular section be above or below on the page? Should the image be followed by the CTA or the other way round? Page layout ideas are always up for debate. A/B testing different layouts can help you understand what works better and optimize conversions.

Images

For a mobile app's landing page, does a masthead image showing only devices work better, or one showing someone using a device? Should you use a picture with a white background or a dark one? Images are known to improve on-page engagement, and testing which kinds of images lead to better engagement makes them work even harder.

Content depth

How long is your page? Do your visitors prefer in-depth long-form content or enjoy snippets of information? Test your content depth to understand how far your visitors willingly scroll.

Web forms

You move towards conversion once a visitor leaves their information on your web form. You can create different variations of your forms to decide how many fields they should have, and run form analysis too.

The sky's the limit with A/B testing. You can experiment with pretty much any element on the page that might nudge your visitors towards higher engagement or conversion. Now that you know which elements you can experiment with, you should also learn about the challenges that come with it.

Challenges of A/B testing

Making A/B testing work comes with its own set of challenges. It is still a relatively new and evolving domain, so it's common for marketers to find themselves stuck at some point. Take note of some commonly faced challenges and learn how to overcome them.

Not knowing what to test.

Getting started with A/B testing can be difficult. Without much knowledge of A/B tests or how to begin, it's normal to worry about spending weeks testing something that might turn out to be insignificant. Look closely at existing trends in your visitor data, go beyond conventional analytics, and make an educated call on what could drive the most impact and whether you should test it.

Setting the hypothesis.

Getting the hypothesis right could take some extra effort on your end. Research is key in A/B testing, and it's where you should focus. Dive deep into your website data to understand funnel leakages and why they're happening, derive the hypothesis from this research, and get testing.

Determining the sample size.

When starting out with A/B testing, you may struggle with the lack of guiding principles for selecting a sample size for your tests. You need a basic understanding of statistics, your data, and how sample size relates to your existing traffic. Read up on selecting sample sizes and start experimenting; it will take a few learning cycles to start getting it right.

The need for interdisciplinary knowledge.

Running A/B tests requires basic cross-functional knowledge, if not expertise. You have to deal with problems in design, UX, IT, etc. A/B testers and website optimizers often understand design, UX/UI, data, visitor personas, and a lot more.

Getting a buy-in from senior management.

Since A/B testing involves speculation and hypotheses, you might have to make a case for the elements you want to experiment with. Leadership will have questions about the effort required versus the returns to expect. A lot of success depends on embracing a culture of testing: A/B testing is about incremental, micro gains, and testing should be encouraged. Get your data in place and show that your tests carry minimal downside to convince management.

Common mistakes while A/B testing

A/B testing is like running a controlled experiment leading to statistical results. If you don't follow the right procedure from start to end, you will end up with inconclusive or incorrect findings. Here are a few mistakes to avoid during A/B testing.

Not selecting the right hypothesis.

A lot of times, marketers get so carried away by how A/B testing can improve their funnels that they jump into running a test without researching a hypothesis, or end up testing something on a whim. You need to understand your website's problems, pick one, identify possible reasons, and run a test based on that. Your hypothesis should make sense as a solution to the issue you're trying to fix, both in terms of impact and practicality. A wrong hypothesis will leave you with incorrect findings or the late realization that the test didn't matter at all.

Running multiple tests at the same time.

It's wrong to assume that you can scale up your testing by running multiple A/B tests at the same time. Traffic can overlap between tests, and the tests can interact with each other, which skews the results and makes it hard to attribute the impact of the changes. Use multivariate testing or multi-page experiments if you want to run parallel tests.

Ending the tests too early.

If you feel like it's time to stop a test and declare a winner because "you can see a clear winner," ask yourself why you set out to test in the first place. Ending a test before it reaches statistical significance means deciding on incomplete data. Do not rush or give in to convenience. Let the truth prevail.

Not accounting for seasonalities and external factors.

Run your tests for at least one full week. This evens out any unusual trends your traffic follows on certain days of the week; even better if you can run it for a couple of weeks more. Avoid running A/B tests during periods when your traffic diverges from its usual behavior. For example, running a test over Cyber Monday is a bad idea if you have an eCommerce website because of the seasonal surge in traffic. Similarly, if you're running campaigns that drive traffic to your website, you're better off keeping your tests paused.

Running A/B tests for pages with low traffic.

There will be instances where running an A/B test will not make sense, and marketers must learn to accept that and spend their time elsewhere. Think of a web page with about 500 visitors a month and only a couple of conversions: running an A/B test on it could take months to reach statistical significance, and even if you get a clear winner in the end, you've spent a lot of time getting to that conclusion.

Not selecting the right tool.

Using the right platform to run A/B tests is an essential ingredient for succeeding at it. An ideal A/B testing tool should not slow down your website, should have the features you need, should complement your conversion experiments, and should offer integrations with other platforms.

Giving up too early.

You might hope for a streak of beginner's luck, but A/B tests usually don't work that way. Don't worry; it's not just you. Your first few A/B tests might fail or barely offer anything exciting. Staying iterative is the key: keep setting up follow-up tests that incorporate learnings from the previous ones. Done right, the results will come sooner or later.

Considering that you're going to run these tests on pages with high traffic, another mistake would be to forget about search engine optimization. The reason you have traffic coming to your pages is months of SEO effort, and a poorly set-up A/B test can undo that work and reduce your traffic.

A/B testing without impacting SEO

Google encourages marketers to run A/B tests on their websites. When asked how A/B testing can affect a site's performance in search results, Google answered with a blog post: “We’ve gotten several questions recently about whether website testing—such as A/B or multivariate testing—affects a site’s performance in search results. We’re glad you’re asking, because we’re glad you’re testing! A/B and multivariate testing are great ways of making sure that what you’re offering really appeals to your users.”

In the same post, Google lists some best practices marketers can follow to run A/B tests without impacting their search ranking:

Ensure there’s no cloaking.

Cloaking means showing a different version of your page to the Googlebot than what you show your visitors. It’s a way of abusing the traffic segmentation feature and is against Google’s webmaster guidelines. This can lead to your website being demoted or even removed from the Google search results.

Use rel=”canonical” links.

If you’re running tests with different URLs for variants of the same page, use rel=”canonical” on all the alternate URLs. This tells Googlebot that the original URL is the preferred one to index and that the variations aren't duplicate content, which could otherwise hurt your rankings and traffic.

Use 302 redirects, instead of 301.

If your A/B test redirects visitors from the original URL to a variation URL, you need to signal to Google that this is only a temporary arrangement. You can do this with a 302 redirect, which indicates a temporary redirection. Don’t confuse it with a 301, which indicates a permanent redirection and should not be used here.
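As a rough sketch of how the last two practices might look together (the routes, URLs, and framework choice are illustrative, not something Google or any particular tool prescribes), the example below uses Flask: the variant page declares the original URL as canonical, and visitors sent to the variant get a temporary 302 redirect.

```python
# A minimal sketch with Flask and hypothetical routes. In Flask, redirect()
# defaults to a 302 (temporary) status, which is what Google recommends here.
import random
from flask import Flask, redirect, render_template_string

app = Flask(__name__)

VARIANT_PAGE = """
<html>
  <head>
    <!-- Tell Googlebot the original URL is the preferred one to index -->
    <link rel="canonical" href="https://www.example.com/pricing/">
  </head>
  <body><h1>Pricing (variant B)</h1></body>
</html>
"""

@app.route("/pricing/")
def pricing():
    # Send roughly half the visitors to the variant with a temporary redirect
    if random.random() < 0.5:
        return redirect("/pricing-b/", code=302)   # 302, not 301
    return "<h1>Pricing (control)</h1>"

@app.route("/pricing-b/")
def pricing_variant():
    return render_template_string(VARIANT_PAGE)
```

A real test would also persist the assignment (for example, in a cookie) so a returning visitor always sees the same version.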

Run the test only as long as necessary.

If a test runs for an unnecessarily long time, Googlebot might interpret it as an attempt to deceive the search engine and, being a bot after all, might end up taking action against your site. Ensure you set each test up for a pre-defined duration and split the audience equally.

Once you have finished your testing, make sure you update your website and pages, and remove all the test variations and redirects.

Ready to start testing?

Now that you have all the information, choose a good A/B testing or conversion optimization tool like Freshmarketer and get started. Test multiple elements, learn more about design, colors, and your visitors, monitor visitor interactions through heatmaps, and deliver a fantastic website experience.