Split Testing vs. A/B Testing: When to Use What

You’ve probably heard the popular saying that “comparison is the thief of joy.” But when it comes to different versions of your website, comparison isn’t necessarily a bad thing.

When you test different variables on your website, your emails, or your social media posts, you can uncover which ones perform best. Most advice about testing suggests testing as many variables as you can as often as you can to make sure that the final version you’re offering up to customers is optimized to the max.

But there are several different types of tests out there, and it’s not always black and white when it comes to understanding which tests to run or when to run them. Take split testing and A/B testing, for example. Are they the same thing? If not, what’s the main difference between the two? And when should you use which?

Before we can answer those questions, let’s go over the main differences between split testing and A/B testing.

The difference between split testing and A/B testing

Split testing and A/B testing are terms that are often used interchangeably, but they aren’t quite the same thing. Split testing is a method of testing where a control version of your website is compared to a completely different version to find out which one site visitors respond to best.

A/B testing is a testing method where a control version is compared against variations that each make a single, small change to that control, to determine which version performs best.

Here’s an example of these tests in action:

If you wanted to split test one of your website’s pages, you would send a portion of your traffic to the “control” page and the other portion to the “variation” page. These landing pages should have the same conversion goal (download an e-book, share your email, etc.). But there should be a difference in the design of the pages, like different headlines, different forms, or different photos. These pages should be hosted on different URLs.
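To make the mechanics concrete, here’s a minimal sketch of how a 50/50 split between two URLs might be implemented, assuming a Flask app. The URLs, route, and cookie name are placeholders; in practice, your testing tool handles this routing for you.

```python
import random
from flask import Flask, redirect, request, make_response

app = Flask(__name__)

# Placeholder URLs for illustration only.
CONTROL_URL = "https://example.com/landing"
VARIATION_URL = "https://example.com/landing-v2"

@app.route("/lp")
def split_entry():
    # Reuse a prior assignment so returning visitors see the same page.
    bucket = request.cookies.get("split_bucket")
    if bucket not in ("control", "variation"):
        bucket = random.choice(["control", "variation"])  # 50/50 split
    target = CONTROL_URL if bucket == "control" else VARIATION_URL
    resp = make_response(redirect(target, code=302))
    resp.set_cookie("split_bucket", bucket, max_age=30 * 24 * 3600)
    return resp
```

The cookie matters: without it, a returning visitor could be re-randomized into the other page, which muddies your results.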

The variation that earns the most conversions wins the test and should then be used as the control for an A/B test.

For an A/B test, test the winning page against several variations that each change a single element, like different text colors (blue vs. green vs. orange).
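If you ever roll your own assignment logic for a test like this, one common approach is deterministic bucketing: hash a stable visitor ID into one of the variants so each visitor always sees the same version. A minimal sketch, with placeholder variant names:

```python
import hashlib

VARIANTS = ["blue", "green", "orange"]  # single-element changes to test

def assign_variant(visitor_id: str) -> str:
    """Deterministically map a visitor to one variant.

    Hashing keeps the assignment stable across visits without
    storing any server-side state.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("visitor-12345"))  # always the same variant for this ID
```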

Now that you know the difference between split testing and A/B testing, when should you use what?

When to use split testing vs. A/B testing

A/B testing and split testing aren’t mutually exclusive. Instead, they’re complementary tests that work together to get the most out of your site’s pages.

It’s best to use split testing for big, sweeping changes, and then to use an A/B test to optimize the page further once that considerable change has been made.

For example, if you want to redesign an entire web page, keep your current web page as-is to serve as the control. Then, host the variant page at a different URL. Once the test has ended, use the results to choose the page that performed the best. This page can then be used as the control for an A/B test. From here, you can optimize the winning page even more by changing smaller, individual elements of the page.

These tests can help you uncover reliable data without needing to have a huge amount of traffic. To find out how large your test sample size should be, use a tool like Evan Miller’s A/B Test Sample Size Calculator.

Enter the current conversion rate for the page you’re testing and the minimum relative change in your conversion rate that you want to detect from the test. The calculator will then give you a sample size to use per variation.
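If you prefer to run the numbers yourself, here’s a rough sketch using Python’s statsmodels library, assuming a two-sided test at a 5% significance level and 80% power. The baseline rate and lift below are placeholders, and the result may differ slightly from Evan Miller’s calculator, which uses its own formula.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.20          # current conversion rate (20%)
relative_change = 0.10   # minimum relative lift you want to detect (10%)
target = baseline * (1 + relative_change)

# Convert the two conversion rates into a standardized effect size.
effect_size = proportion_effectsize(baseline, target)

n = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,          # 5% significance level
    power=0.80,          # 80% chance of detecting a real effect
    alternative="two-sided",
)
print(f"Sample size per variation: {int(round(n))}")
```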

To carry out your split tests and A/B tests, use Freshmarketer’s Conversion Rate Optimization add-on to make testing easy.

Freshmarketer’s split URL testing and A/B testing features

With Freshmarketer’s split testing, users can test entirely different pages and use the Simplified Regex targeting option to make setting up testing groups a breeze.

Query parameters stay intact: the request, along with its query string, is redirected from the control page to the variation page. That way, you can record valuable information about the source of the traffic.
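Under the hood, preserving a query string across a redirect looks something like the sketch below, with placeholder URLs. This is purely illustrative of the general technique, not Freshmarketer’s actual implementation, which handles this for you.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def redirect_with_params(request_url: str, variation_url: str) -> str:
    """Build the variation URL, carrying over the query string
    from the original request (e.g. utm_source, utm_campaign)."""
    incoming = urlsplit(request_url)
    target = urlsplit(variation_url)
    # Merge incoming params with any the variation URL already has.
    params = dict(parse_qsl(target.query))
    params.update(parse_qsl(incoming.query))
    return urlunsplit(
        (target.scheme, target.netloc, target.path,
         urlencode(params), target.fragment)
    )

print(redirect_with_params(
    "https://example.com/landing?utm_source=newsletter",
    "https://example.com/landing-v2",
))
# https://example.com/landing-v2?utm_source=newsletter
```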

This can help you identify and target different high-converting visitor segments, or make decisions about your marketing budget for various campaigns.

Integrated heat maps are included for every variation of a split URL test, so all of your conversion data can be visualized as hotspots on your pages.

You can enable these heat maps with just the click of a button.

Use our A/B testing to optimize variations in real-time. You can even integrate our tool with Google Analytics for more detailed insights.

Create variations of your site’s pages instantly with our WYSIWYG visual editor.

Freshmarketer will also collect the visitor behavioral attributes that you pre-select, so you’ll always get the right conversions from the audiences you want to target.

And the tracking doesn’t have to stop with clicks. You can also measure revenue with a revenue tracking goal to find the actual value of every click you generate.
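Once you have revenue totals per variation, the value of each click is just revenue divided by clicks. A tiny sketch with hypothetical numbers:

```python
# Hypothetical per-variation totals from a revenue tracking goal.
results = {
    "control":   {"clicks": 5000, "revenue": 12500.0},
    "variation": {"clicks": 5000, "revenue": 14750.0},
}

for name, data in results.items():
    print(f"{name}: ${data['revenue'] / data['clicks']:.2f} revenue per click")
```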

No matter which testing tool you choose, you should never quit a test early. Here’s why.

Never quit a test early

Even if it seems like one version is already pulling ahead, you shouldn’t ever stop a test early.

Never quit running a test until you reach the sample size that you predetermined before setting it up.
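In code form, that discipline is just a guard: don’t evaluate the results until every variation has reached the sample size you planned for. A minimal sketch:

```python
def ready_to_evaluate(visitors_per_variation: list[int], required_n: int) -> bool:
    """Only look at the results once every variation has reached
    the sample size you committed to up front."""
    return all(n >= required_n for n in visitors_per_variation)

# e.g. required_n comes from the sample size calculator above
print(ready_to_evaluate([4200, 4150], required_n=5000))  # False: keep running
```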

This principle has been demonstrated in previous research, where tests were run two, four, and eight times longer than initially planned to measure the effect of test length on confidence levels.

The study showed that the longer a test ran, the higher the confidence level in its results.

At the same time, you shouldn’t run a test longer than necessary, either. Doing so can result in sample pollution.

The longer a test runs, the higher the chance that external factors (like technical issues) will affect it and lead to inaccurate results.

For example, running a test for more than three to four weeks can cause sample pollution, since many people delete their cookies within that time frame and may then interact with a different variation than the one they originally saw.

After you run a test, you should always make changes based on your results.

Make changes based on your results

If you aren’t making changes to whatever you’re testing based on the results you find, you’re missing the point of testing in the first place.

If one variation that you test performs better than the others, that variation is the winner.

To act on the results, disable the losing variations and make the winning one the default.

If no variation performs better than the others, it means that the variables you chose to test didn’t have a measurable impact on customers or site visitors.

In that case, consider the test inconclusive, continue using the original version of the page, and move on to another test by deciding on another variable or group of variables to test.
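One common way to make the winner-vs-inconclusive call is a two-proportion z-test, shown here via Python’s statsmodels. The counts are hypothetical, and the 5% significance threshold is an assumption; if the p-value clears it, you have a winner, otherwise treat the test as inconclusive.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts once the test reaches its planned sample size.
conversions = [520, 480]   # control, variation
visitors = [5000, 5000]

z_stat, p_value = proportions_ztest(conversions, visitors)
if p_value < 0.05:
    rates = [c / n for c, n in zip(conversions, visitors)]
    winner = "control" if rates[0] > rates[1] else "variation"
    print(f"Winner: {winner} (p = {p_value:.3f})")
else:
    print(f"Inconclusive (p = {p_value:.3f}); keep the original and test something else")
```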

Apply the results from previous tests to future changes that you make.

For example, if you find that using numbers in the headlines on your site brings in more conversions, you might want to test numbers in your email headlines to see how they perform there.

You could also increase the total number of headlines on your site that include numbers to get the best performance possible.

Conclusion

Comparison isn’t always a bad thing.

The comparison involved in split testing and A/B testing can help you uncover which version of a piece of content prospects and customers respond to best.

With these tests, you can optimize just about anything, from your website’s design to its copy, headlines, and photos.

Split testing compares two versions that are entirely different from one another, and it should be used when making significant changes.

A/B tests are best used to make smaller changes to the design that “won” the split test. This change could be something as small as a font color or style.

Start using both split tests and A/B tests in a complementary manner to get the most out of your web pages.