A complete guide to creating split experiments and improving website conversions
Learn all about split URL testing, and how this technique can increase your website conversions in concrete ways. This comprehensive guide covers everything from setting up a test to handling its SEO implications.
A split URL test consists of taking two or more variants of a web page and dividing your website traffic between them. The goal of the test is to determine which variant performs best, as defined by the parameters of the test.
Variants are fully developed web pages, which are stored on the server, and are accessed via different URLs. The mechanism used to split the incoming traffic across variants is known as redirection, and for this reason, the technique is also often known as a redirection test. Often, an existing page is used as a control and the variants are used to gain insights into the various aspects of its performance.
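The redirection mechanism can be sketched in a few lines. The snippet below is a minimal illustration, not a production implementation: the URLs mirror the hypothetical `mywebsite.com` examples used later in this guide, and `assign_variant` is an assumed helper name. Hashing the visitor id, rather than picking a variant at random on every request, ensures the same visitor is always redirected to the same URL.

```python
import hashlib

# Hypothetical control and variant URLs, echoing the examples later in this guide
VARIANTS = [
    "http://www.mywebsite.com/features",      # control
    "http://www.mywebsite.com/features_vt1",  # variant 1
    "http://www.mywebsite.com/features_vt2",  # variant 2
]

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor so they always see the same page."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]
```

The server would then issue an HTTP redirect to whichever URL `assign_variant` returns, splitting incoming traffic evenly across the three pages.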
On the surface, a split URL test looks just like an A/B test, and is in fact quite often confused with it. The two types of testing are conceptually similar: both involve multiple variants of a web page to determine which one converts best or engages users more effectively. However, they are most effectively used together, rather than as alternatives to each other.
Let’s start by breaking down what an A/B test does. It takes two similar versions of a web page, which generally differ by a minor change at the page-element level. The variable is often a design or content change in a single place; for example, the colour of a CTA button or the title of the page. The test is then set up, indicating the different versions. When the test is executed, all versions are accessed via the same URL. A/B testing is a great optimization tool for individual web pages, and is often used to conduct quick tests.
Learn more about how to effectively use A/B testing as a part of your testing process.
On the other hand, the variants of a split URL test vary at a page level, rather than at an element level. As long as the overall goal of the page remains the same, the variants can be dramatically different. In fact, a common use case of this CRO technique is to test out redesigned websites, using the existing design as a control.
When to use what
As we said before, A/B testing and split URL testing are not mutually exclusive, but complementary processes: split URL testing is for big changes, and an A/B test is for optimizing the existing page.
We recommend using a split URL test to give your brand new redesign a dry run, page by page. Keep your existing web page intact as a control, and host the variant on a different URL. Once the test has reached its conclusion, use those insights to pick the one with the best performance as a control for your A/B tests. Then, optimize the redesign further by changing individual page elements. Through this process, you can save valuable resources by taking out the guesswork from your redesign at an early stage, and focus on improving only the version that has a proven track record.
Multivariate testing is fundamentally a subset of A/B testing, but it involves the modification of multiple page elements at a time, or indeed multiple properties of a single page element. If such a test were configured manually as a series of A/B tests, setting up the variants would be time-consuming, as the number of variants multiplies with every modification.
This is best illustrated with an example: consider the 4 versions of a mobile web page in the image. The banner is the modified page element, but it changes in two ways across the tests: its colour is either blue or red, and its placement also varies. In total, this yields 4 combinations for one web page, even though the element in question is the same. Multivariate testing automatically tests each version for the best performance, avoiding the need to create several individual A/B tests.
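The combinatorial growth is easy to see in code. This small sketch enumerates the banner example above; the two placement values are assumed for illustration, since the image itself is not reproduced here.

```python
from itertools import product

# The two modified properties of the banner element from the example.
# "top" and "bottom" are assumed placements, for illustration only.
colours = ["blue", "red"]
placements = ["top", "bottom"]

combinations = list(product(colours, placements))
print(len(combinations))  # 2 colours x 2 placements = 4 page versions
```

Add a third property with two values and the count doubles again to 8, which is why manual setup quickly becomes impractical.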
Multivariate tests differ from split URL tests in the same ways that A/B tests do. First, multivariate variants are run from the same URL, as opposed to the different URLs used in split URL tests. Secondly, although multivariate tests are larger in scope than A/B tests, the variables are again mostly at the page-element level. There may be many combinations of these element changes, but they do not constitute major differences.
Multivariate testing is most effective when used to optimize web pages with high traffic. Due to the number of combinations that can result from making multiple changes to page elements, the tests tend to take more time to reach statistical relevance.
As we mentioned before, Split URL tests have a larger scope than A/B tests. They are more flexible because the parameters you can change and test are more diverse. Keeping that in mind, you can look at an existing web page and analyze it beyond just its UI elements. But before we dive into the benefits of this approach, let’s start with how to set up a test:
Make a list of the problems you see with it. Ask questions like: What are my goals for this page? Why do I feel those goals aren’t being met right now? What are the aspects of this page that I feel could be better?
Put together all the information you have about the web page. Pull together statistics of user behaviour, check analytics, and ask for opinions. Correlate with user feedback through various support channels.
User feedback plays a huge role in designing a great website or product. Implementing the opinions of existing users increases their tendency to identify with your product, and therefore their inclination to use it.
Make an assumption based on your insights: if you change X, Y will happen as a result. This hypothesis becomes the framework of your test, and how you determine whether or not it was a success.
Structure your hypothesis in the form of if-then statements. For example: if I change the layout of the web page, then it will be easier to read and therefore the bounce rate will reduce.
Design a new web page using all the information at your disposal, finding solutions that better reach your goals. Since a split URL test can accommodate much bigger changes than an A/B test, you can consider changes like altering the workflow of a group of pages. For example: can the checkout process be easier? Should the cart open on the side of the page, rather than take the user to an entirely new page? And so on.
Identify the KPIs you want to track for the test. Typical KPIs for split URL tests include conversion rate, downloads, and bounce rate, among others. Match these to the relevant parameters for the test.
It is important to let the test run for an adequate amount of time. There are many one-off occurrences that can skew the results of a test, and time is the only way to counteract any spikes. This is known as allowing the test to reach statistical relevance.
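One common way to check whether a test has run long enough is a two-proportion z-test on the conversion rates of the control and the variant. The sketch below uses only the standard library; the traffic numbers are illustrative, not from any real test, and most testing tools perform this check for you.

```python
from math import sqrt, erf

def two_proportion_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Compare two conversion rates; returns the z-score and two-sided p-value."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers only: 5% vs 6.5% conversion over 4,000 visitors each
z, p = two_proportion_z(200, 4000, 260, 4000)
```

A p-value below a chosen threshold (commonly 0.05) suggests the difference is unlikely to be a one-off spike; with small samples the p-value stays high, which is the statistical reason for letting the test run longer.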
A/B tests and multivariate tests are great tools, because they enable marketers to test out different variations for UI elements easily. Testing tools used for A/B tests usually have an easy to use interface, and the variants are created with just a few clicks.
However, the ease of setting up these tests is also their limitation. It is not feasible to make radical changes to a web page using just an interface. More often than not, a new web page involves the combined effort of the design, UX, and development teams, making it altogether a larger exercise. Split URL testing gives you the freedom to make these changes without limitations, while still leveraging the benefits of testing variations. That makes split URL testing not just a great tool, but a powerful one.
What can (and should) you do with a split URL test?
Try out a radical new design, using the existing one as a control: Compare reports between the variants to assess which aspects work better in each.
Run tests with non-UI changes: Switching over to a different database, or trying to optimize page load times? These are examples of development or backend changes that impact web pages. Split URL testing enables you to ensure that the invisible changes don’t adversely impact user experience.
Change up the workflows: Workflows are user journeys across a group of web pages, intended to achieve a certain goal. Split testing offers the functionality of testing groups of web pages together as a unit, so restructuring user workflows and paths can be measured for effectiveness. Workflows have a dramatic effect on conversions, and testing new paths before implementation is a great way to determine if there are any sticking points that were overlooked. In fact, there are greater insights to be gained by testing webpages that belong to one workflow together, as each webpage in the group contributes to the user experience of the whole group.
Some of the benefits of split URL testing are common with A/B and multivariate testing:
Metrics, or KPIs (key performance indicators), enable you to create a framework for the results. By defining these at the start of the test, you get more insightful reports to base your decisions on.
Conversion rate: How many visitors were turned into customers?
Engagement: Are users interested in the content?
Metrics should ideally be derived from your business goals, and then translated into your website goals.
For instance, if your website is a product marketplace, potentially your top-level business goal would be to sell products. This may translate into a series of website engagement goals: show popular products to a user; encourage the user to review products, and thereby spend more time on the website; and so on.
Then, you would need to analyze the different elements of your website funnel to see how various user actions translate into them attaining your desired goal.
To continue the example from above, sending people email reminders a week or so after delivery of their purchase will bring them back to the website to leave a review.
Let’s say that the problem statement in this instance is that although people click on the email and arrive at the product review page, a large percentage leave the review unsubmitted. What could be holding them up? The review page becomes a good candidate for testing, and the metrics you would want to measure are the bounce rate and the clicks.
Since the variants of a split URL test are fully developed web pages, stored on the server, and accessed via unique URLs, there is a lot of flexibility when setting up a test. Here are some of the different ways you can configure your tests to bring up variants:
Plug in the full URLs for the control and the variants in the test. This approach works well if the test pages are standalone. For instance, if you have a Features page to test, and a couple of variations on the design, this would most likely take the form of:
Variant 1: http://www.mywebsite.com/features_vt1
Variant 2: http://www.mywebsite.com/features_vt2
Test groups of related pages together by providing a base URL and indicating that all subpages are also part of the test. This is a particularly useful function of a testing tool, whereby entire workflows can be tested at once. The insights from these tests are of greater value, as the entire workflow is taken into consideration. A good use case for this type of split URL test is the sign-up process: at the very least, there is a form and an acknowledgement, which then potentially moves into a marketing page. Assuming that all the web pages to be tested as part of the workflow are subpages of the sign-up section, the redirect maps each subpage of the control onto the corresponding subpage of the variant.
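This path-preserving mapping can be sketched as follows. The base paths are hypothetical, and `map_subpage` is an assumed helper name; a real testing tool would do this rewriting for you behind the scenes.

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical base paths: the control sign-up flow and its redesigned variant
CONTROL_BASE = "/signup"
VARIANT_BASE = "/signup_vt1"

def map_subpage(request_url: str) -> str:
    """Send any subpage under the control base to the matching subpage
    under the variant base, keeping the rest of the path intact."""
    parts = urlsplit(request_url)
    if parts.path == CONTROL_BASE or parts.path.startswith(CONTROL_BASE + "/"):
        new_path = VARIANT_BASE + parts.path[len(CONTROL_BASE):]
        return urlunsplit((parts.scheme, parts.netloc, new_path,
                           parts.query, parts.fragment))
    return request_url  # pages outside the workflow are left untouched
```

So `/signup/confirm` would be redirected to `/signup_vt1/confirm`, keeping the whole workflow inside one variant.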
This option is a lesser-known version of split URL testing. It is a code-centric approach and necessarily requires the involvement of a developer. Event-based redirection refers to a test that executes a portion of code depending on whether or not a user has been redirected to a variant. This makes for more complex test cases.
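A minimal sketch of the idea is below. The function and event names are hypothetical, and `track_event` stands in for whatever analytics call your stack provides; the point is simply that variant-specific code runs only for redirected users.

```python
events = []

def track_event(name: str, **props):
    """Stand-in for a real analytics call; records events for illustration."""
    events.append((name, props))

def on_page_load(assigned_variant: str, was_redirected: bool):
    """Run a portion of code only when the user was redirected to a variant."""
    if was_redirected:
        # e.g. load an experimental script, fire a variant-specific event
        track_event("variant_view", variant=assigned_variant)

on_page_load("features_vt1", was_redirected=True)
on_page_load("control", was_redirected=False)
```

Because the behaviour now branches on redirection state, these tests need developer review to ensure both branches stay consistent with the hypothesis being measured.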
When the experiment is over, you are presented with the report. This is the point at which you ask questions of your results: Do they confirm your hypotheses? What actions are you going to take as a result of the experiment?
This is the point where your metrics become useful. Reports are great visual aids to understand and process data, and thus help chart a course forward. If your test was successful, and showed that a particular variant did significantly better than another, the way ahead becomes clear.
There is the possibility that tests are inconclusive, and perhaps all variants performed at much the same level. In this case, metrics need to be revisited. Additionally, you could consider segmenting the audience, and discovering trends in usage that way. We cover the benefits of user segmentation in the next section.
Ultimately, arriving at the perfect state for your product or website is an iterative process of improvement. Continued experiments will yield actionable insights that are implemented in development, product or design changes, in addition to raising more questions. The tests will get more refined as experience is gained, and become a valuable tool in your arsenal.
You can and should use segmentation to make your tests smarter and gain better insights. It enables you to understand and optimize for different groups of your users, giving them the best possible experience of your website and increasing your goals specific to target markets. Segmentation is also sometimes effectively combined with personalisation to heighten the sense of a tailored user experience.
For instance, an untargeted test may appear to be doing poorly, while in one particular segment it might actually have excellent results. Aggregating all the data together averages out that spike. If you haven’t segmented your users, you will have missed that insight.
Segment your users based on dimensions provided on your dashboard. Once you have data on which ones are your most profitable segments, you can target further split tests towards those users, and focus on optimizing their experience.
Avoid making the segments too small or too specific. Segments should be closely related to your test hypothesis, and be large enough to generate sufficient data for results.
URL parameters are an integral component of a URL request, and are used to carry information about the user. For example, UTM parameters indicate the source of the visitor.
Within the context of a Split URL test, the original control and the variants should all receive a copy of the dynamic query parameters, so as to retain the integrity of the data, and to ensure that the experience on all versions is consistent.
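Forwarding the query string during the redirect can be sketched like this. The helper name is hypothetical, and for simplicity the sketch assumes the variant URL carries no query string of its own.

```python
from urllib.parse import urlsplit, urlunsplit

def forward_query(request_url: str, variant_url: str) -> str:
    """Copy the incoming query string (e.g. UTM tags) onto the variant URL
    so tracking data survives the redirect."""
    incoming = urlsplit(request_url)
    target = urlsplit(variant_url)
    return urlunsplit((target.scheme, target.netloc, target.path,
                       incoming.query, target.fragment))
```

A request to `/features?utm_source=newsletter` that is bucketed into a variant would thus arrive at `/features_vt1?utm_source=newsletter`, so the variant's analytics record the same source attribution as the control's.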
There are quite a few factors to think about when choosing the best testing tool for your needs. Consider factors that will save you time in the long run, and prove to be an asset for your productivity:
There will need to be some engineering involvement for the initial set up, but each tool varies in its complexity. Look for a service that has solid technical documentation and rapid integration. Get an engineer to weigh in with their opinions too.
Choosing an overly complex tool may give the impression that it has more features. This is not always the case. You want a testing tool that helps you get started, and thus get results, quickly. Intuitive interfaces and well-designed dashboards make the testing process more effective.
A good testing tool will indicate when the test can be stopped. It does this by ascertaining that the test has reached statistical significance. This is an important feature that helps with mitigating the SEO impact of split URL testing.
Can you perform user segmentation easily? Does your split URL test report include a heatmap? These are qualitative insights which make your reports more meaningful and yield actionable results. Read more about how heatmaps visually present insights about your users’ behaviour.
From an SEO perspective, there is the factor of duplicate content to consider, when conducting a split URL test.
Search engines define duplicate content as exact or extremely similar chunks of content across multiple web pages, with only minor differences in images, design, or text. Duplicate content does present a challenge for search engine user experience, so search engines have means of accounting for cases where it is unavoidable.
Let’s look at how a search engine views duplicate content. A search engine indexes pages to maintain relevancy for search results, and as part of its user experience, it will avoid showing pages with similar content. Thus their algorithms will consolidate all the pages with similar content, display the original or best one as per their discovery, and filter out the other pages.
Search engines also have to contend with repeated content in the case of deceptive SEO practices. Google has been known to delist websites from its search results when its algorithms find duplicate content being used to manipulate rankings. However, a penalty is rarely imposed, contrary to common fear: Google has explicitly stated that it recognises cases where duplicate content is unavoidable, such as mobile and desktop versions of the same page. In fact, search engines do a fairly good job of handling duplicates on their own.
However, this is best not left to chance: there are reasons, apart from the rarely imposed penalty, why you wouldn’t want your variant to show up in search results instead of your control. Let’s look at the recommended ways to handle this.
These pointers can help you rank better in search engines and amp up your SEO efforts.
Indicating a canonical web page tells search engines to crawl it more regularly, and in case there are other pages with similar content, to serve this one up in the search results instead of the others. It is important to be aware though that marking a web page as canonical is along the lines of a stated preference. There are several ways search engines decide on which web page is displayed, and the tag merely indicates the preference of the website owner.
There are two types of redirect within a split URL test: a 301 permanent redirect, or a 302 temporary one. Using a 302 redirect instead of a 301 will signal to search engines that the redirect is temporary and therefore not to replace the original URL with the new one in the index.
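The difference is just the status code on the redirect response. Here is a minimal, hypothetical helper to make the distinction concrete; in practice your web server or testing tool sets this for you.

```python
def build_redirect(location: str, permanent: bool = False):
    """Return (status code, headers) for an HTTP redirect response.
    Split URL tests should use the default 302: a temporary move,
    so search engines keep the original URL in their index."""
    status = 301 if permanent else 302
    return status, {"Location": location}

status, headers = build_redirect("http://www.mywebsite.com/features_vt1")
```

With `status` left at 302, crawlers treat the variant as a temporary destination; a 301 would instead tell them the control page has moved for good.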
Google wants to see the pages that are shown to actual users, because any other behaviour is considered deceptive. It is common advice to use a noindex tag on test web pages, so that search engines don’t index or rank those pages at all. However, this is perceived as “cloaking” to the search engine, and can potentially affect the website negatively. Another version of cloaking is to serve content based on user-agent. Again, the same pitfalls apply.
Tests require adequate time to complete, and to generate conclusive and valid results. There are several ways to determine whether the test has concluded or not, and it is based on conversion rates and traffic to the website, among other factors. Once the test has reached statistical significance, the changes need to be implemented and all test elements removed.
In the event you are split testing a web page that doesn’t get organic search engine traffic anyway, like an acknowledgement for a form submission, then there is a case to be made for ignoring the SEO angle altogether. These pages will probably never show up on search results, and therefore it doesn’t matter if they are indexed correctly or not.
The idea here is to avoid using resources to SEO optimize pages that won’t affect SEO in any case.
You can test anything on a web page. Literally. However, that doesn’t mean that you should. If you have a good team around you, chances are you will have a solid bank of ideas of how to make your website perform better. So sort out the ideas according to business goals, feasibility, and time to test. You want good results, but you want them fast.
Often, pages cannot be looked at in isolation: as with a funnel, it takes several pages working in harmony to reach a desired user goal. The same principles apply when identifying which pages are best optimized.
Testing consumes resources, so you want to maximize the output you get from any test you run. Let’s look at some of the things you can consider to achieve optimum efficiency.
Optimization only works if the objective is clear, definable, and measurable. It is best to consider the big objective, the organizational goal, when designing tests, and not to get absorbed by individual page goals.
Insights from previous tests can guide your future tests considerably, allowing you to test smarter each time. Old tests will indicate what works and what doesn’t, so you don’t have to look at those elements again. Store results and what actions were taken based on those results.
Come up with widely differing variants to cover more ground on an individual test. Once your general direction is established, you can then optimize the page further with smaller tests.
It is tempting to make changes to the control before the test completes. However, this can alter the results of the test. So it is best to wait until after the results are in, and then run a new test.
One-time events, like a news article, can sometimes drive up traffic, and change results. It is best to disregard the results from this period of transient visitors, and consider only those on either side of it.
Running tests takes time, especially if you want valid insights. Enough people need to participate in the experiment for the test to generate robust and unequivocal results. Letting the test run its course will smooth out surge events and present a more even report of user behaviour.
Split URL testing is a great addition to your conversion rate optimization deck of tools. So now that we’ve covered all the aspects of split URL testing, all that’s left to do is implement it.