Data has reshaped the way businesses market their solutions to buyers. Data from every touchpoint helps you measure reach, reception, and conversion. And whenever you run a new experiment, statistics determine whether you can trust what you see. Here's how.
Whether it's about optimizing your primary headline or changing the home page content, A/B testing gives you the data to make informed decisions.
Businesses, big or small, have long turned to such tests to make informed decisions. That's exactly what search engine giant Google did. Google earns most of its revenue through Google Ads, which appear on the results page when you run a search. When users click an ad, they go straight to the web page or landing page linked to it. That's how Google Ads helps businesses get more users to visit their pages.
This brings us to how the whole model works: Google makes revenue when more people click on those ads.
The challenge was to choose the shade of blue for Google Ads' hyperlinks that would drive the most clicks. So the team ran an experiment with not two or three but 41 shades of blue. The result? They not only zeroed in on the ideal shade for their hyperlinks, the decision also paid off big time, to the tune of $200 million.
Inspired by Google's test, you run one of your own and have a winner at hand. You pick your shade of blue, apply it across all your web pages, and wait for the $200 million revenue bump. But instead, you realize that visitors are not clicking your links.
It is quite possible that when you roll a change out to a much larger audience, the real-world response contradicts what you saw in your tests. So how can you determine whether the change you measure through your A/B test is just the product of randomness? The answer is statistical significance.
Businesses do a lot of things to get more people to try their solutions. They spend time crafting landing pages for their online ad campaigns, and successful campaigns bring plenty of visitors. But what if the website has all the elements in place except for one blocker that prevents visitors from signing up? That's where A/B testing makes an impact: it tells you whether the changes you make actually increase conversions.
To make sense of your results, you need to understand the two main components of every A/B test: statistical significance and sample size.
Statistical significance tells you how confident you can be in your A/B test. A statistically significant result makes it unlikely that the outcome of the test was driven by random error or chance.
Let's imagine you run an A/B test where the control variant is your existing home page. The test variant records a better conversion rate, and the result holds a statistical significance of 95%. This means there is only a 5% chance that the difference you observed is due to random chance rather than a real improvement.
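To see what that 5% really means, here is a minimal sketch in Python that simulates many A/A tests, where both variants are identical pages, and counts how often chance alone produces a "significant" result. The traffic and conversion numbers are assumed for illustration.

```python
# Sketch: simulate A/A tests (two identical pages) to see how often pure
# chance produces a "significant" result at the 95% level.
# Hypothetical numbers; requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_rate = 0.20       # both variants convert at 20%
visitors = 10_000      # visitors per variant
runs = 1_000
false_positives = 0

for _ in range(runs):
    a = rng.binomial(visitors, true_rate)
    b = rng.binomial(visitors, true_rate)
    # Two-proportion z-test: is the observed difference "significant"?
    p_pool = (a + b) / (2 * visitors)
    se = np.sqrt(p_pool * (1 - p_pool) * (2 / visitors))
    z = (b / visitors - a / visitors) / se
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    if p_value < 0.05:
        false_positives += 1

print(f"'Significant' results with no real difference: {false_positives / runs:.1%}")
# Expect roughly 5% -- exactly the error rate a 95% significance level accepts.
```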
Behind every A/B test, you invest time and resources to create the variants. To make better decisions, you need conclusive results backed by real data. With a statistically significant result, you can expect the page to perform for your full audience roughly as it did for your sample.
For example, let's say you want a 50% boost in traffic to your website during a Black Friday sale. You cannot wait for Black Friday itself to test the variants, only to optimize the page for the following year's sale. Statistical significance lets you predict how your webpage will perform by exposing it to just a fraction of your target audience ahead of time.
Statistical significance also helps you gauge how many people you need for your sample to represent your target audience. Assume the winning variant of your Black Friday page registered a 70% boost in conversions during testing, but after launch the results contradict what the A/B test suggested. A statistically significant result guards against exactly this scenario: it gives you confidence that your winning variant will deliver the conversions your test predicted.
The most memorable sampling error occurred during the 1936 US presidential election, when Alfred Landon ran against the incumbent president, Franklin D. Roosevelt. A public opinion poll by Literary Digest predicted a win for Landon by a 3-to-2 margin. Instead, Roosevelt won the election by a landslide.
The faulty prediction stemmed from the sample included in the poll. Literary Digest surveyed people who owned telephones or automobiles, or belonged to elite clubs. The opinions of the wealthy dominated the outcome, while the poll ignored the working class, who represented a significant share of voters. George Gallup correctly predicted Roosevelt's victory in the same election by surveying a smaller but more representative sample of about 50,000 people.
For a statistically significant result, you need to test your variants on a sample with less variation. When the visitors in your test represent the total population, their engagement and behavior will be close to the average you can expect at launch. A larger group with higher variation, on the other hand, simply compounds the randomness.
Now that you have two variants to test, how do you decide the number of visitors you need and the duration of your A/B test? How can you determine if the result is statistically significant or not? We’ve got a free tool to help you out. But before you use it, here is a refresher on everything you need to know to use the tool effectively.
This sample size calculator helps you determine the number of visitors you need for your A/B test to return conclusive results. To determine the sample size, you will need three inputs: the baseline conversion rate, the minimum detectable effect, and the statistical significance level.
This is the current conversion rate of the page you're testing. The data required to calculate it can be obtained through an analytics tool like Google Analytics. The baseline conversion rate is the number of successful actions (goal completions) on your page divided by the number of visitors the page received.
The minimum detectable effect (MDE) is the smallest change you want to be able to detect. Detecting a big change needs less traffic; detecting a small change needs more.
If your current baseline conversion rate is 20% and you want to increase it to 22%, the MDE here is 10%. If you set the MDE to 10%, your test will detect changes in your conversion rate outside the range of 18% to 22%.
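To make that arithmetic concrete, here is a quick sketch of the same calculation in Python, using the numbers from the example above:

```python
# Sketch: how a relative MDE translates into an absolute detection band.
# Values taken from the example above.
baseline = 0.20          # 20% baseline conversion rate
mde_relative = 0.10      # 10% minimum detectable effect (relative)

delta = baseline * mde_relative          # 0.02, i.e. 2 percentage points
lower, upper = baseline - delta, baseline + delta

print(f"Detectable outside the band {lower:.0%} - {upper:.0%}")
# -> Detectable outside the band 18% - 22%
```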
The experiment’s statistical significance level indicates the risk tolerance and confidence level. For instance, an A/B test with a significance level of 95% implies that if you have a winner, there is a 95% certainty that the result is not due to a random error.
The default value set in the A/B testing calculator is 95%. The ideal range is between 80% and 99%.
Once you input the baseline conversion rate, minimum detectable effect, and the statistical significance, the tool will specify the number of visitors you need for a successful A/B test.
For a baseline conversion rate of 20%, an MDE of 10%, and a statistical significance of 95%, you will need 20,800 visitors to detect a change. After 20,800 visitors, if your conversion rate is above 22%, then you have a winner.
If your variant gives you a conversion rate below 22%, you can stop the test and try different variants.
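If you want to sanity-check the calculator, the sketch below implements the standard two-proportion sample-size formula. It assumes a statistical power of 80%, and since the calculator's internal defaults aren't documented here, its output can differ from the 20,800 figure above.

```python
# Sketch: standard two-proportion sample-size estimate. The calculator may
# use different internal assumptions, so treat this as an approximation.
import math
from scipy import stats

def sample_size_per_variant(baseline, mde_relative, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative lift at the
    given significance level (alpha) and statistical power."""
    p1 = baseline
    p2 = baseline * (1 + mde_relative)
    z_alpha = stats.norm.ppf(1 - alpha / 2)   # 1.96 for 95% significance
    z_beta = stats.norm.ppf(power)            # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

n = sample_size_per_variant(0.20, 0.10)
print(f"~{n:,} visitors per variant, ~{2 * n:,} in total")
```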
Now that you have determined the sample size for your A/B test, you can focus on how long you should run it.
In addition to the baseline conversion rate, the minimum detectable effect, and the statistical significance, you will also need the average number of daily visitors to the page you're testing.
After you input all the information, the A/B test calculator will give you the optimal duration required to run your test.
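Under the hood, the duration math is straightforward: divide the required sample size by your daily traffic. A minimal sketch, with an assumed daily visitor count:

```python
# Sketch: test duration = required sample size / daily traffic.
# The daily visitor figure here is hypothetical.
import math

total_sample = 20_800      # visitors needed (from the example above)
daily_visitors = 1_500     # average daily visitors to the page (assumed)

days = math.ceil(total_sample / daily_visitors)
print(f"Run the test for about {days} days")   # -> about 14 days
```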
When it comes to how long your A/B test should run, there is a sweet spot.
Run it too short, and you will not have enough data to back your winning variant. Run it too long, and you risk changing conditions and repeat visitors skewing your results. So what should be your ideal test duration?
| Minimum: 1-2 weeks | Maximum: 4-6 weeks |
|---|---|
| Running the test for less than a week gives you a biased result affected by day-to-day variation. | Running the test for too long means the conditions change along the way. |
| Helps you eliminate false-positive results. | Helps you eliminate repeat visitors who can influence the result. |
| For example, people shopping on the eve of a holiday cannot be used to model behavior on other days. | Most users clear their cookies once a month, so your sample ends up with more repeat users than unique visitors. |
Now that you have the sample size and duration, it's time to run your A/B test. After you get a winner, it is essential to determine whether the difference in conversions between the variants is statistically significant.
This is where the significance calculator helps.
First, enter the conversion rate and the number of visitors for the variants in your A/B test. Then, the calculator will give you the variant that fared better and how confident you can be with the results.
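One common method behind such calculators is the two-proportion z-test. The sketch below illustrates that approach with hypothetical inputs; it isn't necessarily the exact formula Freshmarketer uses.

```python
# Sketch: comparing two variants with a two-proportion z-test. This is one
# common approach; the calculator's exact method isn't documented here.
# Input numbers are hypothetical.
from scipy import stats

def significance(conversions_a, visitors_a, conversions_b, visitors_b):
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the "no real difference" assumption
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    return p_a, p_b, 1 - p_value   # loosely, "confidence" in the difference

p_a, p_b, confidence = significance(2_000, 10_000, 2_150, 10_000)
print(f"Control {p_a:.1%} vs variant {p_b:.1%}: {confidence:.1%} confidence")
# -> Control 20.0% vs variant 21.5%: 99.1% confidence
```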
If you’re new to A/B testing and your business doesn’t attract a lot of visitors, there are specific areas that you can optimize to create the most impact. How do you narrow them down?
Here are a few ways to tap into your target audience and optimize your webpage.
It is vital to keep a pulse on your customers' experience, straight from the source. When customers have an opportunity to share feedback, you can find out their pain points, and your team can come up with solutions that address them.
During the pandemic, Western Union went digital to boost contactless money transfers. How? By listening to what their customers were looking for, which helped them focus on two things: digital convenience and staying safe through contactless funds transfer.
Freshmarketer lets you gather your customers’ opinions through polls and feedback. Create a poll, enable it to pop up on your website, and make it easy for your visitors to submit their responses.
There could be several reasons why your visitors drop off on your homepage instead of hitting the sign-up button. During times of shifting customer behavior, surveys help you identify what your prospects are looking for.
Today, online surveys are much simpler to conduct and deliver results within a shorter span. When you optimize your website or product based on the findings, you draw more visitors to your business, which can spike engagement and result in more conversions.
Customer feedback is a good place to start when you want to enhance customer experience. But what if you do not have access to enough feedback? Heatmaps are a great way to find out the segments of a page that get a lot of engagement and the ones that don't.
Heatmaps give you a visual representation of all the clicks, scrolls, and mouse movements on a page. These real-time insights help you optimize the page by improving areas that receive poor engagement.
For churned users, a customer-exit feedback survey is a great opportunity to gather opinions. It helps you identify key areas that could prevent users from leaving your business.
Exit surveys help you mine insights from a user's complete journey, right from why they signed up. These historical findings help you uncover opportunities like never before and give you the chance to re-imagine your website and offer visitors a transformative experience.
Your website is your best digital marketer and salesperson, all packed into one. When you optimize your site, the best version must replace the current one. Statistically significant results help you make data-driven decisions, and the right data ensures that your optimized page performs better and brings in better leads than before.
With Freshmarketer’s A/B testing calculator, find out the sample size and duration needed for your A/B test. Analyze how your variants performed and determine if the results are statistically significant. Get accurate results without any manual calculation or extensive monitoring involved.