Formulating an A/B testing hypothesis that impacts your conversions

Making waves with a website is harder than ever. Competition is fiercer than before, prices are going up, and with data breach after data breach making the headlines, users are less trusting. It’s a daunting set of challenges to face, and the knowledge that every element you change on your website can give your competition the edge or cripple your conversions doesn’t make it any easier.

But take heart. While it’s true that the smallest changes to your website can have a significant impact, that impact can just as easily be positive. A well-structured and researched A/B test can deliver results that help you land more customers, improve their experiences, and leave your competition in the dust.

At least, as long as you’re doing it the right way.

How to A/B test the right way

A/B tests need a few elements to be successful: traffic, variables, variants, research, and a hypothesis.

You could run a test without a hypothesis. It’s possible to simply pick a variable, create a variant, and see what performs better. It’s similar to creating a new dish in the kitchen.

But while that has a certain romance to it, the results are also similar: if you’re lucky, you don’t waste any ingredients and you discover something tasty.

The more likely scenario is that you end up losing money and time on the dish because it takes twice as long to perfect it without a recipe to guide you.

And whereas a learning experience in the kitchen can be justified as the journey outweighing the destination, website conversion rate optimization is always about the destination. The destination is conversion.

Here’s the recipe for brewing your hypothesis so you always know where you’re going and how to get there during your A/B test.

5 Steps to creating an A/B testing hypothesis

If your last experience with a hypothesis was your eighth-grade science class, you’re not alone. That’s why we’ve created a simple, step-by-step guide to take you from hypothesis inspiration to synthesis.

1. Look at past data

In an ideal world, all A/B tests would be born from your own data and the results of prior tests, analytics, and heat maps.

But if you’re new to testing or don’t have substantial traffic patterns yet, that isn’t always an option. (As a rule of thumb, experts suggest at least 5,000 weekly visitors to split between your variants; with less traffic, your test could take months to yield statistically valid results.)
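The math behind that rule of thumb can be sketched with the standard two-proportion sample-size formula. The baseline conversion rate and the lift you hope to detect below are illustrative numbers, not figures from this article:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a relative
    lift of `mde` over a `baseline` conversion rate (two-sided test)."""
    p1 = baseline
    p2 = baseline * (1 + mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 at alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 at power=0.8
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p2 - p1) ** 2

# Illustrative: 3% baseline conversion, hoping to detect a 20% relative lift
n = sample_size_per_variant(0.03, 0.20)
weekly_visitors = 5000
weeks = 2 * n / weekly_visitors  # traffic is split evenly across two variants
```

With these example inputs, each variant needs roughly 14,000 visitors, so at 5,000 weekly visitors the test runs for over a month — which is why lower-traffic sites can end up testing for months.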

Still, you shouldn’t create a hypothesis from thin air. Even if you have a strong idea already, you need background research to support it before you risk your current conversion rate.

After all, best practices dictate splitting live traffic evenly between test variants, so you want to make sure any change you implement won’t significantly underperform the control.

So if you don’t have your own resources to pull from, look through the results of others — especially websites that have a similar niche or demographics as yours. Case studies are a great place to start.

Alternatively, if you prefer a more direct approach to data gathering, you can also do some reconnaissance on social media by tracking relevant hashtags, but this is a time-intensive option and isn’t as likely to yield a strong research foundation as more empirical studies.

2. Decide on a variable

After you’ve sourced your background research, you should have a clearer idea about the variables to use in your A/B test.

The key is to decide on a single one. While you can create multiple variants of the same variable, overloading your A/B test with variables makes it difficult to attribute the results.

If you’re testing multiple variables and variants, it’s even worse. Sure, you might get better results, but there’s no way to determine which specific elements were responsible for them.

That’s what multivariate testing is for, and it has a much higher traffic requirement for statistical validity. It’s also much more suited for small, incremental changes than major renovations.

Beyond the limitations mentioned above, how will you know if the variant that performs best reached its maximum potential when you have multiple variables in an A/B test? What if it could perform better without the second variable?

To find out, you’ll either have to run additional tests or guess. Guessing is never a good business strategy (unless you’re in the psychic business), so keep your variables, and the hypotheses that go with them, distinct.

3. Align with trackable consequences

Research and variable selection are only part of the process. After you’ve figured out what to measure, you need to determine how to measure it.

Ask yourself what results will indicate a performance change, and scrutinize them carefully. Create a list of potential key performance indicators (KPIs) that you can use to align with your goals (more on that in a moment).

And be careful to avoid vanity metrics at this stage. Stating that “changing the widget placement will increase the number of social shares” may be a trackable result, but it isn’t one that necessarily correlates with your bottom line.

At least, not if you’re trying to do something beyond expanding brand awareness. Getting your name out there may help you land a few more leads, but it’s not going to have half the impact of targeted keyword research or conversion optimization.

This is the gist of it:

Whatever metric (or metrics) you choose to track, it needs to have consequences that enable you to take some form of quantifiable action.

Once you’ve got that down, proceed to the next step.

4. Pair results with goals

Everyone in business has the same goal, even if it’s not their only purpose: to generate more revenue.

Unlike the results step, your goals can be slightly more open-ended, but they should still point to a concrete impact that changing the variable will have on your business.

So rather than setting your goal as increasing profits, give it a more quantifiable condition, such as “increase profits by 3% over a six-month period.” Consult your background research to keep your goals realistic.

Did another company have similar results when they changed the same variable and tracked the same metrics? If so, you’re good to go. If not, it’s okay to give it an estimated number that’s significant to your business and creates a clear cutoff point for testing.

Just err on the side of caution if you go this route.

Under-projecting your revenue and then having profits over-deliver is a far safer bet than over-projecting and under-delivering.

Either way, try to align your A/B testing goals with both your business and website goals.

If it’s supporting both, there’s a better chance that it can improve both, as well.

It’s also helpful at this stage to make sure your testing platform can track your goals and KPIs. Freshmarketer can track multiple pain points and provides easy visualizations for interpreting the results. Give it a try today.

Now, there’s just one last step.

5. Generate your prediction

You can rejoice. You’re finally ready to formalize your hypothesis and start putting it to the test. Formulas for this vary, but an easy way to frame hypotheses is to put them into definitive statements.

For example, something like: “Changing the sign-up sequence from three screens to two screens will increase completion, thereby putting more leads into the funnel.

This is expected to lead to a 7% increase in conversions over the next calendar year because another company with similar demographics and traffic reported similar results.”

Alternatively, you can hedge your bets more while still generating a strong hypothesis by changing it into the “if, then, and because” formula.

“If we reduce the sign-up sequence to two screens instead of three, then more users will finish signing up, and conversions will improve by 7% over the next year. This is anticipated because a competitor with comparable traffic had similar outcomes.”

However you choose to engineer your hypothesis, as long as you have background research, a solid variable, actionable consequences, and have aligned the goals of the test with your business needs, it’s sure to be a strong one.
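Once the test has run, you can check whether the observed lift actually supports your hypothesis with a standard two-proportion z-test. A minimal sketch, with made-up counts for illustration:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.
    Returns the p-value; small values suggest a real difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative: control converted 150 of 5,000 visitors,
# the variant converted 190 of 5,000
p_value = two_proportion_z_test(150, 5000, 190, 5000)
```

A p-value below your chosen threshold (commonly 0.05) suggests the lift is unlikely to be noise; above it, you should keep the test running or treat the hypothesis as unconfirmed.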

Bottom line: start synthesizing your hypothesis and testing out your ideas. Even if your hypothesis doesn’t pan out, you’re sure to learn something for the next test to make an even more actionable hypothesis next time.


Businesses need every last advantage they can scrape up over their competitors. Getting those advantages is more daunting than ever before, but it doesn’t have to be.

If you apply a little scientific method and run A/B tests, you can lift yourself out of the sea of sameness and identify the best possible strategies to convert your audience into paying customers.

The secret to doing that better than everyone else? Not skipping the basics and creating an A/B testing hypothesis that has actionable metrics and results.

Start by looking at prior data, then narrow in on a variable, align your hypothesis with trackable consequences, and set your goals.

Finally, generate your hypothesis and put it to the test. Your hypothesis might not prove true, but if you base it on these steps, you’ll still learn something, and your next testing session will be that much stronger for it.