
26 June 2020
In the post-pandemic world, A/B testing can give you precious insights into your customers’ changing preferences, says Helena Heno, Data and Analytics Consultant at Artefact Benelux.

As lockdown ends and “the new normal” begins, there is much speculation about consumer trends. What will they look like in the short- to medium-term? Will they stabilise? Have they changed for good? The truth is no one really knows because the situation is unprecedented.

But one thing is certain: as the world reopens, digital optimisation will be vital for companies to gain a competitive edge and understand evolving consumer needs. And the best way to optimise is through A/B, or split, testing. By showing different versions of a page or feature to different groups of visitors, it reveals which changes your customers respond to best.

A few examples of timely enhancements or offers you might test include transparency about longer delivery times, extending your return policy, no-contact delivery, product disinfection, exceptional discounts, or free trials. 

Why A/B testing is relevant to almost every business

If your business has a website, you should test any changes to its layout or interface. Why? Because your website is the face of your business, and if your customers don’t respond well to the changes, you could lose them.

It’s easy to set up a test with tools like Google Optimize, Optimizely or VWO. Of course, it’s also important to formulate and test a hypothesis relevant to your business objectives. Then, based on the results, you decide which version of the page to keep. But how can you be sure your test was successful?

A brief guide to best practices in A/B testing

1. Research phase

In this phase, evaluate your website’s performance and usability. The goal is to see your site from your visitors’ perspective.

Ask yourself questions to help you understand the user experience:

  • What steps does the user take when they come to the website with a specific goal?
  • What are the main hurdles they might face when browsing for a specific product or service?
  • How does the experience differ according to the device they use?
  • Where do the users drop off in the sales funnel?
  • Where do the users leave the website, and why might that be?
  • Where do users abandon your forms?

Tools that help you gather these facts include Google Analytics (exit pages, search keywords, bounce rate, session duration, per-device metrics) and HotJar (screen recordings, heat maps).
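
To make “where do users drop off?” concrete, here is a minimal Python sketch of a funnel drop-off analysis. The step names and session counts are hypothetical placeholders for figures you’d export from your analytics tool.

import pandas as pd

# Hypothetical funnel data: number of sessions that reached each step.
funnel = pd.DataFrame({
    "step": ["product_page", "add_to_cart", "checkout", "payment", "confirmation"],
    "sessions": [10_000, 3_200, 1_500, 1_100, 950],
})

# Share of sessions retained from the previous step...
funnel["step_conversion"] = funnel["sessions"] / funnel["sessions"].shift(1)
# ...and its complement: the drop-off rate, i.e. where users leave the funnel.
funnel["drop_off"] = 1 - funnel["step_conversion"]

print(funnel.to_string(index=False))

The step with the highest drop-off rate is usually the best candidate for your first experiments.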

2. Prioritisation phase

This is when you decide which of your ideas to test, and in what order. Testing too many at once can muddy the results and even hurt your brand image: a customer faced with too many new features and page changes will find it difficult to navigate a page that was previously familiar.

To prioritise, rank your experiments against criteria such as the following:

  • Minimum Detectable Effect (MDE): what’s the minimum improvement you want to detect over the original variant? (See the sample-size sketch after this list.)
  • Page traffic: because testing takes time, start with high-traffic pages that help you get results faster.
  • Business impact: how important is the feature you want to optimise, for example to lead generation?
  • Test difficulty: will the test require too much time or too many technical changes?
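
To see how MDE and page traffic interact, here is a minimal Python sketch that estimates the sample size needed per variant, using statsmodels’ power analysis for comparing two proportions. The baseline conversion rate and MDE below are hypothetical.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05  # hypothetical current conversion rate (5%)
mde = 0.01       # hypothetical minimum detectable effect (+1 percentage point)

# Standardised effect size for comparing two proportions (Cohen's h).
effect = proportion_effectsize(baseline + mde, baseline)

# Visitors needed per variant at a 5% significance level and 80% power.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"Visitors needed per variant: {n_per_variant:,.0f}")

# A page with 1,000 eligible visitors a day, split 50/50, would need
# roughly 2 * n_per_variant / 1,000 days to finish the test.

The smaller the MDE or the lower the traffic, the longer the test runs, which is exactly why these two criteria belong at the top of your prioritisation.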

3. Hypothesis formulation phase

This is a simple but vital part of testing. You begin by identifying a problem and expressing it in the form of a hypothesis. For example, maybe you think you’ll gain more submissions if the address field in your form is optional instead of mandatory.

Your hypothesis would then be “Making the address field optional instead of mandatory will increase the number of form submissions.”

You’d test this hypothesis against its null: “Making the address field optional instead of mandatory will have no impact on form submissions.”

This is your A/B test foundation.
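
Once the test has run, a standard way to evaluate this hypothesis against its null is a two-proportion z-test. Here is a minimal Python sketch using statsmodels, with hypothetical visitor and submission counts (variant A keeps the address field mandatory, variant B makes it optional).

from statsmodels.stats.proportion import proportions_ztest

submissions = [480, 560]      # form submissions in variants A and B
visitors = [10_000, 10_000]   # visitors who saw each variant

z_stat, p_value = proportions_ztest(count=submissions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# If p < 0.05, reject the null: making the field optional changed the
# submission rate. Otherwise, the data is consistent with no effect.
if p_value < 0.05:
    print("Reject the null hypothesis: the change had a measurable impact.")
else:
    print("No significant difference detected.")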

4. Testing phase

Once you’ve formulated your first hypothesis, it’s time to set up a test. Plenty of tools let you run tests on your website without involving developers. Consider using Google Optimize, Optimizely or Monetate.
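
Whichever tool you choose, it needs to split traffic consistently, so that each visitor keeps seeing the same variant for the duration of the test. A common technique is deterministic bucketing by hashing a stable user ID; here is a minimal, illustrative Python sketch (it shows the general idea, not how any particular tool is implemented).

import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into variant 'A' or 'B'."""
    key = f"{experiment}:{user_id}".encode("utf-8")
    # Map the 256-bit hash to a stable number in [0, 1).
    bucket = int(hashlib.sha256(key).hexdigest(), 16) / 2**256
    return "A" if bucket < split else "B"

# The same visitor always lands in the same bucket:
print(assign_variant("visitor-42", "optional-address-field"))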

5. Measuring impact

Before implementing a change, it’s vital to know that the difference you measured is real and not a fluke. In statistics, a common threshold is a 95% confidence level: you accept at most a 5% probability that a difference as large as the one you observed arose by pure chance.

Fortunately, optimisation tools help you interpret test results correctly. For example, Google Optimize will tell you when a clear leader is found, when you can safely end the test, or whether you should keep running it to reach conclusive results.
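
If you want to sanity-check a tool’s verdict yourself, you can compute that 95% interval directly. Here is a minimal Python sketch of a Wald confidence interval for the difference between two conversion rates, reusing the hypothetical counts from the z-test sketch above.

import numpy as np
from scipy.stats import norm

conversions = np.array([480, 560])   # conversions in variants A and B
n = np.array([10_000, 10_000])       # visitors per variant

rates = conversions / n
diff = rates[1] - rates[0]
se = np.sqrt((rates * (1 - rates) / n).sum())

z = norm.ppf(0.975)  # ~1.96 for a 95% confidence level
low, high = diff - z * se, diff + z * se
print(f"Lift: {diff:.4f}, 95% CI: [{low:.4f}, {high:.4f}]")

# If the interval excludes zero, the difference is significant at the 5%
# level; if it straddles zero, keep running the test or accept that no
# effect was detected.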

6. Learning phase

If your test results prove significant, the logical next step is to implement the change. But even if the results are non-significant, don’t consider the test wasted. Integrate its lessons into your overall CRO (conversion rate optimisation) plan and use the findings as a gateway to testing related themes. No impact, or even a negative impact, is still a valuable result.

A/B testing can save your company time and prevent unnecessary changes that could lead to losses.
