5 Biggest A/B Testing Mistakes to Avoid – Improve your Digital Advisors

A/B testing, evaluating two variants against each other, is the most effective way to increase the usefulness and profitability of digital applications with little effort.

That’s why, here at SMARTASSISTANT, we’re big on A/B testing. It allows for immediate feedback from your target audience on how well you are advising and guiding shoppers to the right products.

Much like any other sales associate, a digital advisor requires continuous training so that it can give the best advice and keep up with an ever-changing audience. A/B testing is this training.

In this article, I’ll share the biggest A/B testing mistakes to avoid.

1. Not Utilising A/B Testing

First off, the biggest mistake you can make is simply not A/B testing.

Many companies have great excuses as to why they’re not A/B testing or simply don’t see the need for it. Their solution is live, customers are using it, project done.

Not quite.

Customer habits and expectations are changing rapidly today, as are products and technologies. Whenever you are in the business of serving people, you can’t afford to stand still when everything around you is changing. With A/B testing, you are able to detect these subtle changes, make adjustments, and evolve accordingly.

In my time as a conversion rate optimization consultant, I have seen simple but smart adaptations based on A/B testing results increase conversion rates by over 9% or revenue by 15%.

Neglecting the need to A/B test means leaving money on the table in almost every scenario.

However, there are a few acceptable excuses as to why you cannot yet A/B test:

  1. You haven’t addressed your site/advisor usability yet.
    Without this base understanding, you won’t know what the metrics you are looking at mean.
  2. You’re not clear on analytics (and have no one on board that is).
    At the end of the day, if you don’t know what you’re looking at, you can’t do anything meaningful with it.
  3. You have little traffic.
    If you’re only making a handful of sales a month, random data swings are common. That makes testing unreliable, as you won’t be able to draw meaningful conclusions from it.

These are the only situations that should hold you back. Once you’ve spent some time addressing them, you can then begin confidently A/B testing.

Here’s how to go about starting A/B testing:

  1. Make a list of all items you know are worth testing.
    Questions, answers, CTAs, images, Q&A flow, etc.
    Tip: The wording of the questions can play a huge role in how customers respond to your advisors.
  2. Create a concise testing roadmap and define a hypothesis for each separate test.
    Blindly taking a stab at changing something will not yield meaningful results. Hypotheses define your assumptions and will help you focus.
    Tip: You will always get the best data by only having one hypothesis per variation.
  3. Define the measure of success.
    Without knowing your goal, you’ll never know when you reach it. Testing just “to see what the impact is” is not the way to go.
    Tip: When evaluating digital advisors, useful metrics to consider are the conversion rate, click-out success rate (share of sessions that contain a click-out), bounce rate (share of sessions that bounced after opening the advisor) and click-through success rate (share of sessions that reach the advisor’s final page). It is also very important to choose only one key metric to improve per A/B test. A minimal sketch of how these metrics can be computed follows below.
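
To make these metrics concrete, here is a minimal sketch in Python of how they could be computed from raw session records. The `Session` fields (`converted`, `clicked_out`, `bounced`, `reached_result_page`) are illustrative assumptions, not a specific analytics schema.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One advisor session; field names are illustrative assumptions."""
    converted: bool            # session ended in a purchase
    clicked_out: bool          # session contained a click-out to a product page
    bounced: bool              # visitor left right after opening the advisor
    reached_result_page: bool  # visitor reached the advisor's final results page

def advisor_metrics(sessions: list[Session]) -> dict[str, float]:
    """Share-of-sessions metrics for one advisor variant."""
    n = len(sessions)
    if n == 0:
        return {}
    return {
        "conversion_rate":    sum(s.converted for s in sessions) / n,
        "click_out_rate":     sum(s.clicked_out for s in sessions) / n,
        "bounce_rate":        sum(s.bounced for s in sessions) / n,
        "click_through_rate": sum(s.reached_result_page for s in sessions) / n,
    }

# Example with made-up sessions:
print(advisor_metrics([
    Session(True, True, False, True),
    Session(False, False, True, False),
]))
```

Running the sessions of both variants through the same function keeps the comparison between A and B consistent.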

2. Focusing On Small Details First

No detail is too small for A/B testing, but there is such a thing as putting the cart before the horse. Addressing small conversion details before potentially glaring issues would be doing just that.

The color of your buttons, for example, isn’t important enough to test first.

The wording of your questions and answers, the integrated images, the position of buttons, and the fluidity of the overall experience are all far more notable elements to focus on.

A/B testing them can yield some interesting insights and impressive results.

Here’s how to avoid this mistake:

  1. Create a wireframe of your advisor and highlight the testable elements.
  2. Narrow down which elements impact decision-making the most.
    As you test, you’ll realize some elements are more or less important to your audience than others. Make notes of these!
  3. Estimate the traffic you’ll need to reach a statistically significant uplift (a sample-size sketch follows after this list).
    Ensure your sample size is large enough to draw conclusions before you launch the test and before implementing any changes.
  4. Use information gathered by previous tests to ensure the most impactful elements are tested.
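
For step 3, a rough rule of thumb is the standard two-proportion sample-size formula, which is essentially what online sample-size calculators compute. Below is a minimal sketch; the 5% baseline and 1-point minimum detectable lift are assumed example numbers, not benchmarks.

```python
from statistics import NormalDist

def sample_size_per_variant(baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant to detect an absolute lift
    of `mde` over a `baseline` conversion rate (two-sided z-test)."""
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for 80% power
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / mde ** 2) + 1

# Assumed numbers: 5% baseline conversion, hoping to detect a 1-point lift.
print(sample_size_per_variant(0.05, 0.01))  # roughly 8,000 visitors per variant
```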

3. Changing Multiple Variables at Once

When not handled properly, changing multiple elements at once can completely nullify your A/B testing efforts.

This is because you won’t be able to attribute your test results to any particular change.

For example, if you change the first question in your digital advisor and add new images at the same time, you won’t know which adaptation led to changes in user behavior.

Without clarity, you can’t proceed with the data you’ve gathered.

The Simple Solution

Test one change at a time. Refine that change. Finalize that change. Then move on to the next element and repeat the process. It’ll take longer, but it’ll give you considerably better data to base decisions on.
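
One practical way to enforce this discipline is to define every test as a single hypothesis with exactly one changed element, and to bucket visitors deterministically so each person always sees the same variant. The sketch below is a generic illustration; the experiment structure and hashing scheme are assumptions, not the API of any particular testing tool.

```python
import hashlib

# One test = one hypothesis = one changed element; everything else
# stays identical between the two variants.
EXPERIMENT = {
    "name": "first-question-wording",
    "hypothesis": "A plainer first question lowers the advisor bounce rate.",
    "variants": {
        "A": {"first_question": "What will you mainly use the camera for?"},
        "B": {"first_question": "How do you plan to use your new camera?"},
    },
}

def assign_variant(visitor_id: str, experiment_name: str) -> str:
    """Deterministically bucket a visitor into A or B so the same visitor
    always sees the same variant across sessions."""
    digest = hashlib.md5(f"{experiment_name}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("visitor-123", EXPERIMENT["name"]))
```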

4. Not Using Large Enough Sample Sizes

The belief that A/B testing with small sample sizes can produce useful data is one of the biggest misconceptions in A/B testing culture.

The hard truth is that even if the results of a small sample size are overwhelmingly one-sided, it means little. Small sample sizes make it easy to get hit with data anomalies.

Evan Miller created an A/B testing tool, the Sample Size Calculator, to help you work out the optimal sample size for your A/B test. It tells you roughly how large a sample you need before declaring a winning or losing variation.
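
For reference, the kind of check behind such a calculator can be sketched as a simple two-proportion z-test on the observed conversions of each variant; the counts below are made-up example numbers.

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Made-up counts: 400 conversions out of 8,000 sessions vs. 460 out of 8,000.
p = two_proportion_p_value(400, 8000, 460, 8000)
print(f"p-value = {p:.3f}")  # only declare a winner once this clears your threshold
```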

5. Not Digging Deeper Into Test Data

A/B testing produces much more data about your advisor’s performance than is apparent at first sight.

A deeper look into the data may reveal other points worth considering: for example, that returning visitors respond better to certain questions in your advisor than new visitors do, or that people from different regions react differently to certain images.

This kind of information and similar signals are incredibly important when it comes to interpreting A/B test results.

How to stop missing important data:

  1. Understand that not every visitor is equal.
    Segmenting your visitors based on certain criteria (e.g. new user vs returning visitor) will enable you to take a deeper look into your testing results.
  2. Review what performs well and what doesn’t with different user groups, using segmented analytics.
    Adapting certain aspects of your advisor based on user behavior is what A/B testing is all about. This requires taking into account who your users are and spending time to understand why certain variants perform better with certain user groups (a minimal sketch of such a segmented comparison follows below).
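
As a minimal illustration of that segmented comparison, the sketch below groups raw test results by an assumed `segment` field (e.g. new vs. returning visitor) and reports conversion per variant within each segment; the record structure is illustrative, not a specific analytics export.

```python
from collections import defaultdict

# Each record: (segment, variant, converted) - an illustrative structure.
results = [
    ("new", "A", True), ("new", "B", False),
    ("returning", "A", False), ("returning", "B", True),
    # ... many more sessions
]

def conversion_by_segment(records):
    """Conversion rate per (segment, variant) pair."""
    counts = defaultdict(lambda: [0, 0])  # [conversions, sessions]
    for segment, variant, converted in records:
        counts[(segment, variant)][0] += int(converted)
        counts[(segment, variant)][1] += 1
    return {key: conv / total for key, (conv, total) in counts.items()}

for (segment, variant), rate in sorted(conversion_by_segment(results).items()):
    print(f"{segment:<10} {variant}: {rate:.1%}")
```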

I hope this article was helpful. All that’s left is to begin your journey of A/B testing and improving your sales.

However, if you’re still unclear or struggling – feel free to contact me. I’m always happy to answer questions on the topic.

