A/B testing is great for simple, one-off experiments, like split testing a new hero image design. You begin the test, and once your sample size is large enough you check whether the difference in performance is statistically significant; if it is, you roll out the winner.
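To make that concrete, here is a minimal sketch of the kind of significance check that typically decides such a test. It is illustrative only: the visit and conversion numbers are made up, and a two-sided z-test at the 5% level is just one common choice.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: 500 conversions from 10,000 control visits
# vs 580 conversions from 10,000 variant visits.
z, p = two_proportion_z_test(500, 10_000, 580, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # z = 2.50, p = 0.012: roll out if p < 0.05
```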

However, in the real world, things are constantly changing. The visitors who came to your site during the testing phase aren’t necessarily the same as the ones you are getting today; the devices they use are changing. Customers might be looking for different products as tastes evolve. Even the products you sell might be different, depending on the season or what is in stock.

No matter how well you set up and analyse your test, a single one-off test can’t account for all of those factors. The winning variation you launched may no longer be generating any uplift, or worse still, it may now be hurting your performance.

What can you do about this? If the opportunity was large enough to be worth testing in the first place, it is worth re-testing, and perhaps introducing new challenger variations as well.

But re-testing can be expensive. Each time you run a 50:50 split test, you are potentially wasting 50% of your customer visits on a variation that you expect to perform worse! Add the cost of setting up, running, monitoring and rolling out another test, and you can see why many companies simply don’t bother to re-test.

What’s the alternative? At Incisively, we believe a continuous optimisation approach provides a better and less risky way to test and learn in many situations.

Continuous optimisation uses machine learning algorithms that can identify changes in the performance of the different variations being tested, and automatically adjust the amount of traffic being served to each one.
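One common family of algorithms for this kind of adaptive traffic allocation is the multi-armed bandit. The sketch below is not Incisively’s implementation, just an illustrative Thompson-sampling allocator in which old evidence decays over time so the estimates can track a changing world; the class name, methods and decay parameter are all hypothetical.

```python
import random

class AdaptiveAllocator:
    """Thompson sampling over binary conversion outcomes.

    Each variation's conversion rate gets a Beta posterior; accumulated
    evidence is decayed towards the prior so recent visits count for more,
    which lets the allocator adapt when performance drifts.
    """

    def __init__(self, variations, decay=0.999):
        # Start every variation at the uniform Beta(1, 1) prior.
        self.stats = {v: [1.0, 1.0] for v in variations}  # [alpha, beta]
        self.decay = decay

    def choose(self):
        # Draw one plausible conversion rate per variation from its
        # posterior and serve the variation with the highest draw.
        samples = {v: random.betavariate(a, b) for v, (a, b) in self.stats.items()}
        return max(samples, key=samples.get)

    def record(self, variation, converted):
        # Shrink all accumulated evidence towards the prior, then add
        # the fresh observation.
        for ab in self.stats.values():
            ab[0] = 1.0 + (ab[0] - 1.0) * self.decay
            ab[1] = 1.0 + (ab[1] - 1.0) * self.decay
        self.stats[variation][0 if converted else 1] += 1.0

    def add_variation(self, variation):
        # A new challenger starts at the prior; its wide posterior means
        # it will be sampled highest often enough to be explored.
        self.stats[variation] = [1.0, 1.0]
```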

The key difference from traditional split testing is that with a continuous approach, Incisively never declares a “winner”; instead, it constantly adapts to a changing world, so once you begin an optimisation you can keep it running indefinitely.

As the algorithms gain more experience of how the different variations perform, they become more confident about which variations to show. New variations can be introduced at any time, and the algorithms will explore them to see how they perform.
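Continuing the illustrative sketch above, introducing a new challenger is just a matter of registering it; the allocator’s own exploration takes care of the rest:

```python
allocator = AdaptiveAllocator(["control", "hero_v2"])

# Serve and learn, one visitor at a time, indefinitely.
variation = allocator.choose()
allocator.record(variation, converted=False)

# Months later, introduce a challenger without restarting anything.
allocator.add_variation("seasonal_hero")
```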

There are several benefits to this approach:  

  • Set up tests just once, then let the algorithms do the hard statistics to determine which variations to show more frequently.

  • Introduce new variations easily—without having to start all over again.

  • Get more conversions from day one—no need to wait until the end of a pre-determined testing period to start optimising.

But continuous optimisation is not a panacea; just as with any A/B test, you need to ensure it is set up correctly and is appropriate for the business purpose.

Contact us to find out more about continuous optimisation and how you can benefit.