When talking to people about A/B testing, I've noticed that there are four (stereo)typical mindsets that prevent companies from successfully using split tests as a tool to improve their conversion funnel.
The first camp's favorite answer to suggestions for website or product improvements is "we'll have to A/B test that" – as in "we should A/B test that, some time, when we've added A/B testing capability". It's often used as an excuse for brushing off ideas for improvement, and the fallacy here is that just because an A/B test is the best way to test an assumption doesn't mean that all assumptions are equally good or equally likely to be true.
Yes, A/B tests are the best way to test product improvements. But if you're not ready for A/B testing yet, that shouldn't stop you from improving your product based on your opinions and instincts.
People from the second group draw conclusions from data that isn't conclusive. I've seen this several times: results that aren't statistically significant, A and B receiving different types of traffic, A and B being tested sequentially rather than simultaneously, only a small part of the conversion funnel being taken into account – these and all kinds of other methodological errors lead to erroneous conclusions.
Making decisions based on gut feelings as opposed to data isn't great, but in this case at least you know what you don't know. Making decisions based on wrong data – thinking that you understand something which you actually don't – is much worse.
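To make the statistical-significance point concrete, here's a minimal sketch of a two-proportion z-test in Python. The visitor and conversion counts are made-up numbers for illustration: variant B shows a 20% relative lift, yet at this sample size the result is noise.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))  # p-value

# Hypothetical data: A converts 50/1000 (5%), B converts 60/1000 (6%).
# A 20% relative lift looks impressive, but:
p = two_proportion_z_test(50, 1000, 60, 1000)
print(f"p-value: {p:.2f}")  # ~0.33, far above the usual 0.05 threshold
```

In other words, with 1,000 visitors per variant a difference this size would show up about a third of the time by pure chance – exactly the kind of inconclusive data the second group mistakes for a result.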
There's a school of thought among designers which says that A/B testing only lets you find local maxima. While I completely agree with my friend Nikos Moraitakis that iterative improvement is no substitute for creativity, I don't see a reason why A/B testing can't also be used to test radically different designs.
Designers have to be opinionated. Chances are that out of the 1000s of ideas you'd like to test, you can only test a handful, because the number of tests you can run to statistical significance is limited by your visitor and signup volume. You need talented and convinced designers to tell you which five ideas out of the 1000s are worth a shot. But then do A/B test those five ideas.
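A quick back-of-the-envelope calculation shows why traffic is the bottleneck. This sketch uses the standard sample-size approximation for comparing two proportions; the baseline rate, target lift, and power are assumptions I've picked for illustration.

```python
from statistics import NormalDist

def sample_size_per_variant(p_base, p_target, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a lift
    from p_base to p_target with the given significance and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return (z_alpha + z_beta) ** 2 * variance / (p_base - p_target) ** 2

# Detecting a lift from a 5% to a 6% signup rate at 80% power:
n = sample_size_per_variant(0.05, 0.06)
print(round(n))  # ~8155 visitors per variant, i.e. ~16,000 per test
```

At 16,000 visitors per test, a site with 50,000 monthly visitors can run roughly three clean tests a month – which is why you need someone opinionated to pick the five ideas worth that budget.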
The more you learn about topics like A/B testing and marketing attribution analysis, the more you realize how complicated things are and how hard it is to get conclusive, actionable data.
If you want to test different signup pages for a SaaS product, for example, it's not enough to look at the visitor-to-signup conversion rate. What matters is the conversion rate of the entire funnel, from visitors all the way through to paying customers. It's quite possible that the signup page which performs best in terms of visitor-to-signup rate (maybe one which asks the user for minimal data input) leads to a lower signup-to-paying conversion rate (because signups are less pre-qualified), and that another version of your signup page has a better overall visitor-to-paying conversion rate. And it doesn't even stop at the signup-to-paying step: you'll also want to track the churn rate of the "A" cohort vs. the "B" cohort over time.
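Here's that scenario in numbers. The cohort figures are invented for illustration: variant B wins the signup step decisively yet loses on the metric that actually matters.

```python
# Hypothetical results for two signup-page variants.
# Each tuple: (visitors, signups, paying customers)
funnels = {
    "A": (10_000, 500, 100),  # longer signup form, better-qualified signups
    "B": (10_000, 800, 90),   # minimal form, more but weaker signups
}

for name, (visitors, signups, paying) in funnels.items():
    signup_rate = signups / visitors   # the metric that's tempting to optimize
    paying_rate = paying / signups     # quality of the signups
    overall = paying / visitors        # the metric that actually matters
    print(f"{name}: visitor->signup {signup_rate:.1%}, "
          f"overall visitor->paying {overall:.2%}")

# B wins the signup step (8.0% vs. 5.0%) but A wins end to end
# (1.00% vs. 0.90%) because its signups are better qualified.
```

And per the churn point above, even the 1.00% vs. 0.90% comparison is provisional until you've watched how each cohort retains over the following months.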
If you think about complexities like these, it's easy to give up and conclude that it's not worth the effort. I can relate to that because, as mentioned above, nothing is worse than making decisions which you think are data-driven but which actually are not. Nonetheless I recommend that you do use split testing to test potential improvements to your conversion funnel – just know the limitations and be very diligent when you draw conclusions.
What do you think? Have you fallen prey to (or seen other people fall prey to) one of the fallacies above? Let me know!