Chris Gray
14 Nov 2017
2 min read
A/B Testing: We Can Do Better
A/B testing is something we come across more and more, typically in organisations with greater digital maturity. While I love A/B testing, too often I see firms using a shotgun approach: lacking the genuine insight needed to create meaningful hypotheses, they fire off tests and hope to hit a random target. While this may sound like a rant, I think it is important to consider our ways of working in the digital space and ensure that we gain maximum value from our efforts.
So I was interested to come across this article on the Marvel App blog with a wonderfully click-baity title: 'A/B Testing – You’re Doing It Wrong'.
The article includes some interesting findings:
Only 10% of experiments resulted in actionable change — formally releasing a new version of a page or feature
50% of teams could not make decisions from A/B testing experiments due to inconclusive or poorly measured data
This sums it up nicely: “Companies may be running A/B tests too frequently for too little time, contributing to a high failure rate that makes A/B test results less valuable and meaningful.” Note that the above data comes from a small and potentially non-representative sample, or as the author states: “This next dataset is from a qualitative and quantitative A/B testing survey of 26 A/B testing practitioners conducted from May 1 to May 30 in 2016 (Northwestern, IDS — Justin Baker, 2016). While this is not the end-all be-all of surveys, it could still give us some meaningful insights.” While the sample may not be perfect (when are they?), the findings reflect my personal experience.
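To make the “too little time” point concrete, here is a minimal back-of-the-envelope sketch (in Python, assuming scipy is available) of the standard two-proportion sample-size calculation. The baseline conversion rate, expected lift and traffic figures are illustrative assumptions of mine, not numbers from the article.

```python
# A minimal sketch of the sample-size arithmetic behind "too little time":
# how many visitors each variant needs before a test can be conclusive.
# The rates and traffic below are illustrative assumptions only.
from scipy.stats import norm

def visitors_per_variant(p_base, p_variant, alpha=0.05, power=0.80):
    """Two-sided, two-proportion sample size per variant."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for significance
    z_beta = norm.ppf(power)            # critical value for statistical power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    return (z_alpha + z_beta) ** 2 * variance / (p_base - p_variant) ** 2

# Example: detecting a lift from a 4% to a 5% conversion rate.
n = visitors_per_variant(0.04, 0.05)
print(f"~{n:,.0f} visitors per variant")                  # roughly 6,700
print(f"~{n * 2 / 2000:.0f} days at 2,000 visitors/day")  # roughly a week
```

Even a modest lift on a low baseline rate demands thousands of visitors per variant, which is why tests cut short so often land in the “inconclusive” bucket.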
I believe that at the heart of poor A/B testing is a lack of data triangulation: drawing on multiple sources of data to understand the user experience, and using that understanding to create hypotheses that can be tested. Some of the most meaningful A/B tests I have seen were those where qualitative data was used to understand a problem, and a solution was then identified and tested.
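Once a hypothesis-driven variant has run its course, deciding whether the result is conclusive is straightforward. A minimal sketch (Python, assuming statsmodels is installed; the conversion counts are invented for illustration):

```python
# A hypothetical follow-on sketch: a two-proportion z-test shows whether an
# observed difference is conclusive, or falls into the "can't make a
# decision" bucket the survey describes. Counts below are made up.
from statsmodels.stats.proportion import proportions_ztest

conversions = [210, 247]   # conversions in control, variant
visitors = [5000, 5000]    # visitors per variant

z_stat, p_value = proportions_ztest(conversions, visitors)
if p_value < 0.05:
    print(f"Conclusive (p={p_value:.3f}): ship the winning variant")
else:
    print(f"Inconclusive (p={p_value:.3f}): keep testing or revisit the hypothesis")
```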
We find that combining A/B testing with some form of qualitative research (such as journey mapping research) is ideal for understanding the 'why' of an issue, and also for informing test hypotheses.
What are your thoughts?
Chris Gray
Managing Director & Principal Consultant
Chris is a leader in the Human Centred Design field with a 20+ year track record of improving customer interactions with some of Australia’s largest organisations. He is a strategic thinker who brings a calm and considered approach to tackling complex problems. An accomplished workshop facilitator, Chris excels at engaging with senior stakeholders and guiding projects to success. Chris has expertise in user research, service design and embedding Human Centred Design within organisations.