How does A/B Testing Work?
A/B testing is deceptively simple. In a nutshell, it is a test that pits two versions of a web page against each other in a battle for supremacy. The versions are shown at random, and whichever one customers like best is the winner. At its most basic level, it is an experiment containing two variants: the control and the variation (also known as A and B…surprise!). Here’s how a basic test works:
- You create a variation with the desired changes to your content
- You create the experiment and run the variation against the original
- 50% of your visitors are randomly shown A and the other 50% B
- The performance of each is measured over a set length of time
- Whichever one performs better (for example, a higher email signup rate) wins!
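The steps above can be sketched in a few lines of code. This is a hypothetical simulation, not any real testing tool: the 50/50 split, the signup rates, and the visitor counts are all assumptions made up for illustration.

```python
import random

def assign_variant():
    """Randomly assign a visitor to the control (A) or the variation (B).

    This is a simple 50/50 coin flip; real testing tools often hash a
    visitor ID instead, so the same person always sees the same version.
    """
    return "A" if random.random() < 0.5 else "B"

# Simulate the experiment: tally visits and signups for each variant.
random.seed(42)
counts = {"A": {"visits": 0, "signups": 0}, "B": {"visits": 0, "signups": 0}}
true_signup_rate = {"A": 0.10, "B": 0.12}  # assumed rates, illustration only

for _ in range(10_000):
    variant = assign_variant()
    counts[variant]["visits"] += 1
    # Pretend each visitor signs up with the variant's underlying rate.
    if random.random() < true_signup_rate[variant]:
        counts[variant]["signups"] += 1

for variant, c in counts.items():
    rate = c["signups"] / c["visits"]
    print(f"{variant}: {c['visits']} visits, signup rate {rate:.1%}")

# "Whichever one performs better wins" — here, the higher signup rate.
winner = max(counts, key=lambda v: counts[v]["signups"] / counts[v]["visits"])
print("Winner:", winner)
```

With enough visitors, the measured rates settle close to each variant’s true rate, which is exactly why a test is run over a set length of time rather than called after a handful of visits.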
Whether your goal is to get people to sign up for an email newsletter, purchase a product, donate to a cause, or adopt a shelter dog, your site is meant to gently nudge visitors in that direction. A/B testing allows you to optimize your site to do just that. It is a powerful tool for making informed decisions before implementing major (and therefore risky) changes.
Sign me up! A simple A/B test example
An online marketing company is desperately trying to figure out why so few of its visitors are signing up for its services despite the promotion of a free trial. Since the call to action (CTA), the part of the content that suggests an action for the visitor to take, is so important for getting customers to sign up, they form the hypothesis that the text isn’t driving the point home and therefore doesn’t compel visitors to take the free trial. They decide to run an A/B test.
- They create a variation where the CTA says “START YOUR FREE TRIAL NOW”
- They set up the experiment and set the original as variation A and the new adjusted page as variation B
- They begin the experiment by showing half of their visitors A and the other half B
- They evaluate the performance of the variation against the original and use the data to decide whether to keep the new CTA. If they aren’t satisfied, they can use the information gleaned to refine the CTA further and test again.
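That final evaluation step usually means asking whether B’s higher signup rate could just be luck. One common way to check is a two-proportion z-test; the sketch below implements the textbook version of that test, with entirely hypothetical visit and signup numbers for the free-trial experiment above.

```python
from math import sqrt, erf

def two_proportion_z_test(signups_a, visits_a, signups_b, visits_b):
    """Rough check of whether B's signup rate beats A's by more than chance."""
    p_a = signups_a / visits_a
    p_b = signups_b / visits_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (signups_a + signups_b) / (visits_a + visits_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    # One-sided p-value from the normal CDF: the chance of seeing a gap
    # this large if the CTA change actually did nothing.
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return p_b - p_a, p_value

# Made-up numbers: 2,000 visitors per variant, 100 vs. 140 signups.
lift, p = two_proportion_z_test(signups_a=100, visits_a=2000,
                                signups_b=140, visits_b=2000)
print(f"Observed lift: {lift:.1%}, one-sided p-value: {p:.3f}")
```

A small p-value (commonly below 0.05) suggests the new CTA genuinely performs better; a large one means the team should keep testing rather than declare a winner.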