Why You Should Run A/B Tests

What it really boils down to is that there is no perfect way to determine what it is your customers are looking for without asking them yourself.

Improving the customer’s experience with a site requires changes, but deciding which changes those need to be is difficult without first obtaining data. A/B testing allows companies to make careful, data-driven changes to a website that help them better understand what makes their customers tick. Basically, split testing is the only way to be certain about which changes will improve success metrics.

A/B testing allows you to minimize risk and maximize gains. Not only will it show you if one variant is working better than another, it will help stop the bleeding if something is a complete disaster (since only a fraction of consumers will see it and poorly performing pages are phased out).
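The traffic-splitting idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not any particular tool's implementation: only a small fraction of visitors is routed to the untested variant, so a badly performing page reaches few people.

```python
import random

def assign_variant(rollout_fraction=0.1):
    """Return 'B' for roughly `rollout_fraction` of visitors, else 'A'.

    Hypothetical sketch: the 10% default is an illustrative choice,
    not a recommended setting.
    """
    return "B" if random.random() < rollout_fraction else "A"

# Simulate 10,000 visits and count how many saw each variant.
counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[assign_variant()] += 1

print(counts)  # roughly 90% of visitors see A, 10% see B
```

If variant B tanks, only about a tenth of your audience ever saw it; if it wins, you can increase the rollout fraction and keep measuring.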

There is a near endless stream of successful case studies where companies have increased desired metrics with simple tweaks to web copy, placement, or design.


Advantages and limitations of A/B tests

The most widely used and reliable method of evaluating the performance of two variants, A/B tests are used continuously by companies to further refine their websites. Their very nature (being binary: A vs B) makes it relatively easy to collect sufficient data in a short time. The advantages are clear:

  • Simple to implement
  • Low barrier to entry
  • Early feedback
  • Reliable results (it's hard to argue with the results if a variation comes out miles ahead)
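To make "sufficient data in a short time" concrete, here is a rough sample-size sketch. The formula is the standard two-proportion approximation for ~80% power at a 5% significance level; the conversion numbers are made up for illustration.

```python
from math import ceil

def sample_size_per_variant(baseline_rate, minimum_lift,
                            z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed in each of A and B to detect the lift.

    Hypothetical sketch: z_alpha/z_beta correspond to a two-sided 5%
    significance level and 80% power.
    """
    p = baseline_rate + minimum_lift / 2          # average rate under the lift
    n = 2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / minimum_lift ** 2
    return ceil(n)

# e.g. detecting a jump from a 4% to a 5% conversion rate
needed = sample_size_per_variant(baseline_rate=0.04, minimum_lift=0.01)
print(needed)  # a few thousand visitors per variant
```

Because there are only two arms, a modest-traffic site can reach numbers like this within days or weeks, which is exactly why the binary A-vs-B design gives early feedback.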

A major plus is that you can build prototypes of complex changes rather than wasting time fully implementing them before you’re certain they’ll get the job done. With A/B testing, you can initiate small-scale testing, gather some data, and scale it from there. It’d be much harder to convince someone to initiate large-scale testing that takes months or more to provide tangible feedback.

It isn’t without limitations, however.

  • Interpreting the results isn’t always straightforward. It’s up to you to infer why something did or did not work (e.g., variant A outperformed variant B, but you don’t know why)
  • It can only tell you how users are reacting to your site, not whether or not they are the “right” or best users to be showing your site to
  • Tests must be subject to the same variables to give reliable feedback
  • Preparing variations requires work. More tests = more content, possible development, and further maintenance (maintaining a client’s website is extremely important, by the way)
  • Accuracy diminishes as the number of variables increases
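Evaluating whether a winner really won comes down to a significance test. A common choice for binary conversion data is a two-proportion z-test; the sketch below uses only the standard library, and the visit and conversion counts are invented for illustration.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, visits_a, conv_b, visits_b):
    """Return (z score, two-sided p-value) for the difference in
    conversion rates between variants A and B."""
    p_a, p_b = conv_a / visits_a, conv_b / visits_b
    pooled = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: A converted 200/5000 visits, B converted 260/5000.
z, p = two_proportion_z(conv_a=200, visits_a=5000, conv_b=260, visits_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Note that a test like this assumes exactly one comparison; running many variants or many metrics at once inflates the false-positive rate, which is the last limitation above in statistical terms.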

There’s always room for improvement no matter how successful your website is, but implementing changes to a page is often expensive, time-consuming, and risky. Making a change on a hunch in the hopes of increasing conversions is like taking a shot in the dark. A/B testing allows companies to roll out prototypes of the proposed changes and see which ones work best by evaluating their performance with the people who matter most: your customers. By implementing changes (each of varying difficulty) based on empirical evidence, agencies can hone websites down to exactly what works.

Knowing what A/B testing is and how it works is great, but the next step is to find out how to approach tests with the right mindset, run them properly, and evaluate the results. Find out how to do exactly that in part 2.


Continue to Part II: How to Run an A/B Test and Evaluate the Results