While the term “A/B testing” has become popular in the last couple of years, the discipline as a whole remains largely underutilized. If you manage an eCommerce store, you understand the importance of optimizing your conversion rate, and A/B testing is one of the best strategies for generating more revenue. This post explains what A/B testing is, why it matters, and how you can use it effectively with your WordPress eCommerce store.
A/B testing, for those unfamiliar with the term, is a process in which multiple versions of the same webpage are displayed to visitors simultaneously. For instance, in an A/B test with two versions (A & B), 50% of the traffic sees A and the other 50% sees B. The A/B testing platform records each user’s interactions with the page, and you can monitor the results to determine which variation wins.
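To make the mechanics concrete, here is a minimal sketch of the kind of visitor bucketing a testing platform handles for you behind the scenes. It assumes a WordPress context, and the function and cookie names are hypothetical, not part of any real plugin:

```php
<?php
// Minimal sketch of 50/50 visitor bucketing (a real platform like Optimizely
// does this for you). The variant is stored in a cookie so a returning
// visitor keeps seeing the same version. Call this before any output is
// sent, e.g. on the 'init' hook. Names here are hypothetical.
function wpo_get_ab_variant() {
	if ( isset( $_COOKIE['wpo_ab_variant'] ) ) {
		return $_COOKIE['wpo_ab_variant'];
	}
	$variant = ( wp_rand( 0, 1 ) === 0 ) ? 'A' : 'B';
	setcookie( 'wpo_ab_variant', $variant, time() + MONTH_IN_SECONDS, '/' );
	return $variant;
}
```

The cookie is the important part: if a visitor saw version A yesterday and version B today, their behavior would tell you very little about either version.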
When considering whether you should A/B test, it’s extremely helpful to understand the ‘Why’. I’ve been A/B testing websites for about 3 years and would like to offer you the top three reasons I choose to A/B test.
1. I want to know if it works
I sell (mostly) WooCommerce plugins at wpovernight.com. During the spring of 2014, we ran a test to see if redirecting the user to checkout immediately after adding an item to their cart would boost conversion rate and revenue. The belief was that easier access to checkout would encourage more people to purchase.
Since most people purchased only one plugin at a time, this seemed like a reasonable hypothesis. On the other hand, we had five products, and we honestly had no idea what kind of impact the change would have, so we wanted to be sure to test it.
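For readers who want to try the same variation on their own store, the change itself can be tiny. Here is a sketch for a WooCommerce store; it is not the exact code from our test (our shop runs Easy Digital Downloads), but WooCommerce exposes a filter for exactly this redirect:

```php
<?php
// Sketch of the "straight to checkout" variation, assuming a WooCommerce
// store. Not the exact code from our test (our shop uses Easy Digital
// Downloads). WooCommerce filters the URL it redirects to after add to cart.
add_filter( 'woocommerce_add_to_cart_redirect', function ( $url ) {
	return wc_get_checkout_url();
} );
```

We gave it a try, and here’s what we found.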
First Goal: The Checkout Page
20% more visitors made it to the checkout page. This wasn’t exactly a surprise, since we were automatically sending anyone who added an item to their cart straight to checkout. So far, everything looked good.
Second Goal: Making a Purchase
Here’s where things got a little more interesting. When I looked at the results for our “Made a Purchase” goal, I found that users who were automatically redirected to checkout had a conversion rate 8.3% lower than users who clicked the checkout button themselves. This result surprised me. I expected the conversion rate to be at least the same as before; I certainly did not expect it to be lower.
Third Goal: Revenue per Visitor
The real moment of truth came when I saw the impact on revenue. Redirecting users to checkout after adding to cart had a significant negative impact on revenue, to the tune of an 18% loss. For me, this was evidence enough that we needed to keep users on the page after they added to cart and let them go to checkout when they were ready.
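If you want to measure a revenue goal like this yourself, the testing platform needs to be told each order’s total. As a rough sketch (assuming WooCommerce and Optimizely’s classic JavaScript API, which takes revenue in cents; the event name is a placeholder), it could look something like this:

```php
<?php
// Hypothetical sketch: report order revenue from the WooCommerce thank-you
// page to Optimizely's classic API so a revenue-per-visitor goal can be
// measured. Optimizely expects revenue in cents; 'purchase' is a
// placeholder event name.
add_action( 'woocommerce_thankyou', function ( $order_id ) {
	$order = wc_get_order( $order_id );
	if ( ! $order ) {
		return;
	}
	$cents = (int) round( $order->get_total() * 100 );
	echo "<script>window['optimizely'] = window['optimizely'] || [];";
	echo "window['optimizely'].push(['trackEvent', 'purchase', { revenue: {$cents} }]);</script>";
} );
```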
2. I want to know how well it works
I recently ran a test that I was almost certain would have a positive impact, but I wanted to know how big the impact would be. Our website uses Easy Digital Downloads, so I built a simple lightbox that, after a visitor added an item to their cart, showed them plugins they didn’t already have in it. I was so excited about this test that I stayed up late one night to put it all together and launched it with great expectations.
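The data side of such a lightbox is essentially a query for downloads not already in the visitor’s cart. Here’s a rough sketch using Easy Digital Downloads functions (the wrapper function name is mine, not from our actual implementation):

```php
<?php
// Rough sketch of the lightbox's data layer: fetch Easy Digital Downloads
// products ('download' post type) that are not already in the visitor's
// cart. The wrapper function name is hypothetical.
function wpo_get_cross_sells( $limit = 3 ) {
	$cart    = edd_get_cart_contents(); // array of cart items, or empty
	$in_cart = $cart ? wp_list_pluck( $cart, 'id' ) : array();
	return get_posts( array(
		'post_type'      => 'download', // EDD's product post type
		'post__not_in'   => $in_cart,
		'posts_per_page' => $limit,
	) );
}
```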
To my surprise, the whole experiment was basically a wash. I had a slight increase in users adding an item to their cart, a slight increase in people visiting their cart, and a slight decrease in people making a purchase. There certainly may have been other intangible benefits (i.e. exposing customers to other plugins they later came back to purchase), but there was no measurable impact. Even when you’re certain a change is going to make a positive difference, it’s always good to test, if for no other reason than to confirm how awesome you are.
3. I want to know how I can make it better
Even if your A/B test doesn’t give you the results you’re looking for, it typically gives you helpful information that can point you toward another iteration of the same test, or give you completely different ideas to test.
I recently started a test to display the number of downloads we’ve delivered in the banner at the top of our site. The question was: will showing people our 130,000+ downloads make them trust us more? I created the banner seen in the image below and ran it against our standard message (the same message, just without the download count).
It performed exceptionally well, showing a 10% increase in purchases. See the image below for all the details.
I was excited at the possibilities of the test so I updated the number to 150,000, since we’d recently passed that milestone, and kept testing. As I watched over the next couple of weeks, the conversion rate steadily dropped. When I looked closer I found that the 150,000 actually had a negative impact instead of a positive one.
There are a variety of reasons these results could occur. Perhaps 150,000 seems like a fake number to some people, while 130,000 seems more real. Either way, this discovery was huge, and it has led to another round of tests to determine how best to display the download number.
Pick the Right Tool
The two most popular A/B testing platforms are Optimizely and Visual Website Optimizer (VWO). I have personally used Optimizely for the last three years, so I can’t speak to the quality of VWO. Optimizely has two plans – Free (sufficient for most small eCommerce sites) and Enterprise. VWO has several more pricing options, starting at $9/month, and may be a better fit for a medium to large eCommerce site.
Understand Your Traffic
When A/B testing, you need to make sure you understand your traffic. If you only get a few hundred visitors per month, focus on tests that will have a significant impact. You really need 100+ conversions on any single variation to be able to declare a winner, so think about how long it would take you to get there at your baseline conversion rate and decide whether you want to wait that long for a result.
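As a quick back-of-the-envelope check (all numbers below are placeholders, not benchmarks):

```php
<?php
// Back-of-the-envelope test-duration estimate. All numbers are placeholders;
// plug in your own traffic and baseline conversion rate.
$monthly_visitors   = 400;   // visitors per month
$baseline_rate      = 0.02;  // 2% of visitors convert
$variations         = 2;     // A and B
$target_conversions = 100;   // per-variation rule of thumb from above

$visitors_needed = ( $target_conversions / $baseline_rate ) * $variations;
$months_needed   = $visitors_needed / $monthly_visitors;

printf( "~%d visitors needed; at %d/month that's about %.1f months.\n",
	$visitors_needed, $monthly_visitors, $months_needed );
```

With those example numbers, two variations would need roughly 10,000 visitors, which is about two years of waiting at 400 visitors a month. That’s why low-traffic stores should stick to changes big enough to move the needle dramatically.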
Understanding your traffic also means knowing who shops at your store and running tests accordingly. Are your customers typically stay-at-home moms? Then maybe you need to test a Pinterest button. Are they teenage boys? Then maybe Snapchat or Instagram is more appropriate.
Make Your Tests Count
If you’re just starting out with A/B testing, practice on small changes like button colors until you get the hang of it. The real value, though, comes from testing things like social icon placement, related product placement, and coupons at checkout. Go for the tests that can make a big difference first, and if you have extra time, consider testing things like copy changes and colors.
A/B testing is an extremely useful way to boost your conversion rate, and hopefully this post has given you something to get started with. If you have any questions, please feel free to comment below.
So for me the question is not whether A/B testing is useful or not, it’s simply “how do you do it?”. I’d love to see a follow up post on how to set this up for WooCommerce.
Hi Dan, that sounds like a great idea! I personally have only used Optimizely for my A/B testing, but I could definitely put together a pretty thorough guide for using it with WooCommerce.
Hi Dan! I’m happy you asked how one can A/B test a WooCommerce setup. I’m one of the co-founders of Nelio Software, a company that is currently working on a native A/B testing service for WordPress. Right now, our tool is able to test almost all standard WordPress elements: pages, headlines, widgets, and themes (menus will be available soon). As such, there are a few tests you may already be able to run in WooCommerce with Nelio. Please feel free to try Nelio A/B Testing for free and discover how it works!
During 2015, we’ll start integrating WooCommerce-specific tests. If you’d like to work with us and help us define the set of e-commerce tests you’d be interested in, we’d be more than glad to hear from you (just contact us)!
Hi Jeremiah,
Interesting post.
I’ve run a number of A/B tests over different platforms (never on WP though) and something I struggle with is letting go of the test once it has run its course and a statistically significant winner is selected.
For example, with your test about forwarding users to checkout once an item has been added, I would spend days afterwards trying to work out (without an additional test) exactly what it was that put customers off when they got pushed straight to checkout. Obviously it’s not feasible to ask them, and even if it were, the data would be qualitative rather than quantitative.
I guess it comes down to a personal flaw of wanting to know EVERYTHING.
No questions, just a passing comment.
Cheers,
Matt