Got a good design? Vibrant images? Stuff people want to know about? Strong CTAs?
You might be surprised to know that even the most eye-catching website elements can fall short of your expectations.
It takes more than design, colors, and calls-to-action to get conversions (or at least some form of attention). Despite all those hours of research, writing, and designing, and all the money you pour into your project, the lights may look like they're on, but in reality, nobody's visiting your home page.
Using software or a service to conduct an A/B split test can go a long way toward figuring out what's working on your site and what's wasting your digital real estate. People respond to different triggers, and they aren't always the ones you think. That is, you don't know what you don't know when it comes to how others will respond to your content.
If you want to ensure your audience will love what they see, split your audience, test a few elements against each other, and let engagement do the talking. Here’s how you can set up your very first A/B split test:
Decide What You Want to Test
Truthfully, you can A/B test any type of content you create, whether it’s on your website or something you’re posting offsite. Some companies make it a common practice to test different elements, such as blog titles or landing page copy. Others only decide to test if their first version isn’t getting them the response they expect.
Either way, it's up to you how often you conduct A/B testing, though companies that make it a regular practice may find their content more effective as a result.
First, determine if you will be testing onsite or offsite content. If you are testing offsite material, you’ll likely be testing an ad or email. For onsite testing, your test subject could be a multitude of things: landing pages, blog posts, layouts, pop-ups, content upgrades, or another sales-related piece.
Once you determine what you're testing, ask yourself what you want to know about it. Do you want to see which ad copy brings in the most conversions? Which image gets more click-throughs? Whether you should move the CTA button on your landing page or make it a different color?
Some of these things may sound trite, but you might be surprised at how they impact your overall results. There’s a lot you can learn from simple changes if you know how to test them.
Choose Elements to Test
Once you set your goals, choose which elements to test. Here you’ll need a control (something that’s already working) and a variation (the change you want to make that will challenge the control). There’s usually only a small difference between the two.
Determine what your control is capable of. What’s the current conversion rate? This gives you a comparison point to see if your variation performed better or worse than what you’re already doing.
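To make that comparison concrete, here is a minimal sketch of how you might compare a control's conversion rate against a variation's. The visitor and conversion counts are illustrative, not from the article, and the two-proportion z-test is one common way (among several) to judge whether the difference is more than noise:

```python
import math

def conversion_rate(conversions, visitors):
    """Conversion rate as a fraction of visitors."""
    return conversions / visitors

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing variation (B) against control (A).

    A positive z means the variation outperformed the control;
    |z| > 1.96 corresponds to roughly 95% confidence that the
    difference is real rather than random noise.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # combined rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative numbers: control converted 50 of 2,000 visitors,
# the variation converted 70 of 2,000.
z = z_score(50, 2000, 70, 2000)
```

With these made-up numbers the variation looks better (3.5% vs. 2.5%), but the z-score falls just short of the usual 95% threshold, which is exactly why knowing your baseline and collecting enough volume matters.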
For instance, if you are testing a Facebook ad, you might want to test a new headline, call to action, image, or ad copy to see if one gives you better results than something you’ve already used.
For an onsite landing page, you might consider testing different headlines, images, button colors, button text, button shape, on-page copy, the offer itself, or the visual location of the offer, button, or ad copy. It helps to make a list of all the potential variables you can test, then sort through the list until you determine which elements will actually be tested.
It’s best to choose only one slight change at a time to ensure you know what’s driving the results. If you have an orange button with CTA A on one landing page and a blue button with CTA B on another landing page, you won’t know for sure if the button color or the CTA is creating better engagement.
If you have several items you want to test, you can expand your A/B test into an A/B/C test, where you compare A vs. B, B vs. C, and A vs. C. This means you're still testing one element against another, but it helps you expand your research a bit.
However, keep in mind that the simpler you can make your A/B testing, the less interpretation you’ll need. For the most accurate insights, it’s best to limit your options.
Determine How Long You Should Test
It can be tricky to determine how long you should test each element before making a decision. You want to ensure you give your content enough time to be seen, but you also need to know as soon as possible what’s working so you can make the necessary changes and allow it to continue working for you.
How long you need to test largely depends on what you're testing. If you're sending a one-time email, you'll have less time for A/B testing than if you were running a week-long campaign on Facebook.
It also depends on how much volume you can collect in a given time. The more responses you get, the better you're able to gauge the success of your A/B test. Use your baseline to determine the optimal amount of time. For instance, if you typically get 10,000 clicks on a week-long AdWords campaign, find out how long it usually takes to get the first, say, 1,000 clicks. Use that time frame as your testing period. Once you see which element got the best response, use it to broaden your campaign.
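The baseline arithmetic above is simple enough to sketch. This assumes clicks arrive at a roughly steady rate, which real campaigns rarely do (traffic varies by day and hour), so treat the result as a rough floor rather than a precise schedule:

```python
def estimated_test_window(baseline_clicks, baseline_days, target_clicks):
    """Rough testing period implied by your baseline traffic.

    Assumes clicks arrive at a steady rate; real traffic is lumpy,
    so this is a lower-bound estimate, not a guarantee.
    """
    clicks_per_day = baseline_clicks / baseline_days
    return target_clicks / clicks_per_day

# The article's example: 10,000 clicks over a week-long campaign,
# waiting for the first 1,000 clicks before reading the results.
days = estimated_test_window(10_000, 7, 1_000)
# 0.7 days, i.e. roughly 17 hours at a steady click rate
```

In other words, at that baseline you'd have a read on your variants in well under a day, leaving most of the campaign to run the winner.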
It’s not usually enough to run a single A/B split test if you want the best results. A/B testing should become a common ritual to ensure you’re making your website the best it can be.
Oftentimes we assume that because something's working well, we've got it all figured out, but that couldn't be further from the truth. If you prioritize split testing, you might find that your conversions are higher than you ever could have expected. And if that's the case, then all your extra effort will be time well spent.
Author Bio: Catherine Tims is the editor at NoStop Content Writing. After receiving her Master's degree in English Language and Linguistics at the University of Arizona, she taught writing to graduate students at the University of Illinois Urbana-Champaign.