Successful SEO measures help you land a spot in the top positions of the search engine results pages (SERPs) and increase the influx of visitors to your site. But what happens when only a few of those visitors actually buy your products or subscribe to your newsletter? Despite good traffic, the web project still isn’t as successful as it could be. A practical and effective solution to this problem is A/B testing.
Whether it’s designing newsletters or online advertisements, marketers are constantly trying to improve their online marketing methods to appear more user-friendly to potential customers. Ultimately, they want to optimise all possible online marketing opportunities to obtain more conversions, which results in more profit for the company. It doesn’t matter if you’re optimising a whole website, individual functions, or individual landing pages: A/B testing has proven to be a good means of achieving an optimal web design over time.
A/B testing: the basic principle
A/B testing (also known as 'split testing') is a method used to compare different versions of a website, or of its individual elements and functions. The original version of a website is usually tested against a slightly altered, supposedly improved version to see which fares better. The target group (i.e. the website’s visitors) is divided into group A and group B, and each group is shown a different version of the test object (version A or version B, e.g. of an advert); their reactions are recorded and then compared. The desired reaction is defined in advance by the website operator: for an advertisement, it could be a click or a conversion; for a landing page, a download or a newsletter registration.
Why is A/B testing used?
Website A/B testing is used across the various disciplines of online marketing. What you decide to test is up to you: the whole site, single elements, the wording, or just the colour scheme. Unlike multivariate testing, split testing tests only one hypothesis (i.e. one variable) at a time. By comparing the two versions, the following aspects can be tested and optimised:
- Web design
- Revised features and functions
- Landing pages
- Website elements such as call-to-action (CTA) buttons
- AdWords advertisements
You can try out as many different usage scenarios as you want. Here are three examples:
- An online retailer notices a rise in visitors abandoning their shopping baskets without completing their purchases. A possible solution would be to emphasise individual elements in the purchasing process and then test the new, improved version against the original.
- A web service wants to show targeted advertisements for one of the products on offer. They can test two different versions of the ad by using a different keyword in both of them.
- A blogger who finances their work through advertising (i.e. a publisher) is looking to increase their visitor numbers. They can test different titles for their articles, or vary the use of images.
Website A/B testing: step by step
You shouldn’t implement any A/B testing without first drawing up a strategic plan. Many marketers follow the same basic sequence:
1. Identify problems
An optimisation can only be carried out if there’s something to improve. The first step, therefore, is to identify the problem. This could be, for example, that a button isn’t being clicked on enough. Once the problem has been detected, the goal is obvious. In this case, it would be more clicks on the CTA button.
2. Research and collect ideas
Before you formulate a hypothesis, you should acquire some background knowledge and research accordingly. It’s easy to claim that a blue button would work better than a red one, but without empirical evidence this claim holds no weight. It is useful to look at some studies in advance, e.g. research that analyses the influence of colour on user behaviour. Such research may also give you good tips on which elements are worth modifying.
3. Define hypotheses
Once you have a picture of your research results, you can formulate the hypothesis. For example: 'a yellow CTA button will result in a higher click-through rate (CTR) than the blue one'. Or, if you change the position of a menu item: 'this menu item will be easier to find in the new version than in the old one'.
4. Test phase
Create the two versions of the site that you want to test, e.g. version A with the original blue CTA button and version B with the new yellow one. In the split test, both versions run against each other. This can happen at different times or through different URLs; with A/B testing software, each user is forwarded at random to one of the two options.
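The random forwarding mentioned above can be sketched in a few lines. The snippet below is a minimal illustration, not a real testing tool: the function name `assign_variant`, the experiment name, and the visitor ID are all hypothetical. Hashing the visitor ID instead of calling a random number generator is a common trick that keeps the assignment stable, so a returning visitor always sees the same version.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-colour") -> str:
    """Deterministically assign a visitor to variant 'A' or 'B'.

    Hashing the user ID (rather than drawing a random number) keeps
    the assignment stable across visits. The experiment name salts
    the hash so that different tests split the audience independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Route a (hypothetical) visitor to one of the two page versions:
variant = assign_variant("visitor-12345")
print(variant)  # 'A' or 'B', always the same for this visitor
```

Dedicated A/B testing tools handle this assignment (plus cookie handling and reporting) for you; the point here is only that the split must be random across visitors but consistent per visitor.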
5. Analysis and report
Once the test has reached a sufficient number of samples over a certain time period, evaluation can take place. If it turns out that the click-through rate has improved significantly with the yellow CTA button, this version becomes the new 'original'.
This sequence can be repeated as often as you wish: in the next round, you could test whether the position of the CTA button has an additional impact on the CTR. Essentially, you can test even the smallest element of a web page – as the website’s operator you have free rein in what you decide to test.
Pros and cons of A/B testing
There are many benefits to A/B testing for online marketers. It enables an objective comparison that takes place independently of your own point of view, focusing instead on the behaviour of the target group. Thanks to numerous (some even free) testing tools, you can perform a split test without any prior technical knowledge, and the test will still deliver clear results. These results can be implemented immediately afterwards.
On the other hand, testing is only useful when comparing individual elements with each other. If you change lots of elements at once, it defeats the purpose, since you don’t know which change caused which outcome. There’s also always a danger of overwhelming or confusing users if you keep making changes and then reverting to the original, so it is not advisable to carry out these tests on an audience made up primarily of new customers. Finally, there is the question of statistical significance: for small sites with little traffic, it is difficult and time-consuming to collect figures that are actually significant.
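The statistical-significance caveat can be made concrete with a standard sample-size estimate. The sketch below uses the usual two-proportion formula at 5% significance (two-sided) and 80% power; the function name and the example figures (a 2% baseline conversion rate, a hoped-for 20% relative lift) are illustrative assumptions, not prescriptions.

```python
import math

def sample_size_per_variant(baseline_rate, relative_lift):
    """Approximate visitors needed per variant to detect a relative
    lift in conversion rate, at 5% significance (two-sided) and
    80% power, using the standard two-proportion formula.
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = 1.96  # two-sided, alpha = 0.05
    z_beta = 0.84   # power = 0.80
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# A 2% baseline conversion rate and a hoped-for 20% relative lift:
print(sample_size_per_variant(0.02, 0.20))  # roughly 21,000 visitors per variant
```

A site attracting a few hundred visitors a month would need years to reach such numbers, which is why split testing pays off mainly for pages with substantial traffic, or for changes expected to produce large effects.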
Pros and cons at a glance
| Pros | Cons |
| --- | --- |
| Objective comparison | Only one hypothesis per test |
| Reflects the interests of the target group | Confusion among users is possible |
| Easy implementation thanks to testing tools | Statistical significance is difficult to achieve for small sites |
| Clear analysis is possible | |
| Immediate implementation of results is possible | |
There are numerous testing tools on the market that enable you to carry out split testing. One free option is 'Content Experiments', which you can use through Google Analytics. The range of functions that this Google solution offers is smaller than that of programmes like Optimizely, which for a monthly fee offers a relatively easy-to-use testing tool for small and medium-sized businesses.
A similar service is offered by Kameleoon, which is very intuitive and easy to install. Kameleoon also offers a freemium account for up to 2,500 visitors per month, making it a good entry-level solution for small websites.
A slightly more expensive service (due to additional features such as heat maps and click maps) is Visual Website Optimizer. For large businesses that want to use website A/B testing, there are extensive options such as OpenText Optimost or SiteSpect, but these come with a heftier price tag.