A/B testing sounds simple but is often run wrong. Learn which elements to test, how long tests need to run, and which tools are worth it.
A/B testing is the most powerful tool in conversion optimisation. Instead of guessing what works, you test two variants against each other and let the data decide. It sounds simple, yet in practice it is done wrong alarmingly often.
In this guide we'll show you how A/B testing actually works, which mistakes to avoid, and how to get meaningful results even with low traffic.
In an A/B test you show two different versions of a page or element to different visitor groups simultaneously. Version A (the control) is your current page. Version B (the variant) contains a change you want to test.
Traffic is split randomly: 50% of visitors see version A, 50% see version B. After a defined period, you compare the results and know which version converts better.
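For the technically curious, here is a minimal Python sketch of how such a random 50/50 split can work. The function name and the cookie-based visitor ID are made up for illustration, and real testing tools handle this for you; the point is the principle of hashing each visitor so the split is random overall but stable for any individual, meaning a returning visitor always sees the same variant.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "headline-test") -> str:
    """Deterministically assign a visitor to variant "A" or "B" (50/50 split)."""
    # Hash the experiment name plus the visitor ID so the assignment is
    # effectively random across visitors but always the same for one visitor.
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100            # a number from 0 to 99
    return "A" if bucket < 50 else "B"        # A = control, B = variant

# Example: the visitor ID could come from a first-party cookie
print(assign_variant("cookie-12345"))         # same ID -> same variant, every time
```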
The key rule: only ever change one element at a time. If you change headline, image, and button colour at once, you'll never know which change made the difference.
Not every test is equally valuable. The biggest levers for conversion rate usually sit in a handful of elements.
The main headline is the first thing a visitor sees, and a different phrasing alone can shift the conversion rate by 20-50%, which makes it the most obvious place to start testing.
The CTA button is the moment of truth: even small changes here can have an outsized impact on how many visitors click.
Format and placement are worth testing too, because the same content can perform very differently depending on where and how it appears on the page.
Form design directly affects inquiry volume: the easier a form is to complete, the more visitors will submit it.
Visual elements shape the visitor's emotional reaction to the page and are therefore also worth testing.
When it comes to test duration, most people make the biggest mistake: they end the test too early. An A/B test is only meaningful once it has reached statistical significance.
Statistical significance means the observed difference between version A and B is very likely real and not due to chance. The industry standard is a 95% confidence level: if there were no real difference between the variants, a result at least this large would occur by pure chance less than 5% of the time.
Imagine: after 3 days version B has a conversion rate of 5.2% and version A 3.8%. That looks convincing — but with only 200 visitors per variant, the result is statistically meaningless. If you switch to version B now, the long-term result may be worse than the original.
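If you want to check such a result yourself, the sketch below runs a standard two-proportion z-test in Python on roughly those numbers (3.8% vs. 5.2% with 200 visitors per variant, rounded to whole conversions). This is only an illustration of the statistics behind the significance check; any decent testing tool reports the same kind of p-value for you.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    standard_error = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / standard_error
    p_value = 2 * norm.sf(abs(z))             # two-sided p-value
    return z, p_value

# Roughly the example above: ~4% vs ~5% with 200 visitors per variant
z, p = two_proportion_z_test(8, 200, 10, 200)
print(f"z = {z:.2f}, p-value = {p:.2f}")      # p-value is around 0.6, far above 0.05
```

The p-value comes out far above the 0.05 threshold, so version B's apparent lead tells you nothing yet; the same difference would have to persist over a much larger sample before you could trust it.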
If you change headline, image, button colour, and layout at the same time, that's not an A/B test — it's a redesign. You can't know which change made the difference. Always test one element per test.
The other extreme: testing whether a button in green vs. dark green converts better. On most websites the difference is so small you'd need months for a statistically significant result. Test big, meaningful changes — different headlines, different page structures, different audience messaging.
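A rough sample-size estimate makes this concrete. The sketch below uses the standard normal-approximation formula for comparing two conversion rates (two-sided test, 95% confidence, 80% power); the 3% baseline conversion rate and the two lifts are made-up example values.

```python
from math import ceil
from scipy.stats import norm

def visitors_per_variant(baseline_rate, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a relative lift
    in conversion rate (two-sided test, normal approximation)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)          # 1.96 for 95% confidence
    z_power = norm.ppf(power)                  # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2
    return ceil(n)

print(visitors_per_variant(0.03, 0.50))   # +50% lift on a 3% baseline: ~2,500 per variant
print(visitors_per_variant(0.03, 0.02))   # +2% lift: over a million per variant
```

A large, meaningful change can reach significance with a few thousand visitors per variant, while a micro-change like a slightly different shade of green needs more traffic than most websites will ever see.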
Our brains love confirmation. When version B is leading after 3 days, it's tempting to stop the test and crown the "winner." Wait until statistical significance is reached — even if it requires patience.
A good A/B test starts with a clear hypothesis: "If we change the headline from feature-oriented to benefit-oriented, conversion rate will lift by at least 15%, because visitors recognise the personal benefit faster." Without a hypothesis you test blindly — and learn nothing from the results.
Every test generates knowledge about your audience. Document what you tested, why, what the result was, and what you concluded. That knowledge is more valuable long-term than any single test win.
You don't need expensive enterprise software to get started. Which tool makes sense depends mainly on your budget.
A common problem for local businesses: "We only get 500 visitors a month. Can we even run A/B tests?" The honest answer: classic A/B tests are difficult at very low traffic, but there are alternatives.
A tax advisor with a landing page for "Have your tax return prepared" tests two versions of the headline.
Version B converts 34% better. Why? It communicates a concrete, quantifiable benefit (€1,200 refund) and names the price, removing uncertainty. Version A is generic and interchangeable.
From this single test we learn: this audience responds to concrete numbers and pricing transparency. That insight feeds into all future tests and optimisations.
A/B testing takes the guessing out of conversion optimisation. Instead of debating internally whether the button should be green or blue, you let your visitors decide. The key is discipline: formulate clear hypotheses, test one variable at a time, wait for statistical significance, and document results.
Start with the element that has the biggest impact — usually the main headline or the call-to-action. A single well-run test can lift your conversion rate sustainably and pay off for months and years.
We help you identify and run the right tests — for measurably better results.
Book a free consultation →