When you run tests on your web site, do you calculate the statistical validity of your tests?
Whoa, you say, I’m not sure what you just said. It really isn’t that complicated: just a few big words, but very important ones.
According to the 2011 Marketing Sherpa Landing Page Optimization Benchmark Report, 40% of the over 2,000 marketers surveyed did not calculate the statistical significance of A/B and multivariate test results in 2010. 40%! That’s a big chunk of marketers.
That means 40% of the marketers surveyed didn’t really know whether the tests they ran were giving them true results or just random noise.
But how can you tell when there might be problems with your numbers? Look out for these 4 types of validity threats:
Too small a sample size
To find a winner, test your layout and copy variations with enough test subjects to reach a high level of confidence in your results. But how many is enough? Several factors impact the sample size you’ll need, including:
- The current conversion rate of the page you are testing (note: not the same as the conversion rate of your entire site)
- The average number of daily visits to the test page
- The number of versions you’re testing
- The percentage of visitors in the experiment (sometimes you want to test with just a segment of your traffic)
- The percentage improvement you expect over the control
- How confident you need to be in the results (usually 95% but could be higher if the risks of being wrong are high)
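To see how these factors combine, here is a minimal sketch of a standard sample-size estimate for comparing two conversion rates (a two-proportion z-test power calculation). The function name, the example rates, and the default 80% power are assumptions for illustration, not figures from the survey above.

```python
from statistics import NormalDist
from math import sqrt, ceil

def sample_size_per_variation(baseline_rate, expected_lift,
                              confidence=0.95, power=0.80):
    """Approximate visitors needed per variation.

    baseline_rate: current conversion rate of the test page (e.g. 0.05)
    expected_lift: relative improvement expected over the control
                   (e.g. 0.20 for a 20% lift)
    confidence:    how confident you need to be (usually 95%)
    power:         chance of detecting the lift if it is real (assumed 80%)
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + expected_lift)
    # Two-sided critical value for the chosen confidence level
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return ceil(n)

# A page converting at 5%, expecting a 20% relative lift,
# at 95% confidence: roughly 8,000+ visitors per variation.
visitors = sample_size_per_variation(0.05, 0.20)
print(visitors)
```

Note how sensitive the answer is: halving the expected lift roughly quadruples the required sample, which is why low-traffic pages take so long to test. Divide the result by your daily visits to the test page (and by the share of traffic in the experiment) to estimate how many days the test must run.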
You will need to decide in advance what you will consider significant.