An A to Z of Optimisation
Chris Wallis explains some of the commonly used (and misunderstood) terms in Conversion Rate Optimisation.
Once you’ve identified a weakness on your site through qual and/or quant analysis, the next step is to improve upon it. The best measurable way to do so is by A/B (split) testing.
A/B Test
A/B testing involves comparing the performance of two versions of a chosen element against a conversion metric. A is the original version, or control, and B is the challenger, or variant.
Testing tools such as Oracle Maxymiser, Optimizely and Adobe Target provide the ability to randomly assign either the control or challenger to visitors to your site. Data is then collected against the test metrics based on the visitor’s behaviour, and the results of both variations can be compared and contrasted to determine which was the stronger performer, and for which visitor segments.
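A sketch of how such random-but-stable assignment can work: hashing the visitor's ID into a bucket gives an even split across visitors while guaranteeing a returning visitor always sees the same experience. The function and seed names here are illustrative, not any particular tool's API:

```python
import hashlib

def assign_experience(visitor_id: str, experiences: list[str], seed: str = "test-001") -> str:
    """Deterministically bucket a visitor into one experience.

    Hashing the visitor ID (salted with a per-test seed) keeps assignment
    stable across sessions, so a returning visitor sees the same variation.
    """
    digest = hashlib.sha256(f"{seed}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(experiences)
    return experiences[bucket]

# Every call with the same visitor ID returns the same experience.
print(assign_experience("visitor-42", ["control", "variant-b"]))
```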
Many aspects of a website can be A/B tested. Single elements like copy or individual CTAs can be tested, as can complex ideas, such as a complete page redesign or an altered checkout funnel.
A/B/n Test
An A/B/n test is identical to an A/B test in every way except one: more than two experiences are tested.
The upside to A/B/n testing is that it allows the comparison of multiple variations all at once, eliminating the need to iterate individual variants with A/B tests.
The only potential pitfall is that more variants mean less website traffic to each. If you have 3 variants to test against the control experience, an average of 25% of traffic is bucketed into each of the four experiences. If traffic levels to your test page(s) are low, reaching statistical significance will take an unreasonably long time.
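The even split above can be expressed as a one-line helper (a sketch of the arithmetic, not any tool's API):

```python
def traffic_share(num_variants: int) -> float:
    """Average share of test traffic each experience receives:
    the variants plus the control, split evenly."""
    return 1 / (num_variants + 1)

# Three variants against the control: four experiences at 25% each.
print(f"{traffic_share(3):.0%}")  # 25%
```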
Action Count
The number of times a metric was triggered over the course of a campaign.
Single action count is the total number of visitors that completed a specific action.
Multiple action count is the total number of times an action was tracked. For example, a visitor could ‘Add to Bag’ 3 times during a test.
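The distinction between the two counts can be shown with a toy event log (the data and field names are hypothetical):

```python
# Hypothetical event log of (visitor_id, action) pairs.
events = [
    ("v1", "add_to_bag"), ("v1", "add_to_bag"), ("v1", "add_to_bag"),
    ("v2", "add_to_bag"), ("v3", "checkout"),
]

# Single action count: unique visitors who triggered the action at least once.
single_action = len({v for v, a in events if a == "add_to_bag"})

# Multiple action count: every time the action was tracked.
multiple_action = sum(1 for v, a in events if a == "add_to_bag")

print(single_action, multiple_action)  # 2 4
```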
Control
The unaltered version of an element, which is currently served to all visitors to the site.
Conversion Rate (CR)
The percentage of visitors (or sessions) that complete a given action.
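As a worked example (purely illustrative):

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of visitors completing the action."""
    if visitors == 0:
        return 0.0
    return conversions / visitors

# 50 purchases from 1,000 visitors is a 5.0% conversion rate.
print(f"{conversion_rate(50, 1000):.1%}")  # 5.0%
```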
CTA
A Call to Action is a prompt for a user to complete a certain action. ‘Submit’ buttons, for example, are often referred to as CTAs.
Element
An area or journey on site that is targeted for testing. A/B/n tests consist of one element, while multivariate tests consist of two or more elements.
Each element can have multiple variants.
Experience
A complete page or journey configuration that is presented to random visitors within a test.
For A/B/n tests, an experience is exactly the same as a variant or the control.
For MVTs, an experience is a combination of variants within different elements. For example, if a pink CTA is the default within a colour element, and the copy ‘Add Now’ is a variant within a copy element, then the experience is a pink ‘Add Now’ CTA.
Multiple experiences in an MVT can include default elements, since all combinations are considered, but only one experience, the one containing no variants at all, is the true default, more generally referred to as the control.
Full Factorial MVT
A full factorial MVT runs every possible experience in the MVT as part of a test. Most standard MVTs are full factorial.
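Enumerating a full factorial MVT is just a Cartesian product of the options in each element. A sketch using hypothetical elements:

```python
from itertools import product

# Two hypothetical elements, each with a default plus one variant.
elements = {
    "colour": ["magenta", "purple"],
    "copy": ["Add to Cart", "Buy Now"],
}

# A full factorial MVT runs every combination as an experience.
experiences = list(product(*elements.values()))
print(len(experiences))  # 4
for colour, copy_text in experiences:
    print(colour, "|", copy_text)
```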
Generation
The number of visitors that fulfil the criteria to enter a test. If the test generation criterion is arriving on a certain page after logging in to an account, then a generation is tracked once that action is completed.
Metric
An event trigger used to measure test performance. Multiple metrics are created relevant to the test being undertaken, such as click metrics and page view metrics.
Multivariate Test (MVT)
Multivariate tests are used to evaluate multiple testing elements simultaneously and in combination. This is beneficial when multiple areas on a page or journey have been selected for testing.
They’re best used when all variants in one element can be applied without affecting the feasibility of the variants in the other element; it gets messy otherwise.
The most basic example of an MVT is button copy and colour. Clearly, changing a button from magenta to purple will have no effect on whether the button says ‘Add to Cart’ or ‘Buy Now’; both copies can be tested with both colours. We define the first element as button colour, and the second as button copy. The resulting table of variants would look like this:

Element 1, button colour: Magenta (default) / Purple
Element 2, button copy: ‘Add to Cart’ (default) / ‘Buy Now’
The total number of experiences in an MVT is the product of the number of options (default plus variants) in each element. For example, we could add a third element with one variant to the MVT above, bringing our total number of experiences to (2x2x2) 8.
There is no limit to the number of elements that can be used in an MVT, other than what traffic levels allow: are 8 experiences feasible when only 12.5% of test traffic will see each one?
MVTs allow evaluation not only of the performance of individual changes, but also of how well they work together. This can be more efficient than iterating multiple A/B tests in sequence, but requires enough traffic to support running the test.
Partial Factorial MVT
A partial factorial MVT is one where some experiences from the original MVT are excluded from the test audience. This is usually because certain combinations of variants may not be desirable or can otherwise be reasonably sacrificed if traffic levels are a concern.
A partial factorial MVT defines the same set of experiences as its full factorial equivalent, but stops short of showing some of them to the test audience.
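The difference can be sketched by filtering the full factorial set, using hypothetical elements and an arbitrarily chosen excluded combination:

```python
from itertools import product

# Three hypothetical elements, each with a default plus one variant.
elements = {
    "colour": ["magenta", "purple"],
    "copy": ["Add to Cart", "Buy Now"],
    "size": ["small", "large"],
}

full = list(product(*elements.values()))  # (2x2x2) 8 experiences

# A partial factorial excludes combinations judged undesirable,
# e.g. a small purple 'Buy Now' button.
excluded = {("purple", "Buy Now", "small")}
partial = [e for e in full if e not in excluded]

print(len(full), len(partial))  # 8 7
```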
Segment
A subset of the test population that shares a common characteristic, such as location, age, or new versus returning visitors.
Session
A series of page visits or other events occurring within a short period of time. A session ends after 30 minutes of inactivity, whether or not you close and reopen your browser; any subsequent activity on the site is attributed to a new session.
A unique user can therefore have multiple sessions.
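A minimal sketch of the 30-minute rule, grouping one visitor's event times (given here in minutes) into sessions. Real tools work on full timestamps and track far more, so this is illustrative only:

```python
SESSION_TIMEOUT = 30  # minutes of inactivity before a new session starts

def sessionise(timestamps: list[float]) -> list[list[float]]:
    """Split a visitor's event times into sessions, starting a new
    session whenever the gap since the last event exceeds the timeout."""
    sessions: list[list[float]] = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][-1] <= SESSION_TIMEOUT:
            sessions[-1].append(t)
        else:
            sessions.append([t])
    return sessions

# Events at 0, 5 and 90 minutes: the 85-minute gap starts a second session.
print(len(sessionise([0, 5, 90])))  # 2
```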
Uplift
The percentage increase (or decrease!) in conversion rate of the winning (or losing!) experience against the control.
Uplift is calculated as the difference in conversion rate divided by the control conversion rate.
For example, if the control had a 4% CR and the variant 5%, then the uplift is ((5 − 4) / 4) 25%.
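The same calculation as a small helper (illustrative):

```python
def uplift(control_cr: float, variant_cr: float) -> float:
    """Relative change in conversion rate against the control."""
    return (variant_cr - control_cr) / control_cr

# Control at 4% CR, variant at 5%: (0.05 - 0.04) / 0.04 = 25% uplift.
print(f"{uplift(0.04, 0.05):.0%}")  # 25%
```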
Visitor
An individual that is recognised as unique, and can therefore have multiple actions and sessions attributed to them.
This is usually device-specific, but some tools like GA allow for retrospectively assigning data to a single visitor across multiple devices. This does, however, require the user to log in to an existing account on each device at least once.
Variant
A reimagined version of an element to be tested against the default.