A/B testing lets you compare variations of the same website to determine which one generates better results. The metrics used for this comparison are micro and macro conversions.
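
Whether a variation really outperforms the original is usually judged with a simple statistical test on those conversion counts. Below is a minimal sketch using a standard two-proportion z-test; the traffic figures and the function name are invented for the illustration:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))            # two-sided p-value
    return z, p_value

# Invented traffic: 120/2400 conversions on variant A, 156/2400 on variant B.
z, p = two_proportion_ztest(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")   # p < 0.05 suggests a real difference
```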

Because it requires little or no involvement from developers and hardly any additional technological resources, A/B testing has become one of the most commonly used optimization tools. Thanks to its low cost, it also has a strong presence in sales and is increasingly requested by user experience designers.

Garbage In, Garbage Out

A/B testing is a powerful tool when used properly. However, some potential problems may arise:

#1. You cannot judge a concept from a single variation. A flawed design can hurt conversions, but each design is only one implementation of a concept, and it would be a mistake to judge the merit of the concept through a single design. It often takes several designs to represent a concept adequately.

#2. A design variation will not necessarily solve the real problem. Starting from incorrect assumptions and simply generating a new design variant is not the right way to approach the underlying problem.

#3. A/B testing can only find the best option among the available variations. If those variations are based solely on internal experience and opinion, who can say that the test even includes the optimal design?

These experimentation flaws can be mitigated by supplementing A/B test reports with user research. Even a little research yields valuable clues about the possible causes of conversion problems.

To make sure an A/B test is well executed, the following items should be defined (a minimal template follows the list):

  • What is the conversion problem?
  • What is the conclusion about the causes of the problem?
  • What do you propose to modify?
  • Which results are expected from the change?
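
For instance, these four items could be recorded in a single structure so that every experiment is documented consistently. This is only an illustrative sketch; the field names and example values are invented:

```python
from dataclasses import dataclass

@dataclass
class ABTestPlan:
    """Minimal record of an A/B test definition (illustrative only)."""
    conversion_problem: str  # What is the conversion problem?
    suspected_causes: str    # Conclusion about the causes of the problem
    proposed_change: str     # What do you propose to modify?
    expected_results: str    # Which results are expected from the change?

plan = ABTestPlan(
    conversion_problem="High checkout abandonment on mobile",
    suspected_causes="The form asks for data users hesitate to provide",
    proposed_change="Remove non-essential form fields in variant B",
    expected_results="Higher checkout completion rate on mobile",
)
```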

Four user experience research methods to improve optimization tests

Define the user’s intent and objections

It is vital to understand why people visit a site and why they decide to leave. If you assume an incorrect theory about why people visit, the variation hypothesis will not reflect users' actual perceptions. It is dangerous to assume user objections without having done any research. For example, suppose you conclude that visitors do not take the desired action because prices are too high, so you decide to reduce prices, and the profit margin takes the hit. The real reason conversions did not happen could instead have been that users simply did not understand which of their needs the service covered.

Expose interface flaws

If important usability problems, such as confusing interaction flows, are ignored, the A/B test will not show meaningful results, because the design variations do not address the problem in question. For example, if a form asks for information that users do not feel comfortable providing, running an A/B test to see whether changing the "Send" button increases the conversion rate will be a futile effort. Understanding the true causes of low conversion is critical to successful testing.

How to expose interface flaws: with usability tests, which can be carried out quickly; testing with only 5 users can detect up to 85% of a site's or interface's major usability problems.
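
The 85% figure traces back to the problem-discovery model popularized by Nielsen, based on Nielsen and Landauer's data: the share of problems found with n users is 1 - (1 - L)^n, where L is the average probability that a single user exposes a given problem (about 0.31 in their data; your projects may differ). A quick sketch of that calculation:

```python
def problems_found(n_users, detect_prob=0.31):
    """Nielsen/Landauer model: share of usability problems that n test
    users uncover, where detect_prob (L) is the average probability
    that one user exposes a given problem."""
    return 1 - (1 - detect_prob) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n:>2} users -> {problems_found(n):.0%} of problems found")
# With L = 0.31, five users already uncover roughly 85% of the problems.
```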

Findability

Findability refers to problems in locating elements. It can be checked before executing A/B tests, since these problems directly affect the information architecture and navigation of the site or interface.

How to measure "findability":

These tests can be run on an information architecture proposal alone, without affecting the design.

Users can tell you whether labels, link groups, hierarchy, or naming are user-friendly. If you doubt whether the names of sections, pages, links, or tags on your site work, this kind of test can identify the problems and help define new labels so that elements are easily located and understood by the user.
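
As a rough illustration, findability results are often summarized as the share of users who located each item. The sketch below computes that from hypothetical outcomes of, say, a tree test run on an IA proposal; the tasks, data, and function are invented for the example:

```python
from collections import defaultdict

def findability_scores(results):
    """Compute the share of users who located each item.
    `results` is a list of (task, found) pairs from a findability test."""
    totals, successes = defaultdict(int), defaultdict(int)
    for task, found in results:
        totals[task] += 1
        successes[task] += int(found)
    return {task: successes[task] / totals[task] for task in totals}

# Hypothetical outcomes: did each participant find the target section?
results = [
    ("Find pricing", True), ("Find pricing", True), ("Find pricing", False),
    ("Find support contact", False), ("Find support contact", True),
]
for task, rate in findability_scores(results).items():
    print(f"{task}: {rate:.0%} located it")
```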

Vet design variations before launch

Even a few minutes of testing shortly before launch can reveal a major mistake in the interface design.

While more advanced forms of user research have their advantages, do not overlook a quick in-house review. Before an A/B test, you should make sure that all design variations have a reasonable chance and are not tarnished by usability issues.

Combine methods to maximize conversions

A/B tests are a wonderful tool, but unfortunately they can be misused. If A/B tests are used instead of research, the variations will essentially be guesses. A/B test results can be improved by incorporating UX research to identify causes better, develop more realistic hypotheses, and find more opportunities for experimentation.

Source: Jennifer Cardello for Nielsen Norman Group; adapted by Celerative.