What should I test next?

By Robert Jones, Insights Analyst
5 minutes to read

Testing changes to a website, article, app, or campaign can yield impressive results with some case studies showing significant increases in conversions.

But with so many variables that you can play with and tweak, how do you know where to begin or what to test next?  

In this article, we’ll explore the process of working through and understanding website insights to create tests. First, let’s have a look at some definitions.

What are AB and multivariate testing?

AB and multivariate testing (MVT) are quantitative methods for figuring out which new elements or website changes perform best against a control version. There are nuanced differences between the two.

AB testing tests a single website change against the original site feature. For example, does the copy on the new version (B) perform better than the control (original) version (A)?

AB testing should always test a new implementation against what’s already there, as you aim to make incremental changes.
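A variant "wins" only when the difference in conversion rates is statistically significant, not just numerically higher. Testing platforms do this calculation for you, but as a minimal sketch of what happens under the hood, a two-proportion z-test looks like this (the function name and the visitor/conversion counts are hypothetical):

```python
from math import sqrt, erfc

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B's conversion rate
    differ significantly from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-tailed p-value
    return z, p_value

# Hypothetical numbers: control converted 200/10,000, variant 260/10,000
z, p = ab_significance(200, 10000, 260, 10000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 means a significant difference
```

If p is below your chosen threshold (commonly 0.05), the variant's lift is unlikely to be random noise and you can roll it out with some confidence.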

Example of AB testing:

A/B testing tests one change against the original feature


Multivariate (MVT) testing is where multiple changes are tested at the same time. Testing platform Optimizely describes MVT as comparing “a higher number of variables” which “reveals more information about how these variables interact with one another”.

For example, suppose we want to encourage newsletter sign-ups and want to know which button colour has the biggest impact. With MVT we can test three different colours until we have enough data for each version and can clearly see which performs best. As with AB testing, MVT requires one control option; the other variants can be ideas sourced from the team or client.

Example of multivariate testing:

MVT testing tests multiple different changes at the same time
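With more than two variants, a chi-square test is one way to check whether the conversion rates genuinely differ before declaring a winner. This is a hypothetical sketch for the button-colour scenario above (the sign-up counts are made up, and a real testing platform computes this for you):

```python
def chi_square_stat(results):
    """Chi-square statistic across variants.
    results: {variant_name: (conversions, visitors)}."""
    total_conv = sum(c for c, n in results.values())
    total_n = sum(n for c, n in results.values())
    rate = total_conv / total_n  # pooled conversion rate
    stat = 0.0
    for conv, n in results.values():
        expected_conv = rate * n
        expected_non = (1 - rate) * n
        stat += (conv - expected_conv) ** 2 / expected_conv
        stat += ((n - conv) - expected_non) ** 2 / expected_non
    return stat

# Hypothetical button-colour results: (sign-ups, visitors)
data = {"green": (120, 4000), "blue": (150, 4000), "red": (90, 4000)}
stat = chi_square_stat(data)
# With 3 variants there are 2 degrees of freedom; the 5% critical value is 5.99
print(stat, "significant" if stat > 5.99 else "not significant")
```

If the statistic exceeds the critical value, at least one variant behaves differently from the rest, and the one with the highest conversion rate is your likely winner.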

In both cases, tests continue until there’s a significant difference based on the volume of traffic; the winner is then revealed.

There are many small and big changes you can make when it comes to AB and MVT testing. For example, you could change call-to-action (CTA) copy to see if it increases conversions, change sections of body copy, add or remove visuals, or change the layout of landing pages.

The possibilities are endless, so it’s best to have a plan that makes clear which AB or MVT tests to run each quarter. At the end of each quarter, review the performance of the previous tests, as the results may affect your plan for the next quarter.

When choosing your preferred approach, consider your aims and objectives. Don’t create variants for the sake of it just so you can run an MVT test; if you want to make one change at a time, use AB testing.

However, if you’re looking for a deeper understanding beyond the metrics, you should consider user testing.

What is user testing?

Where AB and multivariate testing give you the ‘what’, user testing gives you the ‘why’.

User testing is a research method which involves conducting a structured research session with a relevant participant.

There are several different ways you can carry out user testing, including:

  • Moderated testing – Users test a website or app in a research facility while a moderator asks questions and takes them through the test script.
  • Moderated remote testing – Testing takes place via webcam, Skype, or other remote software with a tester and a moderator, who can dig deeper into what’s going on during the user experience.
  • Unmoderated remote testing – Users test at home without a moderator; they are given tasks to work through while their screen is recorded, with no one to intervene or dig deeper.

Sessions can also include observers such as user experience (UX) designers or other researchers who may benefit from seeing the test first hand. It also helps to record sessions so you always have a point of reference, as the insights are used to recommend changes to the website.

The questions you ask throughout user testing should be based around your objectives. So take this into consideration when you’re writing your participant screening questionnaire.

There are other methods of user research, including diary studies and ethnography, but moderated in-person, moderated remote, and unmoderated remote sessions are the most common and useful forms of testing.

Definitions over, let’s get into the recommendations on how to decide what to test.

How to decide what to test next

Start with analytics

The best way to figure out what to test next is to first look at Google Analytics. Then ask the following questions:

  1. How does behaviour change?
  2. Is conversion poor or has it dropped on certain pages?
  3. What does the customer journey appear to look like from the data?
  4. Are there any pages with a high bounce rate?

Let’s use a scenario. Pages with high bounce rates could indicate that the content isn’t relevant or is too complicated, causing high numbers of users to leave the site. The data can help you build a picture of how users move through the site and the drop-off points along that journey. For example, this may look like:

Landing page > 30% continue to > Product page > 15% continue to > Contact us form > 1% complete contact form
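A journey like this can be turned into a quick per-step analysis: the step with the worst conversion rate is your first candidate for testing. A minimal sketch, with hypothetical visitor counts roughly matching the percentages above:

```python
# Hypothetical visitor counts for each step of the journey
funnel = [
    ("Landing page", 10000),
    ("Product page", 3000),    # 30% continue
    ("Contact us form", 450),  # 15% continue
    ("Form completed", 5),     # ~1% complete
]

def step_conversions(funnel):
    """Conversion rate of each step relative to the previous one."""
    return [
        (step, count / prev_count)
        for (_, prev_count), (step, count) in zip(funnel, funnel[1:])
    ]

rates = step_conversions(funnel)
for step, rate in rates:
    print(f"{step}: {rate:.1%}")

# The step with the lowest rate is the first candidate for testing
worst_step, _ = min(rates, key=lambda pair: pair[1])
print("Test first:", worst_step)
```

Here the contact form completion rate is by far the weakest link, so form layout, length, or copy would be the natural place to start testing.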

User recordings and usage surveys

Hotjar provides video recordings showing you what users do on your site: how long they spend on each page, heatmaps of activity, and clicks as they complete their actions. However, the recordings don’t have audio, so they don’t give you the reasoning behind what users are doing or why they didn’t convert.

You can also ask questions using web intercept surveys, which are typically quite short. Survey responses combined with video insights begin to help you see common behaviours and issues and create test hypotheses. If you’ve already implemented tests, you can add a Net Promoter Score (NPS) to your survey to track how favourably you’re perceived over time.

Your web intercept survey could ask:

[SINGLE CHOICE]

What were you trying to achieve today?

[insert options from your website such as…]

  • Get a quote
  • Find pricing information
  • View product tour
  • Other – please specify

[SINGLE CHOICE]

Do you agree or disagree with the statement below?

I was able to achieve what I wanted to achieve during my visit today. 

  • Strongly disagree
  • Slightly disagree
  • Neither agree nor disagree
  • Slightly agree
  • Strongly agree


[OPEN END]

Please tell us why you gave that answer


[OPEN END]

What would you improve on this website/page?

[SINGLE CHOICE – NPS Score]

Based on the website, would you recommend this brand to friends and family?

  • 0 (would not recommend at all) to 10 (would definitely recommend)

An example of a web intercept survey
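If you include the NPS question, the score itself is simple to compute: the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6). A minimal sketch with made-up responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on the standard 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical batch of survey responses collected over a month
responses = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(nps(responses))  # positive means more promoters than detractors
```

Tracking this number survey-by-survey gives you the over-time trend mentioned above: if the score climbs after a round of changes, the tests are moving perception in the right direction.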

Exit survey tools

In a similar vein to intercept surveys, exit intent surveys pop up as visitors are about to leave your site and can help you understand why someone is leaving.

You can also use exit surveys to ask if users were able to achieve what they wanted to achieve and if not, why not. These results can be displayed on a page-by-page basis so you can really drill down into the data.

Exit intent surveys can help you gauge if your changes are well received, but can also give you a starting point for identifying pages to run tests on, as well as what tests to run.

The questions that you ask should again be rooted in your objectives, as the end goal is to increase goal conversions.


1. Previous research

Like analytics, previous research can also provide you with a wealth of inspiration. Have you conducted AB tests which could provide insight into how to increase conversions with further tests? Are there results from user testing, persona research, customer satisfaction surveys, or any other research you’ve done which can help you decide what to test on the site?

For example, customer satisfaction surveys may have revealed that customers struggled to find the returns policies for your products, which may put them off purchasing. User testing can reveal the process behind finding the returns policy and therefore how difficult it is to find. This gives you the opportunity to redesign the returns policy information as well as where it sits on the site. You can also collect feedback from customer service emails.

If you don’t have any previous research, it’s worth starting to invest in it: opinions from relevant customers are extremely valuable. These can be gathered through satisfaction surveys, emails, and interviews – essentially any way you can elicit opinions.

2. Review previous user testing

Another great way of deciding on your next testing plan is to look at your previous findings from user testing or other user research. Have you since made improvements based on feedback or testing?

Constantly building on findings and changes by conducting additional research can help reaffirm decisions and help to make decisions on future changes. Making decisions without insight is akin to stumbling around in the dark.

3. Accessibility

Accessibility has always been important, but it has recently been making headlines. Beyoncé was sued after a disabled user claimed her website wasn’t accessible because images had no alt text for a screen reader to read.

Domino’s US app had a flaw which meant visually impaired users couldn’t use voice recognition to add toppings through the pizza creator section. Again, this was because a screen reader couldn’t do its job, and the pizza chain was ordered by a court to make its app accessible.

It’s wise to start off with an accessibility audit to identify areas of improvement. Conduct user research with participants that have accessibility needs such as visual impairments to understand how they use your site or app alongside their screen readers or other software.

It’s really important that you consider accessibility: no user should have to work harder than another to achieve the same goal. You don’t want to fall foul of disability discrimination laws, so consider this aspect of user testing absolutely essential.

4. UX principles

There are psychology principles and UX methods which exist to set standards and make experiences better. How does your website stand up against them? There’s only one way to find out: conduct a UX review.

At its core, a UX review should confirm that users have freedom and control, and that the website is flexible and efficient to use.

Further examples of UX principles include:

  • User control and freedom
  • Error prevention

Principles fit neatly alongside best practices – if you’re following best practices, UX principles should come hand in hand. Best practices vary between page types, but for a landing page, for example, they may include short, relevant headlines, keeping important information above the fold, and using contrasting colours.

5. Test every major change

Has your company changed its services, products, or policies recently? If so, your website will likely need to adapt to this change.

The best thing to do before putting something live (or even before designing it in the first place) is to make sure it’s capable of meeting its objectives by testing it with users. This applies whatever the change may be, and should always be at the forefront of planning.


Let the data dictate the tests

Take your analytics insights, Hotjar insights, previous research, and any other useful information to plan a series of AB and MVT tests, as well as focus areas for user testing.

Remember to consider:

  1. Which AB or MVT tests to schedule for the next quarter
  2. User testing objectives: do you have pages you’d like specific feedback on, would you like to test journey changes or new sections of the site? Perhaps there are some landing pages you’d like feedback on before they go live?
  3. The output of feedback tools: keep reviewing these, as they can help you understand user behaviour as you adapt the site, while providing focus areas for user testing.

You shouldn’t conduct tests without data to back up what you’re doing. Without it, the conclusions you arrive at won’t be as impactful as you’d hoped, and in some instances could even be damaging.

So now you know what to use to inform your testing schedule, it’s time to actually go out and start testing.

Still unsure what to test?

We can carry out a UX review to help identify areas for improvement, or do some user testing to give you insight into what your users think of your website.

Get in touch if you’d like to know more about how we can help you gather insights.


Articles by Robert Jones

Robert has over nine years’ experience in market research, user testing, and user research. He has worked with digital marketing advice publisher Smart Insights, creating training guides on personas, UX, and user research, and has written over 100 blog posts for them.