
A Guide to the Different Types of Usability Testing

Learn about the different types of usability testing — or read this article and more in our ebook: A Beginner’s Guide to Designing UX. Read offline, any time, on any device 🎉


Quick summary

Improved usability = better reviews, increased sales, fewer uninstalls, more active users, and so on. Enter usability testing!

What you'll learn

  • Why usability testing
  • Usability testing vs. user testing
  • Card sorting, tree testing, and more
  • How to conduct a usability test

Usability Testing vs. User Testing

With user research and decent knowledge of UI/UX design, we can design interfaces and experiences that are easy to use. But how do we gain confidence that our design actually meets user expectations? Enter usability testing (not to be confused with user testing, btw). In this awesome article, Vipul Mishra sums up the difference perfectly:

  • User testing: do users need my app?
  • Usability testing: can users use my app?

We can apply user testing to find out whether or not our idea has customers—this happens as soon as the idea arises. Usability testing, on the other hand, is about contextual task completion, and happens once we have an MVP (Minimum Viable Product) ready for testing. As an example, if the primary use of our app is to book a holiday home, usability testing is used to assess how well users are able to do that.

What’s the Value of Usability Testing?

Increased sales, better reviews, fewer uninstalls, more active users, and so on.

Research has suggested that businesses spending around 10% of their budget on improving usability see, on average, a 135% increase in their desired metrics, which makes the value of usability testing very clear.

Let’s take a look at the different types of usability testing.

Different types of usability testing

Tree Test

A tree test is a usability test used by UX researchers to find out where users would navigate in order to find something (a sample question could be: “Where would you go to find our return policy?”). It determines whether or not the navigational structure (and the navigation labels themselves) makes sense. Results are measured by the number of correct answers, the time each tester takes to reach their answer, and the tester’s level of confidence in that answer. Conducting a tree test before card sorting gives us a benchmark against which to measure improvement, provided we also conduct another tree test later to validate any improvements.

To carry out the test, the tester is given a clickable, text-only version of the navigation hierarchy, stripped of all visual design.

Recommended tool? Try Treejack by Optimal Workshop.

Optimal Workshop’s tree testing tool, Treejack
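To make the three tree-test metrics concrete, here’s a minimal Python sketch that tallies them across participants. The result data and field names are purely hypothetical assumptions for illustration, not the export format of any particular tool:

```python
from statistics import mean

# Hypothetical tree-test results, one dict per participant.
# Field names ("correct", "seconds", "confidence") are assumptions.
results = [
    {"correct": True,  "seconds": 14.2, "confidence": 5},
    {"correct": True,  "seconds": 22.8, "confidence": 4},
    {"correct": False, "seconds": 41.0, "confidence": 2},
    {"correct": True,  "seconds": 9.6,  "confidence": 5},
]

# The three metrics described above: correct answers, time taken,
# and self-reported confidence (here on a 1-5 scale).
success_rate = sum(r["correct"] for r in results) / len(results)
avg_time = mean(r["seconds"] for r in results)
avg_confidence = mean(r["confidence"] for r in results)

print(f"Success rate:    {success_rate:.0%}")
print(f"Avg. time:       {avg_time:.1f}s")
print(f"Avg. confidence: {avg_confidence:.1f}/5")
```

Running the same tally on a later tree test (after card sorting) gives us the before/after comparison the benchmark exists for.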

Open Card Sort Test

During open card sort tests, testers are asked to sort navigation items into the appropriate categories. These categories are named by the tester, and offer initial insights into the user’s mental model where navigation and user flows are concerned.

Card sorting can be conducted using tools like OptimalSort, or in a real environment with physical cards.

Open card sort test
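A common way to analyze open card sort results is a similarity count: how often did each pair of cards land in the same category, regardless of what testers named that category? Here’s a small Python sketch with made-up groupings (the card and category names are assumptions for illustration):

```python
from itertools import combinations
from collections import Counter

# Hypothetical open card sort results: each participant's groupings,
# keyed by the category names they invented themselves.
sorts = [
    {"Shop": ["Pricing", "Checkout"], "Help": ["FAQ", "Returns"]},
    {"Buy": ["Pricing", "Checkout", "Returns"], "Support": ["FAQ"]},
    {"Store": ["Pricing", "Checkout"], "Questions": ["FAQ", "Returns"]},
]

# Count how often each pair of cards is grouped together.
pair_counts = Counter()
for participant in sorts:
    for group in participant.values():
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {n}/{len(sorts)} participants")
```

Pairs that nearly all testers group together are strong candidates for sharing a navigation category; pairs that split evenly are worth a closed card sort follow-up.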

Closed Card Sort Test

Closed card sort tests are a summative variation of open card sorting, where the categories are already named, and it’s simply a case of the tester deciding where each navigation item belongs. Closed card sorting is used to confirm any insights regarding the hierarchy of navigation, and we can then follow up with summative tree testing.

Performance Test

Performance tests evaluate the usability of task completion. Given a specific scenario, the tester is asked to complete a task while we watch and listen, and they’re then assigned a score (2 for “completed”, 1 for “had difficulty”, and 0 for “failed”). While this is happening, we identify any roadblocks they encounter (especially in the formative stage), paying special attention to how they overcome these roadblocks (if they do).

Performance testing in the summative stage is conducted with high-fidelity mockups—at this stage there should be no (or few) roadblocks as we measure success rate, time to completion, and overall user satisfaction to confirm that usability has improved since formative testing. Many usability tests are meant to be conducted more than once.
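The 2/1/0 scoring above makes it easy to compare tasks, and formative versus summative rounds, numerically. A minimal sketch, with hypothetical tasks and scores (everything here is made up for illustration):

```python
from statistics import mean

# Hypothetical performance-test score sheet:
# 2 = completed, 1 = completed with difficulty, 0 = failed.
scores = {
    "Book a holiday home": [2, 2, 1, 0, 2],
    "Cancel a booking":    [1, 2, 2, 2, 2],
}

metrics = {}
for task, s in scores.items():
    completion = sum(1 for x in s if x > 0) / len(s)  # finished at all
    ease = mean(s) / 2                                # normalized 0-1
    metrics[task] = (completion, ease)
    print(f"{task}: {completion:.0%} completed, ease {ease:.2f}")
```

Comparing these numbers between the formative and summative rounds is one way to confirm that usability has actually improved.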

Expectancy Test

An expectancy test is an interesting one, as the tester can answer with some rather hilarious responses 😂 if the usability is bad enough (and at times, it undoubtedly will be). The approach here is to ask the usability tester what they think something means (or does) without interacting with it. This is done to gain insights into the initial mental model of the user, so that we can reduce the likelihood of accidental human error.

Eye-Tracking Test

Eye-tracking tests are usually conducted on high-fidelity mockups, so that we can see what actually attracts the user’s attention, for how long, and in which order they focused on certain elements. Eye-tracking studies are useful for analyzing task completion: can users complete the task, and if not, what did they do instead?

Eye-tracking tests can be approximated with heatmaps (which track where the user moves their mouse, a rough proxy for gaze), scrollmaps (which track how far down the user scrolls, and how long they pause to look around), and clickmaps (which track where they actually clicked).
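Under the hood, a clickmap is just click coordinates bucketed into a grid. As a sketch, here’s how raw (x, y) click data, such as you might export from an analytics tool, could be binned into coarse cells (the coordinates and cell size are arbitrary assumptions):

```python
from collections import Counter

# Hypothetical click coordinates (x, y) in pixels.
clicks = [(102, 34), (110, 40), (640, 380), (650, 390), (648, 385), (15, 700)]
CELL = 100  # grid cell size in pixels

# Bucket each click into a (column, row) grid cell.
clickmap = Counter((x // CELL, y // CELL) for x, y in clicks)

# The hottest cells show where clicking concentrates.
for cell, n in clickmap.most_common(3):
    print(f"cell {cell}: {n} clicks")
```

Hot cells over non-interactive elements (like the header example below) are exactly the kind of mismatch these tests exist to surface.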

With all of these maps, there are common traps to watch out for: mouse position only loosely correlates with where the user is actually looking, and aggregated maps can hide important differences between individual users.

FullStory, Hotjar, and Crazy Egg are three tools for conducting eye-tracking-style tests.

In the example below, our heatmap appears to show that readers mistakenly believe our background header animates on hover. Users are only willing to allocate a certain amount of time to our apps and websites, so distracting them this way is bad for business. The point is, we wouldn’t have come to this conclusion without conducting eye-tracking tests (here, using Crazy Egg’s heatmaps).

Eye-tracking test using Crazy Egg’s heatmap tools

5-Second Usability Test

As writers, we find this our favorite type of usability testing. The 5-second usability test is used on individual webpages to assess what the user remembers about the webpage after 5 seconds, and whether or not they were able to complete the scenario task that we set for them. When we consider how impatient users are, it’s easy to see how invaluable this test is. Apart from using the 5-second usability test to assess ease-of-use, we also use it to see if our users understand what our articles are about!

Free Exploration Test

A free exploration test is a summative usability test where the tester has 5 minutes to explore the app or website freely, speaking aloud as they do. From this we can surface any flaws (usability or otherwise) not yet identified. This well-known test is often conducted with remote usability testing tools such as Lookback and UserTesting, which conveniently integrate with the design tools Marvel App and InVision App respectively 👌.

Also, new kid UserLook helps designers kickstart usability testing in seconds.

Remote usability testing with Lookback

Functional Salience Test

A functional salience test is used to decipher which functions are the most important. Instruct the tester to choose the three functions they consider most important from a list (e.g. sign up, about us, and pricing could be the three chosen from a list of everything our app or website does).

We can use these insights to help us design visual hierarchy.

Visual Affordance Test

During a visual affordance test, users are first asked to circle the elements they believe to be clickable (and then again, with elements they don’t believe to be clickable). With these insights we can fine-tune the clarity and clickability of our interactive elements.

Brand Perception Test

With a brand perception test, what’s being measured is the brand message. While this blurs the lines between usability and marketing somewhat, branding does influence our decision to interact with a business further, especially where trust is concerned. The test is used to identify which feelings an app or website arouses: users are shown a set of visual mockups and asked to circle the adjectives that best describe the brand, and from this we can decipher whether users associate the brand with the desired attributes.

BTW: Focus on the Target Audience

Naturally, we need to conduct usability tests with our target audience. These are the users that actually need our app or website, and they’re best placed to assess whether or not it helps them achieve their objective. Assuming that we conducted user testing during our initial user research, we would already have these test subjects at hand. For existing products, these test subjects are usually our existing users and customers.

For best results, use a large sample size and a variety of test subjects.

Usability Testing Best Practices

Here are our top tips for conducting usability tests:

  • Start testing early on (learn sooner, fix issues when they’re less expensive)
  • Implement lean usability testing (test with every iteration)
  • Start with low-fidelity mockups (and add fidelity as ideas become validated)
  • Plan where you’ll conduct the tests (or which usability testing tools you’ll use)
  • Clearly define the scope of each test (don’t aim to fix everything in a single test)
  • Write a script to help things run smoothly and maintain consistency
  • Carefully explain to the tester why we’re conducting these usability tests
  • Assure the tester that we’re testing the design’s usability, not their ability
  • Don’t lead the tester to the answer you want to hear (listen, ask questions)
  • Record results and insights in a spreadsheet, keep an eye out for patterns
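On that last tip, even a tiny script beats scattered notes. Here’s a minimal sketch that logs observations to a CSV file so patterns are easy to spot in any spreadsheet app later; the column names and example rows are assumptions, not a standard format:

```python
import csv

# Hypothetical usability-test observations; adapt the columns to your study.
rows = [
    {"tester": "P1", "task": "Find return policy", "score": 2, "notes": "no issues"},
    {"tester": "P2", "task": "Find return policy", "score": 1, "notes": "hesitated at footer"},
]

# Write one observation per row, with a header for spreadsheet import.
with open("usability_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["tester", "task", "score", "notes"])
    writer.writeheader()
    writer.writerows(rows)
```

A flat file like this also makes it trivial to compare rounds of testing: append each session and filter by task to see whether scores trend upward.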

We found this article by Smashing Magazine to be a terrific resource for understanding more about the hows and whys of usability testing. Definitely worth a read 💯.

If you’d like to read this article (and more) in ebook format, download A Beginner’s Guide to Designing UX to learn more about full-stack user experience design 💪.

Daniel Schwarz, author

Daniel Schwarz

Daniel Schwarz is a digital designer, web developer, and maker by background, but a writer, editor, 3x author, and teacher at heart. Currently a design blog editor at Toptal and SitePoint, writer at .net Magazine and Web Designer Magazine, but occasionally a collaborator with top design companies such as Adobe and InVision.