Usability testing, commonly called user testing, is a tool for checking how useful your website or product is and measuring how easy it is to use. It helps teams spot problems early so they can keep making enhancements, and it surfaces functional and organizational issues you can't spot on your own.
With the pressures of impending deadlines and launch dates, it can be tempting to skip usability testing altogether. Skipping this step will inevitably cost more in the long run: going through development a second time is a considerable expense. The time invested in running a usability test before designs go into development helps identify problems early and lets designers make changes ahead of development. Usability testing saves time and money, gives the team a deeper understanding of user needs, and improves everyone's overall experience.
It’s a good practice to integrate remote usability testing into your design and development workflow. Remote testing allows for the fast turnaround needed to stay on track with agile work sprints. Benefits include:
- A deeper understanding of user needs and goals
- Confirmation that a feature is working as intended
- Increased customer satisfaction
- Increased growth & profitability
- Keeps development costs low
- Helps to identify future enhancements
6 Steps of Remote Usability Testing
- Step 1: Setting Goals
- Step 2: Finding the Right Participants
- Step 3: Preparing for the Test
- Step 4: Conducting the Test
- Step 5: Analyzing the Results
- Step 6: Writing the Report
Before you run a test and begin going through the steps, make sure you have the right conditions for conducting one. Ask:
- Will we have access to real users of this site/product?
- Are we able to get immediate user feedback on the things we're testing? Or are they features or functions that you can't observe right away, on the spot (e.g., patterns like personalization, notifications, etc.)?
If the answer is yes, we have access to real users, and yes, we can get immediate feedback on the things we're testing, then let's get testing!
Step 1: Setting Goals
Having a realistic vision of the overall business goals will help lead to increased customer satisfaction. Usability testing may confirm or reveal new business goals. You will learn more about your users, their expectations, and their goals.
- What is the business/usability goal? Example: To find podcast content.
- Why are we testing this? Example: Users have been unaware of podcast content in the past.
- Who is the audience of the test? Examples: Marketing managers, Developers, UX Professionals.
- What are the key parts of the test? Examples: How easy was it to find podcast content, how fast did they find it, and how pleasant was the experience?
- Where is the product in terms of the development cycle? Examples: After the initial launch, the first sprint.
- How much time will the session take? (1 hour or less)
- How much will the participant be compensated?
- How many participants do we want to test? (Five participants typically reveal about 80% of problems.)
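The five-participant rule of thumb comes from the Nielsen/Landauer model, which estimates the share of usability problems found with n participants as 1 - (1 - L)^n, where L is the proportion of problems a single participant uncovers (commonly cited as about 0.31). A minimal sketch of the math, assuming that typical L value:

```python
def problems_found(n, discovery_rate=0.31):
    """Estimated share of usability problems uncovered by n participants,
    per the Nielsen/Landauer model: 1 - (1 - L)^n."""
    return 1 - (1 - discovery_rate) ** n

# Diminishing returns: each added participant uncovers fewer new problems.
for n in (1, 3, 5, 10):
    print(f"{n:>2} participants -> ~{problems_found(n):.0%} of problems")
```

With five participants the model predicts roughly 84% of problems found, which is where rules of thumb in the 80-85% range come from; the exact figure depends on your study's actual discovery rate.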
Step 2: Finding the Right Participants
It's important to include real users of your product or service. Testing with real users will make sure you're receiving the most accurate feedback.
- Who is our target audience? Primary & secondary users? Primary users are those who have direct access to the system. Secondary users generally use the system less than the primary users but are still affected by it.
- User type: Beginner, Intermediate, or Expert, depending on the user’s level of experience with the product or service.
- Role: Content creator, developer, etc.
- Device type: Desktop, tablet, mobile?
Where to look:
- Social media call for participants
- Contacting users directly via email
- Sharing a survey to see which users may be interested in participating (e.g., SurveyMonkey, Typeform, Google Forms)
- Using a paid service such as Ethnio to recruit users
Calendly is handy for scheduling and allows a video conference link to be included in the calendar invite once a session is scheduled.
✓Use this recruitment email template to send to potential participants.
Step 3: Preparing for the Test
Taking the time to prepare the tasks and scenarios that make the most sense for your participants will provide the most actionable results.
Determine which device you’ll use (laptop or desktop computer) and which recording tool you’ll be using (e.g., Zoom). If using Zoom, make sure the user has access to a desktop or laptop computer with a microphone. Ask the user to download Zoom ahead of time and to grant Zoom screen-sharing permission (on macOS, under System Preferences > Security & Privacy). Users sharing their screen for the first time will need to update their preferences and restart Zoom during the call.
If there’s a need to conduct a test on a smartphone or tablet, consider using a service such as Lookback.io.
A typical study usually consists of three parts:
Part One: Pre-session questions
- What do they do?
- What are their expectations with the feature?
- How do they expect it to work?
- How is their current experience? What are their pain/pleasure points?
- What are the most important things they would like to do with this feature?
Part Two: Tasks
- Have the participant conduct no more than five tasks.
- Introduce each task as a user scenario. Example: “You’re looking for some new ways to cook beans. Find a way to save bean recipes to read later.” A scenario provides some context and makes the task more natural for the user. The more naturally participants perform the task, the better the data you will get as a result.
- Record the user conducting a series of tasks.
- Make sure the order of the tasks makes sense.
- Ask the user how they feel about doing the task.
Part Three: Post-session questions
- Ask the user about their overall experience with the feature.
- What do you most remember about this feature?
- Have you seen other websites that have a similar tool?
- How would you describe it in one or two words?
✓Use this testing script template to prepare for the test.
Step 4: Conducting the Test
Set the stage for your users ahead of conducting the test. Tell them how much time the test will take, and let them know there are no right or wrong answers.
- Make sure to arrive at the video conference early for any initial prep and set up.
- If using Zoom, make sure the video function is turned off. Seeing facial expressions could distract from the tasks at hand.
- Consistency is important. For each user, ask the same questions, conduct the same tasks in the same order, using the same words.
- Manage user expectations when you first greet them. Let them know how long the session will take, and tell them you’d appreciate their thinking aloud and being candid about their comments, good and bad.
- Let them know you’re evaluating the feature or product, not them. There are sometimes multiple ways to get to an answer; encourage them to find their own way.
- Remain neutral – you are there to listen and watch. If the participant asks a question, reply with “What do you think?” or “I am interested in what you would do.”
Tips for moderating a usability test:
- Respect the participant’s rights
- Ensure their physical & emotional comfort
- Minimize interruptions
- Be unbiased
- Watch out for signs of extreme frustration and make a note of them
- Listen to them. You should only talk when needed
- You can take notes during the test or review the recording and take notes later. Taking 5-10 minutes immediately after a test to capture key takeaways can be effective.
- Note everything that the participant is doing, where they go, what they say. Was the user successful or not successful? Examples: wrong pathways, confusing page layout, navigation issues, confusing terminology.
- Pull quotes and timestamps for relevant moments.
- Refrain from judging which issues are important and which are not; we don’t want to introduce note-taker bias.
Step 5: Analyzing the Results
Analyzing the final results of your usability test will confirm features or services are performing as planned. It will also reveal action items where there is room for improvement and where design revisions are needed.
- Browse through your notes and categorize the observations into positives, issues, and general observations for each participant.
- Look for patterns between participants. Did any observations occur more than once across participants?
- Categorize issues in terms of a severity scale.
A scale we have used:
- Critical issue: Usability catastrophe; imperative to fix this before release. Examples: An issue that was persistent across all users. Overlooking a primary CTA. Broken navigation links.
- Major issue: Major usability problem; important to fix, high priority. Examples: Navigation hard to use. Confusing terminology.
- Minor issue: Minor usability problem; low priority. Example: Inconsistent use of labels.
- Cosmetic issue: Cosmetic problem only; fix if extra time is available. Example: Not liking a particular color.
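Once observations are categorized, the pattern-finding and severity ranking can be mechanized with a small script. A sketch, assuming a simple (participant, issue, severity) record format; the field names and sample data below are illustrative, not a prescribed format:

```python
from collections import defaultdict

# Severity ranks mirroring the scale above (lower = more urgent).
SEVERITY_RANK = {"critical": 0, "major": 1, "minor": 2, "cosmetic": 3}

# Illustrative observations: (participant_id, issue, severity).
observations = [
    ("p1", "overlooked primary CTA", "critical"),
    ("p2", "overlooked primary CTA", "critical"),
    ("p3", "overlooked primary CTA", "critical"),
    ("p1", "confusing terminology", "major"),
    ("p4", "inconsistent labels", "minor"),
]

def summarize(observations):
    """Count distinct participants per issue, then sort by severity first
    and frequency second, so recurring critical issues rise to the top."""
    hits = defaultdict(set)
    for participant, issue, severity in observations:
        hits[(issue, severity)].add(participant)
    return sorted(
        ((issue, sev, len(people)) for (issue, sev), people in hits.items()),
        key=lambda row: (SEVERITY_RANK[row[1]], -row[2]),
    )

for issue, severity, count in summarize(observations):
    print(f"{severity:>8}  x{count}  {issue}")
```

Counting distinct participants (rather than raw mentions) keeps one vocal participant from inflating an issue's apparent frequency.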
Step 6: Writing the Report
Sharing a final report with the team and stakeholders gives everyone a clear understanding of how well the product or service performed and what the recommended next steps are.
Write a report outlining the following:
- Testing method
- Participant demographics (persona & audience)
- Executive summary (a few paragraphs which briefly explain the high-level findings)
- Detailed findings
- Next steps (if necessary)
- Screenshots, quotes, and videos whenever possible
✓Use this report template to prepare the report.
We hope you'll find these checklists and templates helpful. We pulled this list together for our own learning and to share with stakeholders. We'd love to hear from you about ways to improve this list; please leave comments or tweet suggestions.
Further Reading & References
- Usability 101: Introduction to Usability
- Usability Testing
- 10 Usability Heuristics for User Interface Design
- Conducting a usability test
- Usability.gov templates
- 18F Methods