Validation

See if you're on the right track.

Dot voting

What

A simple voting exercise to identify a group’s collective priorities.

Why

To reach a consensus on priorities of subjective, qualitative data with a group of people. This is especially helpful with larger groups of stakeholders and groups with high risk of disagreement.

Time required

15 minutes

How to do it

  1. Bring plenty of sticky notes and colored stickers to the meeting.
  2. Gather everyone on the product team and anyone with a stake in the product.
  3. Quickly review the project’s goals and the conclusions of any prior user research.
  4. Ask team members to take five minutes to write important features or user needs on sticky notes. (One feature per sticky note.)
  5. After five minutes, ask participants to put their stickies on a board. If there are many sticky notes, ask participants to put their features next to similar ones. Remove exact duplicates.
  6. Give participants three to five colored stickers and instruct them to place their stickers on features they feel are most important to meeting the project’s goals and user needs.
  7. Identify the features with the largest number of stickers (votes).
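
The tally in steps 6 and 7 can be sketched as a simple count. This is an illustrative example only — the participant names, feature labels, and the idea of recording each sticker as a (participant, feature) pair are all assumptions, not part of the method itself:

```python
from collections import Counter

# Hypothetical record of votes: one entry per sticker a participant placed.
votes = [
    ("alice", "Search by keyword"),
    ("alice", "Email alerts"),
    ("bob", "Search by keyword"),
    ("bob", "Plain-language help"),
    ("cara", "Search by keyword"),
]

# Count stickers per feature and rank features by number of votes.
tally = Counter(feature for _, feature in votes)
for feature, count in tally.most_common():
    print(f"{count} vote(s): {feature}")
```

In practice the count happens by eye at the board; a script like this is only useful when votes are collected remotely or across several sessions.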

Considerations for use in government

No PRA implications: dot voting falls under “direct observation,” which is explicitly exempt from the PRA, 5 CFR 1320.3(h)3. See the methods for Recruiting and Privacy for more tips on taking input from the public.

Tree testing

What

A test of how well people can navigate to the intended locations within a given website structure. Conduct tree testing after card sorting to increase confidence in navigation content.

Why

To validate how well users understand the labels, categories, and organization of a proposed website structure, and learn where they may get lost while trying to complete a task.

Time required

30 minutes to 1 hour per test

How to do it

  1. Build the tree.
    • Use a tool like Optimal Workshop or a prototyping tool to create a site hierarchy, including all the categories and subcategories you want to test.
    • Provide multiple options under each subcategory to prompt realistic behavior; participants often evaluate link labels by comparing them with alternatives.
  2. Define the tasks.
    • Ask participants to find or do something within the site, focusing on key website goals and user tasks, as well as potential problem areas. Set at least one correct destination for every task.
    • Write tasks that reflect how people might naturally approach the site. Use plain language and avoid using tree labels in the tasks.
    • Limit the activity to 10 tasks to avoid low completion rates or participants getting too familiar with the tree.
  3. Run the test.
    • Send participants the link to the test or watch them do it in person.
    • If possible, ask follow-up questions to get more context that can otherwise be hard to spot in the quantitative data.
  4. Analyze the results.
    • Optimal Workshop includes useful data for further interpretation, such as how many people completed each task successfully, how directly they found the destination, and how long it took them.
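
If you run the test outside a tool like Optimal Workshop, the per-task metrics in step 4 can be computed from raw results. This sketch assumes each result records whether the participant ended at a correct destination ("success") and whether they got there without backtracking ("direct") — the field names and data are invented for illustration:

```python
# Hypothetical results for a single tree-test task.
results = [
    {"success": True,  "direct": True},
    {"success": True,  "direct": False},
    {"success": False, "direct": False},
    {"success": True,  "direct": True},
]

n = len(results)
# Share of participants who reached a correct destination.
success_rate = sum(r["success"] for r in results) / n
# Share who reached it without backtracking up the tree.
directness = sum(r["direct"] for r in results) / n

print(f"Success: {success_rate:.0%}, direct: {directness:.0%}")
```

Low success with high directness usually points to a misleading top-level label (people confidently go the wrong way); low directness with high success suggests labels that are eventually findable but not obvious.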

Considerations for use in government

This method works best with around 50 people, which has PRA implications. See the methods for Recruiting and Privacy for more tips on taking input from the public.

Usability testing

What

Observing users as they attempt to use a product or service while thinking out loud.

Why

To better understand how intuitive the team’s design is, and how adaptable it is to meeting user needs.

Time required

30 minutes to 1 hour per test

How to do it

  1. Pick what you’ll test. Choose something, such as a sketch, prototype, or even a “competitor’s product” that might help users accomplish their goals.
  2. Plan the test. Schedule a research-planning meeting and invite anyone who has an interest in what you’d like to test (using your discretion, of course). Align the group on the scenarios the test will center around, which users should participate (and how you’ll recruit them), and which members of your team will moderate and observe. Prepare a usability test script (example).
  3. Recruit users and obtain their informed consent. Provide a way for potential participants to sign up for the test. Send participants an agreement explaining what participation will entail. Clarify any logistical expectations, such as screen sharing, and pass along links or files for whatever it is you’re testing.
  4. Run the tests. Moderators should verbally confirm with the participant that it’s okay to record the test, ask participants to think aloud, and otherwise remain silent. Observers should contribute to a rolling issues log. Engage your team in a post-interview debrief after each test.
  5. Discuss the results. Schedule a 90-minute collaborative synthesis meeting to discuss issues you observed, and any questions these tests raise concerning user needs. Conclude the meeting by determining how the team will use what it learned in service of future design decisions.
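
The rolling issues log from step 4 can be summarized ahead of the synthesis meeting in step 5 by counting how many participants hit each issue. The log entries, participant IDs, and issue descriptions below are invented; the only assumption is that observers tag each logged issue with the participant who encountered it:

```python
from collections import defaultdict

# Hypothetical rolling issues log: (participant, issue observed).
log = [
    ("p1", "Missed the Submit button"),
    ("p2", "Missed the Submit button"),
    ("p2", "Confused by jargon on step 2"),
    ("p3", "Missed the Submit button"),
]

# Group participants by issue so the team can see how widespread each one is.
by_issue = defaultdict(set)
for participant, issue in log:
    by_issue[issue].add(participant)

total = len({p for p, _ in log})
for issue, who in sorted(by_issue.items(), key=lambda kv: -len(kv[1])):
    print(f"{len(who)}/{total} participants: {issue}")
```

Frequency alone doesn’t set priority — a rare issue that blocks task completion may outrank a common cosmetic one — but it gives the synthesis meeting a concrete starting point.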

Considerations for use in government

No PRA implications. First, any given usability test should involve nine or fewer users. Additionally, the PRA explicitly exempts direct observation and non-standardized conversation, 5 CFR 1320.3(h)3. It also specifically excludes tests of knowledge or aptitude, 5 CFR 1320.3(h)7, which is essentially what a usability test is. See the methods for Recruiting and Privacy for more tips on taking input from the public.

Visual preference testing

What

A method that allows potential users to review and provide feedback on a solution’s visual direction.

Why

To align the established branding guidelines and attributes of a solution with the way end users view the overall brand and emotional feel.

Time required

4-12 hours for style tiles. 30 minutes per participant to get feedback.

How to do it

  1. Create iterations of a style tile that represent directions a final visual design may follow. If branding guidelines or attributes don’t exist, establish them with stakeholders beforehand.
  2. Interview participants about their reaction to the style tiles.
    • Ask questions as objectively as possible.
    • Align questions with the branding guidelines and attributes your project must incorporate.
    • Where possible, let participants provide their feedback unmoderated or at the end of your research.
  3. Compare the results of your research with the agency’s published branding guidelines and attributes.
  4. Publish the results to the complete product team and decide which direction will guide future design efforts.
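
If the feedback in step 2 is collected as numeric ratings of each style tile against the branding attributes, the comparison in step 3 can start from simple per-attribute averages. The tiles, attributes, 1–5 scale, and scores here are all invented for illustration:

```python
from collections import defaultdict

# Hypothetical ratings: (style tile, branding attribute, participant score 1-5).
ratings = [
    ("Tile A", "trustworthy", 5),
    ("Tile A", "trustworthy", 4),
    ("Tile A", "modern", 2),
    ("Tile B", "trustworthy", 3),
    ("Tile B", "modern", 5),
]

# Collect scores for each (tile, attribute) pair, then average them.
scores = defaultdict(list)
for tile, attr, score in ratings:
    scores[(tile, attr)].append(score)

averages = {key: sum(s) / len(s) for key, s in scores.items()}
for (tile, attr), avg in sorted(averages.items()):
    print(f"{tile} / {attr}: {avg:.1f}")
```

Averages only summarize the quantitative side; pair them with the open-ended reactions from the interviews before choosing a direction.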

Considerations for use in government

No PRA implications. The PRA explicitly exempts direct observation and non-standardized conversation, 5 CFR 1320.3(h)3. See the methods for Recruiting and Privacy for more tips on taking input from the public.