Awareness

If you don't know what you don't know, start here.

Comparative analysis

What

A detailed review of existing experiences provided either by direct competitors or by related agencies or services.

Why

To identify competitors’ solutions that excel, are lacking, or are missing critical design elements. Comparative analysis can give you a competitive edge by identifying opportunities, gaps in other services, and potential design patterns to adopt or avoid.

Time required

1–2 hours to analyze and write an evaluation of each competitor.

How to do it

  1. Identify a list of services that would be either direct or related competitors to your service. Pare the list down to four or five.
  2. Establish which criteria or heuristics you will use to evaluate each competing service.
  3. Break down the analysis of each selected competitor into specific focal areas for evaluation. For example, how relevant are search results?
  4. Use a spreadsheet to capture the evaluation and determine how the targeted services and agencies perform based on the identified heuristics.
  5. Present the analysis, which should showcase areas of opportunities that you can take advantage of and design patterns you might adopt or avoid.
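
The spreadsheet in steps 4 and 5 can be sketched as a simple scoring matrix. This is a minimal illustration, not part of the method itself; the competitor names, heuristics, and 1–5 scores below are hypothetical placeholders.

```python
# Minimal sketch of a comparative-analysis scoring matrix.
# Competitor names, heuristics, and scores are hypothetical placeholders;
# scores run from 1 (poor) to 5 (excellent) per heuristic.

HEURISTICS = ["search relevance", "navigation clarity", "form usability"]

scores = {
    "Competitor A": {"search relevance": 4, "navigation clarity": 2, "form usability": 2},
    "Competitor B": {"search relevance": 3, "navigation clarity": 5, "form usability": 4},
}

def average_score(ratings):
    """Average one competitor's ratings across all heuristics."""
    return sum(ratings.values()) / len(ratings)

def weakest_heuristic(all_scores):
    """Find the heuristic where competitors collectively score lowest --
    a likely opportunity area for your own service."""
    totals = {h: sum(s[h] for s in all_scores.values()) for h in HEURISTICS}
    return min(totals, key=totals.get)

for name, ratings in scores.items():
    print(f"{name}: {average_score(ratings):.1f}")
print("Opportunity area:", weakest_heuristic(scores))
```

In a real analysis the spreadsheet also carries qualitative notes per cell; the point of the sketch is only that totals per heuristic surface gaps, and totals per competitor surface who excels.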

Example from 18F

Considerations for use in government

No PRA implications. No information is collected from members of the public.

Content audit

What

A listing and analysis of all the content on an existing website (including pages, files, videos, audio, or other data) that your users might reasonably encounter.

Why

To identify content that needs to be revised in new versions of a website. Content audits can also help you identify who is responsible for content, how often it should be updated, and what role a particular piece of content plays for users.

Time required

3–8 hours

How to do it

  1. Identify a specific user need or user question that you’d like to address.
  2. Create an inventory of content on your website. Navigate through the site from the home page and note the following about every piece of content. (For repeated items like blog posts, consider capturing just a sample.)

    1. Title used in the site’s navigation for that page
    2. Title displayed on the page or item itself
    3. URL
    4. Parent page
  3. Identify the main entry points for the user need you’re addressing. This could be external marketing, the homepage, a microsite, or another page.
  4. From each entry point, trace the pages and tasks a user moves through until they address their need.
  5. For every piece of content they might come across on that task flow, note:

    1. Author(s): who wrote or created the page
    2. Content owner(s): who ensures its credibility
    3. How often or when it was last updated
    4. Comments: qualitative assessment of what to change to better address your identified user need
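
Capturing the inventory fields in step 2 can be partially automated. The following is a minimal sketch using only the Python standard library; the sample HTML is a hypothetical stand-in for a real page source, and a real audit would still record parent page, owner, and comments by hand.

```python
# Minimal sketch of step 2: extracting inventory fields (the displayed
# page title and linked URLs with their navigation labels) from a page's
# HTML source. The sample HTML is a hypothetical placeholder.
from html.parser import HTMLParser

class InventoryParser(HTMLParser):
    """Collects the <title> text and every <a href> on a page."""
    def __init__(self):
        super().__init__()
        self.page_title = ""
        self.links = []          # (href, anchor text) pairs
        self._in_title = False
        self._current_href = None

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            self._current_href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        elif tag == "a":
            self._current_href = None

    def handle_data(self, data):
        if self._in_title:
            self.page_title += data.strip()
        elif self._current_href:
            self.links.append((self._current_href, data.strip()))

sample = """<html><head><title>Benefits Overview</title></head>
<body><a href="/apply">How to apply</a><a href="/faq">FAQ</a></body></html>"""

parser = InventoryParser()
parser.feed(sample)
print(parser.page_title, parser.links)
```

Each `(href, anchor text)` pair becomes a spreadsheet row; the anchor text approximates the title used in the site's navigation, and the page's own `<title>` approximates the displayed title.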

Example from 18F

Additional resources

Considerations for use in government

No PRA implications. No information is collected from members of the public.

Design hypothesis

What

Framing your work as a hypothesis means not only thinking about the thing you’re making or building, but considering whether that work is achieving your intended goals and outcomes. This means thinking about your work as a series of experiments you conduct with your users to learn if you’re on the right path. Instead of asking “Did we ship the shopping cart feature?” you ask: “Did we make it easier and simpler for our customers to buy from us?”

Why

When done collaboratively, hypothesis-building is powerful at getting a team on the same page about what it’s doing and why. It also allows the team to be flexible — if one approach doesn’t result in the outcome you expected, you have implicit permission to change course and try something else.

Templates

Time required

1–2 hours

How to do it

  1. As a team, identify and make explicit the problem you’re trying to solve. What goals or needs aren’t being met? What measurable criteria would indicate progress toward those goals?
  2. As a team, write out the hypothesis for the work you want to do to address the problem(s) you’re trying to solve. You may want to write broad hypotheses at the outset of a project and more specific hypotheses each sprint.

    Here’s a common way to structure your hypothesis:

    We believe that doing/building/creating [this] for [this user] will result in [this outcome].
    We’ll know we’re right when we see [this metric/signal].

  3. Once you’ve formulated your hypothesis, consider the following harm prompt to help the team think about and guard against potential unintended consequences of your work.

    But, this could be harmful for [this user] if [this outcome happens].

  4. Identify a user touchpoint that will allow you to test your hypothesis, such as external marketing, the homepage, a microsite, or something else. Test your hypothesis. If you learn something unexpected, refine your hypothesis, test again, and continue to work incrementally towards your goals.

Additional resources

Considerations for use in government

No PRA implications. No information is collected from members of the public.

Heuristic evaluation

What

A quick way to find common, large usability problems on a website.

Why

To quickly identify common design problems that make websites hard to use without conducting more involved user research.

Time required

1–2 hours

How to do it

  1. Recruit a group of three to five people familiar with heuristic evaluation methods.
    • Evaluators do not necessarily need to be designers, but they should be familiar with common usability best practices.
    • Ideally, they are not users or overly familiar with the site being evaluated.
  2. Provide a set of recognized heuristics for the group to evaluate the site against.
  3. Have each person evaluate the website using the provided heuristics and document problems.
    • Optionally, have them capture a severity level and potential solution (if apparent) for each problem.
  4. Review the data and prioritize problems to address.
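
The review in step 4 amounts to grouping duplicate findings and ranking them. A minimal sketch of that aggregation follows; the findings are hypothetical, and the severity scale (0 cosmetic to 4 catastrophic) is a common convention in heuristic evaluation, not something this method prescribes.

```python
# Minimal sketch of step 4: aggregating problems logged by several
# evaluators and ranking them by how many evaluators found each one
# and by worst reported severity. All findings are hypothetical.
from collections import defaultdict

# (evaluator, problem description, severity 0-4)
findings = [
    ("evaluator 1", "no visible system status on upload", 3),
    ("evaluator 2", "no visible system status on upload", 4),
    ("evaluator 2", "error messages use internal codes", 3),
    ("evaluator 3", "no visible system status on upload", 3),
]

def prioritize(findings):
    """Rank each distinct problem by (evaluator count, max severity)."""
    grouped = defaultdict(list)
    for _, problem, severity in findings:
        grouped[problem].append(severity)
    return sorted(
        grouped.items(),
        key=lambda item: (len(item[1]), max(item[1])),
        reverse=True,
    )

for problem, severities in prioritize(findings):
    print(f"{len(severities)} evaluators, max severity {max(severities)}: {problem}")
```

Problems flagged independently by multiple evaluators with high severity rise to the top, which gives the team a defensible order for fixes.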

Additional resources

Considerations for use in government

No PRA implications, as heuristic evaluations usually include a small number of evaluators. If conducted with nine or fewer members of the public, the PRA does not apply, 5 CFR 1320.5(c)4. If participants are employees, the PRA does not apply. See the methods for Recruiting and Privacy for more tips on taking input from the public.

Hopes and fears

What

An exercise that quickly surfaces a group’s hopes and fears for the future.

Why

To establish a baseline understanding of a group’s expectations and concerns about a project and to give each person an opportunity to voice their perspective.

Time required

30–60 mins

How to do it

  1. Ahead of the session, establish what you want to elicit hopes and fears about. For example, you could ask participants to focus on the whole project or that day’s workshop.
  2. At the beginning of the session, create two columns labeled “Hopes” and “Fears” on a whiteboard or large sticky pad. (In a remote setting, you can do this online using collaboration software such as Mural or Google Docs.)
  3. Ask participants to take 1–2 minutes to write down their hopes on sticky notes (one hope per sticky note).
  4. Invite participants to come up one at a time and add their “hopes” sticky notes to the board and say more about what they wrote. Have participants group their sticky notes as they add them to the board to illustrate emerging themes.
  5. Repeat steps 3 and 4 with fears.

This format can be adapted to include other categories. For example, asking participants to write down skills and experiences can help contextualize each person’s place in the group.

Example from 18F

Additional resources

Considerations for use in government

No PRA implications. No information is collected from members of the public.

Page quality audit

What

An automated assessment report of factors that determine web page quality, including performance, accessibility, search engine optimization, and coding best practices.

Why

To quickly identify issues that impact the user experience of a website so improvements can be prioritized and addressed before doing more in-depth research. This is useful to run at the beginning of a project to establish some baseline quality metrics and as part of quality assurance testing for new work being developed.

Time required

1–2 hours

How to do it

  1. Run the audit.
    • Go to web.dev/measure.
    • Enter the address of the page you want to audit and press Run Audit.
    • It may take a minute or two to finish running.
    • Press the View Report link to see details about the issues that need to be addressed.
  2. Share the results with your team.
    • The View Report link is a unique address you can bookmark to return to this version of the report later or share it with others. You can also download the report as an HTML file to view offline.
    • Have team members provide effort estimates for issues in their area of expertise. There may be issues relating to content, design, and development across audit categories.
    • There is no way to export the report as a spreadsheet. Review results with the team and capture issues in your work tracking system as appropriate.
    • If you log in with a Google account, Measure will save each audit so you can track changes to a page over time.
  3. Work with your team to prioritize fixes.
    • Each of the problems identified impacts the overall user experience. Facilitate discussion among the team about which issues to prioritize based on the effort required and impact achieved.
  4. Audit additional pages as needed.
    • In addition to the homepage, try a few other template pages, like section landing pages or article pages. Page quality will vary based on content, but focusing on a few key templates gives you a sense of issues occurring across the site.
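
Since step 2 notes there is no spreadsheet export, one workaround is to summarize the report's JSON by script. The sketch below assumes a Lighthouse-style report structure (category scores from 0.0 to 1.0); verify the field names against your actual report export, and treat the sample numbers as hypothetical.

```python
# Minimal sketch of step 2: summarizing audit results so the team can
# prioritize. The dict mimics the category scores in a Lighthouse-style
# JSON report (0.0-1.0 per category); the field names and numbers are
# assumptions -- check them against your real report export.
report = {
    "categories": {
        "performance": {"title": "Performance", "score": 0.62},
        "accessibility": {"title": "Accessibility", "score": 0.88},
        "seo": {"title": "SEO", "score": 0.95},
        "best-practices": {"title": "Best Practices", "score": 0.79},
    }
}

def failing_categories(report, threshold=0.9):
    """Return (title, percent score) for categories under the threshold,
    worst first, as candidates for the team's fix list."""
    rows = [
        (cat["title"], round(cat["score"] * 100))
        for cat in report["categories"].values()
        if cat["score"] < threshold
    ]
    return sorted(rows, key=lambda row: row[1])

for title, pct in failing_categories(report):
    print(f"{title}: {pct}")
```

The threshold is a team choice; the output is just a starting list to carry into the prioritization discussion in step 3.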

Additional resources

Considerations for use in government

No PRA implications. No information is collected from members of the public.

Stakeholder interviews

What

A wide-spanning set of semi-structured interviews with anyone who has an interest in a project’s success, particularly client representatives or other internal stakeholders. (For interviewing actual users of the service, see User Interviews in Observation.)

Why

To build consensus about the problem statement and research objectives.

Templates

Time required

1–2 hours per interviewee

How to do it

  1. Create a guide for yourself of the topics you’d like to ask about, plus some specific questions as a backup. Questions will often concern the individual’s role, the organization, the individual’s needs, and metrics for the project’s success.
  2. Sit down one-on-one with the participant, or two-on-one with a note-taker or joint interviewer, in a focused environment. Introduce yourself. Explain the premise for the interview as far as you can without biasing their responses.
  3. Follow the conversation where the participant takes it. They will focus on their priorities and interests. Be comfortable with silences, which allow the participant to elaborate. To keep from getting entirely off course, use your interview guide to make sure you cover what you need to. Ask lots of “why is that” and “how do you do that” questions.
  4. If there are other products they use or your product doesn’t have constraints imposed by prior work, observe the stakeholders using a competing product and consider a comparative analysis.

Additional resources

Considerations for use in government

No PRA implications. The PRA explicitly exempts direct observation and non-standardized conversation, 5 CFR 1320.3(h)3. See the methods for Recruiting and Privacy for more tips on taking input from the public.

Top tasks

What

A robust method that helps you zero in on what really matters to a majority of your users. Ultimately, it helps you identify a short list of 10 or fewer tasks that users are trying to get done on your site, as well as identify their less important tasks (i.e., “tiny tasks”).

Why

Focusing on improving the experience of the top tasks your users have on your site means you will be serving most of your users well. This method is great for lean teams who need to know where to focus their energy when it comes to improving UX.

Time required

Around 12 weeks (assuming your project qualifies for fast-track PRA clearance).

How to do it

  1. Long task list: Create a long list (for example, 150–500) of tasks by consulting various data sources:
    • Behavior on the site: Google Analytics, site search queries (date range: 12 months)
    • Customer feedback channels: What are customers talking or asking about?
    • Stakeholder interviews
    • Past UX research that’s available to you
    • Comparative analysis of similar products’ sites
    • A review of every page of the site you are creating the task list for

    As you review the data, ask yourself what users may be trying to do on your site.

    Capture your tasks in a spreadsheet. The spreadsheet should document the task, the general category it belongs to (this helps you organize your task list and makes it easier to trim the list later), the source of where you got that task, and notes.

  2. Short list: Trim the list to 70 or fewer tasks:
    • Consolidate duplicative tasks
    • Work with stakeholders to shorten the list to no more than 70 tasks and finalize wording of tasks (consult task wording best practices). This can be done with collaborative workshops using a whiteboard tool.

    Plan for it to take 3-5 weeks to get to a final short list.

  3. Survey: Get feedback from users. Ask users to vote on their most important tasks via survey. Collect other relevant data so you can cross-analyze top tasks against those data points (for example, ask participants how long they have used the product so you can distinguish the top tasks of new users from those of seasoned users).
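
The analysis behind step 3 is a vote tally with a cumulative share, which makes the small set of dominant tasks visible. A minimal sketch follows; the task names and vote counts are hypothetical placeholders.

```python
# Minimal sketch of step 3's analysis: tallying survey votes to separate
# top tasks from "tiny tasks." Task names and vote counts are
# hypothetical placeholders.
votes = {
    "check application status": 410,
    "find office hours": 55,
    "renew a license": 380,
    "download annual report": 12,
    "update contact information": 143,
}

def rank_tasks(votes):
    """Sort tasks by votes and attach each task's cumulative share of
    all votes -- a few top tasks typically account for most of them."""
    total = sum(votes.values())
    ranked, running = [], 0
    for task, count in sorted(votes.items(), key=lambda kv: kv[1], reverse=True):
        running += count
        ranked.append((task, count, round(100 * running / total)))
    return ranked

for task, count, cum_pct in rank_tasks(votes):
    print(f"{cum_pct:>3}% cumulative  {count:>4} votes  {task}")
```

In this hypothetical data the top two tasks carry 79% of all votes; tasks at the tail of the list are candidates for the “tiny tasks” bucket.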

Additional resources

Considerations for use in government

Top tasks works best with 100 or more survey participants, so you will need PRA clearance. See if your project qualifies for the fast-track PRA clearance process.