Validate

Test a design hypothesis.

Multivariate testing

What

A test of variations of multiple sections or features of a page to see which combination of variants has the greatest effect. It differs from an A/B test, which varies just one section or feature.

Why

To incorporate different contexts, channels, or user types into how you address a user need. Situating a call to action, content section, or feature set differently can help you build a more effective whole solution from a set of partial solutions.

Time required

2–5 days of effort, 1–4 weeks elapsed through the testing period

How to do it

  1. Identify the call to action, content section, or feature that needs to be improved to increase conversion rates or user engagement.
  2. Develop a list of possible issues that may be hurting conversion rates or engagement. Specify in advance what you are optimizing for (possibly through metrics definition).
  3. Design several solutions that aim to address the issues listed. Each solution should attempt to address every issue, using a unique combination of variants, so the solutions can be compared fairly.
  4. Use a web analytics tool that supports multivariate testing, such as Google Website Optimizer or Visual Website Optimizer, to set up the testing environment. Conduct the test for long enough to produce statistically significant results.
  5. Analyze the testing results to determine which solution produced the best conversion or engagement rates. Review the other solutions as well to see if they contain information worth examining in future studies.
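The steps above hinge on two mechanics: enumerating every combination of variants (step 3) and running the test long enough for the winning combination to reach statistical significance (step 4). The sketch below is a rough illustration of both, not a substitute for an analytics tool: it builds the full-factorial set of combinations with `itertools.product` and compares two combinations' conversion rates with a two-proportion z-test. The section names, variant labels, and visitor counts are all hypothetical.

```python
import itertools
import math


def variant_combinations(sections):
    """Full-factorial combinations: one variant from every section."""
    names = sorted(sections)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(sections[n] for n in names))]


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for a difference in conversion rates.

    Returns (z, two_sided_p), using a pooled standard error and the
    normal CDF expressed via math.erf.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value


# Hypothetical page with two sections, each with variants to test.
sections = {
    "call_to_action": ["Sign up", "Get started"],
    "hero_image": ["photo", "illustration", "none"],
}
combos = variant_combinations(sections)  # 2 x 3 = 6 combinations to test

# Hypothetical results: the control combination vs. the best performer.
z, p_value = two_proportion_z(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
```

Note how quickly the number of combinations grows: each added section multiplies the number of cells, and every cell needs enough visitors to detect a difference, which is why multivariate tests typically need more traffic and a longer elapsed period than an A/B test.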

Applied in government research

No Paperwork Reduction Act (PRA) implications. No one asks the users questions, so the PRA does not apply. See the methods for Recruiting and Privacy for more tips on taking input from the public.

18F

Product review

What

A final review determining the status of the delivered product, examining the timeline, issues, and budget against the original plan, along with any updates or changes the product needs.

Why

To assess how the project plan held up and to identify any immediate needs for the product, before or after delivery.

Time required

Project dependent

How to do it

  1. Form a project planning group for your team. This group will first determine the objectives and goals of the project, and once work begins, it will monitor whether the plan is being followed.
  2. To begin making the plan, do an initial assessment of the issues that need to be worked through and the difficulties that may be encountered in the project.
  3. Whenever possible, involve the community to get feedback on their needs, whether through town halls or focus groups.
  4. As the project continues, manage the plan, and make adjustments if necessary.

Usability testing

What

Observing users as they attempt to use a product or service while thinking out loud.

Why

To better understand how intuitive the team’s design is, and how adaptable it is to meeting user needs.

Time required

30 minutes to 1 hour per test

How to do it

  1. Pick what you’ll test. Choose something that might help users accomplish their goals, such as a sketch, prototype, or even a competitor’s product.
  2. Plan the test. Schedule a research-planning meeting and invite anyone who has an interest in what you’d like to test (using your discretion, of course). Align the group on the scenarios the test will center around, which users should participate (and how you’ll recruit them), and which members of your team will moderate and observe. Prepare a usability test script (example).
  3. Recruit users and obtain their informed consent. Provide a way for potential participants to sign up for the test. Send participants an agreement explaining what participation will entail. Clarify any logistical expectations, such as screen sharing, and share links or files for whatever it is you’re testing.
  4. Run the tests. Moderators should verbally confirm with the participant that it’s okay to record the test, ask participants to think out loud, and otherwise remain silent. Observers should contribute to a rolling issues log. Engage your team in a post-interview debrief after each test.
  5. Discuss the results. Schedule a 90-minute collaborative synthesis meeting to discuss issues you observed, and any questions these tests raise concerning user needs. Conclude the meeting by determining how the team will use what it learned in service of future design decisions.

Example from 18F

Applied in government research

No PRA implications. First, any given usability test should involve nine or fewer users. Additionally, the PRA explicitly exempts direct observation and non-standardized conversation (5 CFR 1320.3(h)(3)). It also specifically excludes tests of knowledge or aptitude (5 CFR 1320.3(h)(7)), which is essentially what a usability test is. See the methods for Recruiting and Privacy for more tips on taking input from the public.
