Monday, November 7, 2011

Stay user-focused during development

You kicked off the project with a Design Thinking session. Now that you've started development, run fast, cheap tests to stay user-focused.


A one-week Design Thinking exercise is a great way to kick off the project, but once you start to build the product, you'll run into many more issues. Here are some techniques for staying on track to deliver real user delight.

Incremental user data
You are building your product in increments, so do the same with your user data. Aggregating the results from frequent, small, fast studies helps you check that you're on track.
  • Each piece is cheap and fast
  • Each piece answers specific questions that are preventing the team from moving on
  • In aggregate, the observations back each other up and provide the reliability you need
One of the best ways to get the team really engaged in user research is to show them how it impacts their work. What questions do they have? How would they propose answering them? Making it personal means that the whole team will then want to be involved in observing and interpreting the results. This makes them all more user-aware. It also helps them see that some questions don't get answered in one go. Instead you chip away at the question piece by piece, and there's a cost-benefit trade-off to each piece of research that you do.


User Studies
Traditionally, usability studies seem to take ages to plan, conduct and report back on. Not so with agile usability work.
  • You can cut down the planning time by setting up recurring "revolving door" user tests that happen on a regular (per-sprint) basis.
  • Online user testing gives you access to many users for little money. There's a corresponding drop in the quality of the results you get, but with sensible test design you're likely to find that those results are still good enough.
  • Conducting in-person user studies can get results faster using RITE techniques. This usability testing technique was first described by Michael Medlock, Dennis Wixon and their colleagues at Microsoft. The beauty of their work was that it formalized a process that we'd all been doing for years but all felt very guilty about - namely changing the code being tested in a usability session between participants. As long as you follow a couple of rules, you can maximize the benefit and learning opportunities from each study you run.
  • Reporting back was always usability's weak point. Who wants to read boring usability reports? Luckily, by getting the whole team to be observers for the studies (RITE again), you can reduce the reporting period from weeks to hours. The team should leave the debrief after the last user session knowing what changes need to be made, and how to make them.    

User-centered techniques that don't need users
Inspection methods, such as heuristic evaluations and cognitive walkthroughs, are easy tools to use with a team. It helps if the session moderator has at least some background in design or user experience, but any team can use them. The team walks through the UI in a structured way and notes down issues that need to be resolved.
Either tool can give you good feedback on a task flow in as little as one hour. Formally checking up on the state of the UI as a team every sprint can stop you from making design errors that would cost a lot of rework time down the line.

Second-hand user data
Once you have some early beta code, you can start getting data from the field. With a little bit of tweaking, data that you'd be collecting anyway can be used to help you make better usability decisions.
  • Server log files and instrumentation metrics can be used to track usage, errors and abandonment. 
  • Data on user actions with the site or product (registrations, purchases, frequency of use, etc.) help you work out which areas of the interface are bottlenecks and need most improvement.
  • Support call issues and marketing survey results can get you more qualitative information that will help narrow down the reasons behind the issues, and what you should do about them. 
  • Once you have larger numbers of users, A/B testing can help you work out which designs perform better.
  • If you've got an existing product in the market, or if you are entering a space with established players, you can check out what forum users and bloggers say about your or your competitors' products.
Repurposing this second-hand data is easier if you consider your usability research questions before you field a market research survey, capture instrumentation, or set up your support structure. Often, some simple tweaks can give you hard numbers that resolve big questions for the team.
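As a sketch of what "tracking usage and abandonment" from logs can look like: the snippet below assumes a hypothetical log format (timestamp, user ID, funnel step per line) and made-up step names — your own instrumentation will differ, but the idea of reducing raw events to per-step drop-off numbers carries over.

```python
from collections import defaultdict

# Hypothetical log lines: "timestamp user_id step". The step names below
# are invented for illustration; substitute the events your product logs.
FUNNEL = ["landing", "signup_form", "confirm", "first_purchase"]

def funnel_rates(log_lines):
    """Return the share of users who reached each funnel step."""
    reached = defaultdict(set)
    for line in log_lines:
        _, user, step = line.split()
        if step in FUNNEL:
            reached[step].add(user)
    total = len(reached[FUNNEL[0]]) or 1
    return {step: len(reached[step]) / total for step in FUNNEL}

logs = [
    "t1 u1 landing", "t2 u2 landing", "t3 u3 landing",
    "t4 u1 signup_form", "t5 u2 signup_form",
    "t6 u1 confirm",
]
print(funnel_rates(logs))
# Two thirds of users reach the signup form, one third confirms,
# nobody purchases: the biggest drop is the place to focus research.
```

A table like this tells you *where* users bail out; pairing it with a user test of that step tells you *why*.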

Combine the results for more confidence
Some methods tell you what is wrong. Some tell you why it's wrong. For instance, metrics tell you where the big issues lie, but it's hard to work out from the numbers exactly why there's a problem. User testing tells you why something's wrong, because you get to watch it unfold in front of you. Sometimes, however, it's hard to know just how prevalent the problems that you see would be in normal usage.
  • Combining metrics with user testing gives you more confidence that you are fixing the big issues, and that you're fixing them properly
  • Having seen user test participants' reaction to the product helps you anticipate their responses better when you run a cognitive walkthrough
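The "how prevalent is it?" half of the question is where A/B numbers earn their confidence. As a rough sketch (made-up conversion counts, and a hand-rolled two-proportion z-test — in practice you'd reach for a stats library), this shows how to check whether a difference between two designs is likely to be real rather than noise:

```python
import math

def ab_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for 'designs A and B convert differently'
    (two-proportion z-test, pooled variance)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Illustrative numbers: 120/1000 conversions for A vs 150/1000 for B
print(f"p = {ab_p_value(120, 1000, 150, 1000):.3f}")
```

A low p-value says the difference is probably real; the user-test observations then tell you what, specifically, design B is doing better.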

Work doesn't stop when you release
The best user research data often comes from stuff that you thought was finished. Making users happier is sometimes a case of fixing existing UI rather than building more of it. Even after the team has finished coding a chunk of stories, they can learn a lot about how to design their future work by seeing how users respond to the code in the wild.

