Usability testing competitor products

Don’t be shy – run studies of your competitors’ products to learn how well their software supports users’ tasks.

Teams often shy away from observing users work with competitor products. Unless your lawyers have specifically instructed you to steer clear, there is a lot to be learned from a competitor usability study.

It should be clear that you don’t need to run a usability study if you’re just trying to copy a competitor’s interface – you can do that by playing with the product at your desk. The point of running a study is to learn from the issues users encounter as they work through the software.

Interaction design, terminology, task flow, navigation mechanisms and layout can all cause issues. Better to find those problems in someone else’s design than to bake the same ones into yours. If anything, competitor user testing gives you reasons NOT to copy someone else’s designs!

When I worked at Microsoft, we evaluated the out-of-box experience (from receiving the device to getting it up and running) for several peripherals and PCs, including OS X and Linux-based systems. It was interesting to see users struggling with the same concepts regardless of operating system or manufacturer logo.

Although Microsoft is sometimes accused of copying other interfaces, I know that no copying happened as a result of these user tests; the Microsoft interfaces were typically already too far along to be unduly influenced by the results. Instead, the studies gave us a great understanding of users’ comfort level with technology. They helped us to understand whether problems with the software were more likely due to our implementation or to users’ level of familiarity with key concepts. Surprisingly, they also gave the team a morale boost when it became clear that one competitor’s “it just works” claim wasn’t always true.

Being impartial

Competitor studies require some additional focus during planning.

  • Choose a neutral location to host the study. If possible, watch users working with the software in their own environment, or at least with their own data. If you run a lab-based study in your office, participants may perceive that you want them to criticize the software.
  • Ensure you’ve got the product set up properly. It won’t be as easy to troubleshoot someone else’s product on the fly (although you may get a good indication of their helpdesk’s efficiency).
  • Use the same tasks as you would for user tests of your own software. This gives you a good comparison point. Keep the task wording the same wherever possible.
  • Stay as impartial as you can. Capture objective metrics such as time on task and success rate so that you can make clear, defensible statements about the product.
  • Never lie. You can tell participants that you are researching a variety of different products. If they ask who you work for, tell them. Be prepared to answer their subsequent questions.

To keep impartiality as high as possible, and to avoid potentially uncomfortable questions from participants, it may be better to hire a neutral third party to run this type of study. A good vendor will incorporate your requirements (such as task wording, desired metrics, or areas of focus) and will invite you to observe the sessions. Because they have no vested interest in the outcome, they are also less likely to subconsciously bias participants with leading questions or body language.