Wednesday, January 11, 2012

Happy users: information architecture via index cards

Users are happiest when your site's structure - its information architecture - matches the way they think about the problem space. Get insight into their thoughts using a card sorting task. You'll be surprised how different their perspective is from yours.

The way you lay out the information on your site or in your app can make the difference between confused and happy users. What items should be visible at the top level? How should navigation menus be grouped? Should the site map be organized by functions or concepts?

As with most design problems, users can't directly give you good answers to these questions. However, gathering data through a card sort can tell you a lot about the way they think about the world.

Card sorting is a great way to understand how users group the information and tasks on your site.

Donna Spencer wrote the definitive guide to card sorting back in 2004. Rather than repeat everything she says, I suggest you read her article. I've adapted my card sorting methods over the years, and here I'll describe a bare-bones technique that you can use to get fast results, without diving into the justifications for each change from Donna's base method.

Overview
Groups of three participants sort a stack of index cards into piles while you watch and listen. Each index card has a task written on it that people can perform on your site. Participants read each card one by one and place it to build piles of similar tasks. When they are done, they write a name for each pile on a blank index card.

After they've sorted the cards, check for piles that contain too many cards (ask participants to sub-sort piles of more than about ten), and probe the areas you heard the group discussing or arguing over during the sort. You can also ask whether there are other tasks the participants perform that you missed. Write these new tasks on blank cards, and have the group place them in the correct piles.

After analyzing the sorted piles and the group names that several groups of participants created, you can arrive at a good approximation of users' desired information architecture for the site.

What goes on the cards?
Each card should have a task written on it in user-centric terms. Although you're interested in where to put the content on your site, what really matters is where people think they should go to get answers. In other words, where they go to achieve tasks. Later on, you can work out what content needs to be present in each location to ensure those tasks are achievable.

You can comfortably run anything from 30 to 150 cards in about an hour - which is the maximum time that participants will want to be involved. That's a lot of cards to hand-write. Instead, it's easiest to create Avery labels (Avery 5352 or similar, 2"x4.25", 10 per page work well) using a mail merge from a spreadsheet. It helps to print a unique code on each card (it can be just a number) so that you can quickly type in the results. Just make sure the code doesn't give hints as to how you think the tasks should be grouped.
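If your task list already lives in a spreadsheet, a few lines of scripting can produce the mail-merge file with codes assigned. The sketch below (the task wording is invented, and the `cards.csv` filename is just an example) shuffles the tasks before numbering them, so the codes carry no hint of any grouping you have in mind:

```python
import csv
import random

# Hypothetical task list; in practice, read these from your spreadsheet.
tasks = [
    "Find out when the next release ships",
    "Change the email address on my account",
    "Download an invoice for last month",
]

# Shuffle before numbering so code order reveals nothing about intended groups.
random.seed(42)  # fixed seed only to make this example reproducible
shuffled = random.sample(tasks, len(tasks))

with open("cards.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["code", "task"])
    for i, task in enumerate(shuffled, start=1):
        writer.writerow([f"C{i:03d}", task])
```

The resulting CSV feeds straight into a mail merge for the label sheets.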

Using printed labels speeds things up, keeps things legible, and adds consistency.

Try to keep each task as short as possible without being ambiguous. Make sure that your task wording doesn't imply a location for the task, and that you don't have multiple tasks that use the same phrasing or terms (participants will lump them together without thinking).

How many participants?
15-20 participants should give you sufficient confidence in the results. That's 5-7 groups of three. Obviously if you have different types of users, you'll want to have enough participants from each user type to see whether they think of the structure the same way or not.

You can run more than one group at once. Your limiting factor is the number of moderators that you have available. Each group needs someone to observe and take notes while they sort, and then refer back to those notes to probe on problem areas after the sort. One downside: the more moderators, the harder it is to compare notes during analysis.

Obviously it would be possible to run all the groups at once without moderators, but then you lose the qualitative data that lets you make decisions later on when you aren't sure which of two navigation menus a certain item belongs in.

How do you collect and interpret data?
As soon as each group finishes the card sort exercise, copy the cards' reference numbers into a spreadsheet. Record the name the group gave each pile alongside that pile's reference numbers, and remember to type in any tasks that participants added during the sort.
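One simple recording layout (this example is hypothetical - the pile names and codes are invented) is one row per pile per group, with the card codes space-separated in a single cell:

```csv
group,pile_name,card_codes
G1,Account settings,C004 C011 C017
G1,Billing,C002 C009 C015
G2,My account,C004 C011 C015
```

Whatever layout you choose, keep it identical across groups so the data is easy to combine later.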

The format that you use for typing in the data will depend upon what kind of analysis you plan on doing.

  • Eyeballing the data is the easiest but least precise technique. This will give you a general idea about the groupings that participants used, and the type of contents they expect in each group. It doesn't give you a very robust understanding of which items were consistently placed together by different groups of participants. 
  • Donna Spencer and Joe Lamantia both have example spreadsheet templates online, but I've found that unless you are the one who created the spreadsheet, it's hard to work out what the author's notation system is. 
  • Syncaps is a cluster analysis tool from William Hudson at Syntagm software. If you capture your data in the right format, you can use Syncaps to give you more insight into the clustering, and to output a dendrogram. Dendrograms are hierarchical maps showing the relationship between items in the card sort. They don't provide a one-to-one mapping with your potential menu structure, but they are a helpful way of seeing how users think. 
A dendrogram shows how items were grouped (from optimalworkshop.com)

A similarity matrix shows clusters of cards that are often piled together (from optimalworkshop.com)

Whichever way you analyze the data, you'll see some clusters with obvious agreement among participants, and others with less. If some items show little agreement (they appear in a different place for each group), or there is an obvious "other/miscellaneous" cluster, that might indicate that participants didn't understand those items, didn't care about them, or that they genuinely didn't fit with the rest of the site's structure and content.
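To make the clustering idea concrete, here's a minimal sketch of the kind of analysis these tools perform. The piles and card codes are invented for illustration, and SciPy is assumed to be available: it counts how often each pair of cards landed in the same pile, converts that into distances, and runs hierarchical clustering (the same linkage structure a dendrogram visualizes).

```python
from collections import defaultdict
from itertools import combinations

from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical sort results: for each group, the piles of card codes they made.
sorts = [
    [["C1", "C2", "C3"], ["C4", "C5"]],  # group 1
    [["C1", "C2"], ["C3", "C4", "C5"]],  # group 2
    [["C1", "C2", "C3"], ["C4", "C5"]],  # group 3
]

cards = sorted({c for piles in sorts for pile in piles for c in pile})

# Co-occurrence: how often each pair of cards ended up in the same pile.
together = defaultdict(int)
for piles in sorts:
    for pile in piles:
        for a, b in combinations(sorted(pile), 2):
            together[(a, b)] += 1

# Distance = 1 - (times together / number of groups), in condensed form.
n_groups = len(sorts)
dist = [1 - together[(a, b)] / n_groups for a, b in combinations(cards, 2)]

tree = linkage(dist, method="average")  # feed to scipy's dendrogram() to plot
clusters = fcluster(tree, t=0.5, criterion="distance")
print(dict(zip(cards, clusters)))
```

With this toy data, C1-C3 fall into one cluster and C4-C5 into another, matching what you would eyeball from the piles; on real data the cut threshold (here 0.5) is a judgment call.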

Variations
The instructions so far assume you're starting from scratch to develop your information architecture. If you have an existing site and you aren't prepared (or able) to wipe out the current navigation structure, you might want to run a closed card sort. In this type of card sort, participants create piles based on group labels that you defined beforehand (probably your existing menu labels). You might let them create one or two new piles and name those, but your main goal is to see how - and whether - participants can work with your existing structure.

The instructions also assume that you have physical access to participants. If you don't, there are Web-based alternatives that let you run card sorts remotely. Optimal Workshop's OptimalSort is my current favorite because of its built-in data analysis and potential to export to Syncaps and spreadsheets for further tweaking. Others are websort.net and userzoom.com. Before you consider rolling your own, check out NIST's free WebCat server-based implementation.

Online tools typically let participants do the sort individually in their own time, without you being able to listen in. The benefit is fast, easy access to more participants; the downside is that you probably won't be able to include as many cards - screen real estate becomes a big issue, and remote users tend to be less motivated. You could also consider using online conferencing to share a desktop-based card sort app like xSort (Mac) or UXsort (Windows) - both of these are free and provide built-in cluster analysis and dendrogram output.

What's next?
Your card sort data tells you how users group tasks on your site. It doesn't tell you how to display those information groups. The information architecture that comes out of a card sorting exercise shouldn't necessarily be implemented directly as a menu system.

Knowing more about why users grouped things the way they did (the information you got from listening to them as they sorted) will help you decide how to display the different parts of the information architecture on the site or in your app. For instance, you might make a distinction between site tools and site content, displaying each in its own menu. Or you might decide that news and events should form the basis of the site's home page, and thus may not need a main menu item. Similarly, support areas could be displayed either as a menu item or as links in the page footer.

Once you've put together a draft of your structure, you can test it with users by doing a reverse card sort. This allows you to get quick feedback before you make any changes in code.
