Super-easy user testing
Online user testing can provide useful insights at different stages of a project, and, *bonus points*, it’s quite cheap and easy! Most of the places I’ve worked (i.e. government) have had minimal budget and time set aside for conducting user testing. So, I had to find alternative ways to test and support (or refute) my information architecture and design decisions.
My favourite online tools belong to Optimal Workshop: Optimal Sort, Treejack and Chalkmark. Treejack is the one I’ve used the most, so I’ll focus on it in this post.
Testing your site structure
I’ve used Treejack to test both the structure of whole sites and specific sections of sites. It lets me gather feedback from a large user base, both inside and outside an organisation.
With Treejack you input your site or content structure (it needs to be hierarchical, i.e. a tree) – either manually or by importing a file – and set tasks for participants to do. Treejack tracks their movements within the structure as they complete each task, showing how people think the content is, or should be, structured. It can also show whether the terminology you have used makes sense to the users. If they’re not following the path you expect, it may be that they don’t understand the wording you’ve used.
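To make the setup concrete: a tree test is really just a hierarchy plus a set of tasks, each with one or more “correct” destinations. Treejack takes the tree through its own interface or a spreadsheet import, so purely as an illustration, here’s a minimal Python sketch of that shape – the section names, task wording and helper function are all made up for this example:

```python
# A hypothetical intranet tree as nested dicts: branches are dicts,
# leaf pages are lists of page names. (Treejack takes this via its
# UI or a spreadsheet import -- this just illustrates the shape.)
site_tree = {
    "Home": {
        "Work Tools": {
            "Human Resources": ["Apply for leave", "Payslips"],
            "Finance": ["Expense claims"],
        },
        "News": [],
    }
}

# A task pairs an instruction with the "correct" destination path.
task = {
    "instruction": "You would like to take some time off over "
                   "Christmas - where would you apply for this?",
    "correct_path": ["Home", "Work Tools", "Human Resources",
                     "Apply for leave"],
}

def contains_path(tree, path):
    """Sanity-check that a task's correct path exists in the tree."""
    node = tree
    for label in path[:-1]:
        if not isinstance(node, dict) or label not in node:
            return False
        node = node[label]
    return isinstance(node, list) and path[-1] in node

print(contains_path(site_tree, task["correct_path"]))  # True
```

Checking each correct path against the tree before launching the test catches the easy mistakes (a renamed heading, a typo in a leaf page) while they’re still cheap to fix.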
The fine balance between clear and vague
The easiest part is inputting the structure – the more difficult part is coming up with tasks that make sense to the participants but don’t give away the answer. It can be quite tricky to get the right amount of ambiguity, without confusing them.
I like to keep the words and names used in the questions as generic as possible. For example, when testing the Human Resources section of an intranet site, I set the following task:
You would like to take some time off over Christmas – where would you apply for this?
I decided not to use the word ‘leave’ in the question – i.e. You would like to take leave over Christmas… – as I knew it was a frequently used term. I wanted the test participants to think a little more about their answer – using the word ‘leave’ would have given them a hint as to where to go (even though it was a fairly obvious question, anyway). And, it was a way to double-check that ‘leave’ was actually a recognisable term to them.
The results of this task were interesting to me, as there was a 33% failure rate – a little too high for my liking. This, and the results from some of the other tasks, led me to question whether ‘Work Tools’ was the best top-level heading.
Failing on purpose
In this next example, I used Treejack to test an existing site navigation hierarchy (for a site aimed at volunteer coordinators) – to prove to the owners that it wasn’t working.
They had put the ‘Resolving disputes’ section under the ‘Legislation’ heading, which I didn’t think made a whole lot of sense. By asking participants where they would look for the relevant information, I was able to demonstrate that most of them wouldn’t think to look under the ‘Legislation’ heading. In this case, the big, red 66% failure rate is what I wanted to see!
In another task, I was pretty happy with the results: an 82% success rate (including the indirect successes) and a low failure rate. The directness bar showed 70%, which indicates that most participants didn’t have to backtrack up the tree to find what they wanted. This was a good sign that the information was organised logically.
All these examples show only some of what you can do with Treejack. As well as tracking successes and failures, it also tracks how long each participant takes to find the required information, and how direct their path to it is – i.e. did they take some wrong turns along the way? It provides statistics and graphs for all aspects of the test, and you can collect free-text feedback from participants. If you’re really keen, you can download the raw data and do your own analysis.
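If you do pull down the raw data, the headline metrics are straightforward to recompute yourself. As a rough sketch – the field names and sample rows below are invented, since Treejack’s real export has its own column layout – success is whether the participant ended at a correct destination, and directness is whether they got there without ever backtracking up the tree:

```python
# Hypothetical raw results for one task, one row per participant:
# the sequence of nodes they clicked, and whether they finished at
# a correct destination. (Field names are made up for this sketch;
# map them onto the columns in the actual export.)
results = [
    {"path": ["Home", "Work Tools", "HR", "Apply for leave"],
     "success": True},                      # direct success
    {"path": ["Home", "News", "Home", "Work Tools", "HR",
              "Apply for leave"],
     "success": True},                      # indirect: backtracked
    {"path": ["Home", "Legislation"],
     "success": False},                     # failure
]

def backtracked(path):
    """A revisited node means the participant went back up the tree."""
    return len(set(path)) < len(path)

total = len(results)
successes = sum(r["success"] for r in results)
direct = sum(1 for r in results
             if r["success"] and not backtracked(r["path"]))

print(f"Success rate: {successes / total:.0%}")  # direct + indirect
print(f"Directness:   {direct / total:.0%}")     # no backtracking
print(f"Failures:     {(total - successes) / total:.0%}")
```

Recomputing the numbers like this also lets you slice the results in ways the dashboard doesn’t – for example, comparing success rates between internal and external participants.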
Breadth vs depth
To be clear, I don’t think online remote user testing can, or should, fully replace in-person testing. It’s just a great option to have in one’s repertoire, and the two methods work really well in combination. Doing both can provide a comprehensive understanding of how users view and use your site.
Whether or not an online test is right for a project depends on who you need to gather input from, the amount of feedback you need, and, of course, resourcing. Online gives breadth, while one-on-one or small group testing provides depth.