Week 6: Usability Assessment Techniques

Learning Outcomes

Understand:

  • Guidelines encapsulate what people have learned about user-centred design, so that others can easily apply those findings to their own work or use them to evaluate other people’s work.
  • Guidelines used in heuristic evaluation vary greatly in their level of detail.
  • Guidelines that are part of standards usually come with implementation recommendations.
  • There are many ways of measuring aspects of usability; it is up to you to choose the right one for the context.

Remember:

  • definition of heuristic evaluation
  • at least one source of guidelines
  • ways of measuring effectiveness
  • ways of measuring efficiency
  • ways of quantifying user satisfaction
  • what a task analysis involves

Apply:

  • Perform a basic usability assessment of a simple task
  • Apply guidelines to the evaluation of an interface

Preparing for the Lecture

Usability assessment can be done in many different ways, depending on what aspect of usability is measured. The key question is always whether a system is fit for purpose. The Informatics Human Computer Interaction course goes into a lot of detail there; here, we will focus on basic skills that you can use to obtain quick assessments in practice. We will revisit ideas and techniques from this lecture repeatedly throughout the next few weeks, and you will probably use one or more of them in your final Usability Report assessment.

When performing a usability assessment, think first about what you would like to find out. A vague goal such as “find out whether this system is usable” is essentially meaningless unless you know in what context, for what purpose, and by whom the system will be used.

For example, when assessing learning environments, bad questions would be “Do users like the system?”, or “Are the assignment submission systems user-friendly?”. Good questions would be “How quickly can students who have been using the system for several courses find their grades?”, “At what stages are students likely to make mistakes when submitting an assignment?” or “Do students receive clear feedback about successful submission of an assignment?”

Task Analysis

So, before you embark on a usability assessment, make sure that you know what tasks you are going to focus on, and who performs them in which context. We’ve already talked about user characteristics and user contexts in the first weeks of the course, so let’s now focus on task analysis.

Essentially, task analysis involves a deep dive into what people need to do to perform a task, and the key to a successful task analysis is to be as detailed as possible. This example from Applied Behaviour Analysis shows the complexity of even simple tasks, and if you want to know what’s involved in opening email, look at this analysis.
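To make “as detailed as possible” concrete, here is a minimal Python sketch that represents a hierarchical task breakdown as nested steps and prints it as a numbered outline. The task, its steps, and the structure are purely illustrative and not taken from any particular system or method.

    # A toy hierarchical breakdown of "submit an assignment".
    # The steps and their ordering are illustrative only.
    submit_assignment = ("Submit an assignment", [
        ("Log in to the learning environment", []),
        ("Locate the course and open the submission page", []),
        ("Attach the assignment file", [
            ("Check that the file is in an accepted format", []),
            ("Check that the file is within the size limit", []),
        ]),
        ("Confirm the submission", []),
        ("Check for a confirmation message or email", []),
    ])

    def print_steps(task, prefix=""):
        """Print a (name, subtasks) tree as a numbered, nested outline."""
        name, subtasks = task
        print(f"{prefix} {name}".strip())
        for i, subtask in enumerate(subtasks, start=1):
            print_steps(subtask, prefix=f"{prefix}{i}.")

    print_steps(submit_assignment)

Even a toy breakdown like this tends to surface the points where things can go wrong (file formats, size limits, missing confirmation), which is exactly the kind of detail a usability assessment needs.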

Evaluation Techniques

If you are looking for a good introduction to actual evaluation techniques, usability.gov is a great resource. Most of the papers cited there come from the User Experience (UX) community, essentially the community of usability and interface design practitioners, and they will give you very practical hints on how to implement particular techniques.

In particular, I would like to point you to the System Usability Scale (SUS), which is the questionnaire I always recommend when you need a quick way to find out how users view a system.
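If you want to see how raw SUS responses turn into the familiar 0 to 100 score, here is a minimal Python sketch. The scoring rule is the standard SUS one (odd items contribute the response minus 1, even items contribute 5 minus the response, and the sum is multiplied by 2.5); the function name and the example responses are just for illustration.

    def sus_score(responses):
        """Compute a SUS score (0-100) from ten Likert responses (1-5).

        Odd-numbered items are positively worded, so they contribute
        (response - 1); even-numbered items are negatively worded, so
        they contribute (5 - response). The total is scaled by 2.5.
        """
        if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
            raise ValueError("SUS needs ten responses, each between 1 and 5")
        total = sum((r - 1) if i % 2 == 1 else (5 - r)
                    for i, r in enumerate(responses, start=1))
        return total * 2.5

    # One participant's (made-up) answers to items 1-10.
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0

Remember that a single SUS score says little on its own; it becomes useful when you compare it across participants, design iterations, or competing systems.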

Heuristic Evaluation

The basic idea behind heuristic evaluation is that many usability problems can be spotted early if you apply lessons learned both from evaluation experience and from what we know about human computer interaction. Indeed, many researchers see a contribution to guidelines as one of their key outputs: we don’t tell you exactly how to design your solution, but here are some issues you should consider when you design a particular type of solution for a particular user group. We have already encountered an example in the week on Accessibility (the WWW Consortium’s accessible design guidelines).

The classic guidelines are Jakob Nielsen’s ten usability heuristics for user interface design, which can be applied well beyond the actual user interface. Have a read through them; they are also documented at Nielsen’s own web site.
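When you run a heuristic evaluation yourself, the practical part is mostly bookkeeping: log each problem against the heuristic it violates, give it a severity rating (Nielsen suggests a 0 to 4 scale), and then summarise across findings. Here is a minimal Python sketch of that bookkeeping; the findings listed are hypothetical.

    from collections import defaultdict

    # Hypothetical findings: (heuristic violated, severity 0-4, short note).
    findings = [
        ("Visibility of system status", 3, "No feedback after clicking Submit"),
        ("Error prevention", 4, "A submission can be overwritten without warning"),
        ("Visibility of system status", 2, "Upload progress is not shown"),
    ]

    # Group the findings by heuristic and report the worst severity for each,
    # which is a simple way to decide what to fix first.
    by_heuristic = defaultdict(list)
    for heuristic, severity, note in findings:
        by_heuristic[heuristic].append((severity, note))

    for heuristic, issues in sorted(by_heuristic.items()):
        worst = max(severity for severity, _ in issues)
        print(f"{heuristic}: {len(issues)} issue(s), worst severity {worst}")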

Core Readings

Chapter 13 in the textbook is the core reading for today. I also encourage you to revisit Gilbert Cockton’s piece on Usability Evaluation from Week 1, because it contains a list of commonly used metrics.

For designers who would like to know more about task analysis, I can recommend this web resource by Don Clark. You will see that task analysis carries over into many fields, not just education but also product design.


For those of you who have already taken the HCI course, I recommend this critical take on Heuristic Evaluation by Anganes et al.: Amanda Anganes, Mark S. Pfaff, Jill L. Drury, Christine M. O’Toole. The Heuristic Quality Scale. Interacting with Computers 2016; 28(5): 584-597. doi: 10.1093/iwc/iwv031