The Usability Report is an essay in which you document a usability problem, discuss potential solutions, and support your findings with a discussion of the relevant literature.
The original specification reads as follows:
“Choose a web site, a piece of software, an app, or a small gadget of your choice, and choose what aspect of usability you want to focus on, and how you will measure it. Measure this aspect, discuss your findings, and suggest improvements.”
The most important advice I can give you is to focus on a small aspect of usability that you can document well, and for which you know the literature.
Example: Viewing survey results in the LEARN instructor interface versus SurveyMonkey. In LEARN, the survey results are difficult to find, and the options for viewing results are hidden in a dropdown menu. In SurveyMonkey, you can access the results via an icon straight from the list of available surveys.
The second most important piece of advice is to illustrate what you mean by using clearly annotated pictures and screenshots.
Example: I haven’t provided any screenshots with this post. How difficult is it for you to get an idea of the difference between the interfaces?
Suggested Structure
Introduction – what do you investigate and why does it matter?
Example: Surveys are a useful way of getting feedback from students. I have chosen to compare the survey functionality in LEARN to that of another commonly used survey tool. Surveys in LEARN are part of the student interface, whereas using SurveyMonkey requires an external link. Specifically, I am interested in how easy it is to track results. (Note: your actual introduction would be longer and more specific than this!)
Background – context, user group, and existing research (if any), both on the application and on the usability aspect you will be considering
Example: The context is getting feedback on the course or on course design choices; the user group is instructors; existing research could cover feedback from students to teachers and learning environments. The angle I’ll take focuses on mental models and on consistency between interfaces versus consistency within the same interface.
Method – how you investigated your aspect. Provide enough detail that somebody else can replicate your study.
Example: I asked four instructors to take four different surveys implemented in LEARN and SurveyMonkey, and to report on the results of one LEARN and one SurveyMonkey survey. I timed how long it took them to look up the results and interviewed them about their experiences. (Note how many other details you’d need to be able to replicate the study I suggest here.)
Results – what you found: descriptive statistics and key themes from the interviews
Example: Instructors took twice as long on LEARN as on SurveyMonkey. They found LEARN very fiddly. (Note: your actual results section would be both longer and more professionally written than this! See the sketch after this list for one way to compute the descriptive statistics.)
Discussion – how does what you found relate to the literature on the subject? What improvements could be made?
Example: While the LEARN approach is internally consistent within the Learning Environment, it is inconsistent with commonly used survey tools, and suboptimal when it comes to administering and looking up the results of surveys. This can be substantiated by [REFERENCES]. A redesign of the LEARN grade centre is recommended.
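If you want to compute the descriptive statistics yourself, here is a minimal sketch in Python (all task times are hypothetical and purely for illustration; a spreadsheet works just as well):

    # All numbers are hypothetical, for illustration only.
    # Task completion times in seconds for "find the survey results".
    import statistics

    times = {
        "LEARN": [95, 110, 88, 102],
        "SurveyMonkey": [41, 55, 47, 50],
    }

    # Report sample size, mean, median, and standard deviation per tool.
    for tool, seconds in times.items():
        print(f"{tool}: n={len(seconds)}, "
              f"mean={statistics.mean(seconds):.1f}s, "
              f"median={statistics.median(seconds):.1f}s, "
              f"sd={statistics.stdev(seconds):.1f}s")

With only a handful of participants, simple summaries like these (sample size, mean, median, spread) are more honest than inferential statistics.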
For the references, if you don’t know what style you should be using, use APA style.
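For example, a book reference in APA (7th edition) style looks like this: Nielsen, J. (1993). Usability engineering. Academic Press. The general template for a book is: Author, A. A. (Year). Title of work. Publisher.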
Any Questions?
Please post them in the comments.
If you want feedback on your own plans, email me, but be aware that I may post that feedback as an update to this post.