Log
00:10 – Introduction given to Usability Tester
01:23 – Testing begins
01:34 – Registration page begins. Tester completes each field using the iPhone's native auto-complete function. Tester also notes the “Help” button as “handy”. I am pleased that the Help button was noticed so soon, although the tester did not need to use it
01:49 – Tester confirms that the prototype is behaving as they would expect
02:07 – Tester reads onboarding process and swipes through without issue. I have overcome the communication difficulties that were a real struggle when paper prototyping!
02:22 – Tester visits the “Available Tests” screen and proceeds to the “Revision” page using their own initiative. They read the contextual statement and are using the application correctly as part of their problem solving
02:35 – Tester expected to be able to tap on the images to see more information. Perhaps this is an area for future development; it would give me more space to explain individual rules to the user
02:39 – Tester swipes through the rules as expected. Much like the onboarding screens, the tester appears to understand from the page indicator that the screen is paged
02:58 – Tester returns to the “Available Tests” screen and begins the first test. They are working to the contextual brief, and using the application’s tab bar alongside their own expectations to navigate the prototype
03:08 – The tester understands that they need to rotate their device to landscape, as the video plays in landscape mode. This is communicating correctly, although it is unclear whether the “push-in” animation or the video frame itself communicated this to the tester
03:14 – I guide the tester through the hazard perception test process, as Adobe XD does not support video playback. Tester appears to confirm their understanding of what is happening through mention of the flags and confirmation of the hazard
03:39 – Tester reads their “Results” feedback and swipes through the paged screens, then finishes the test
04:14 – Tester provides feedback that the UI does not confirm that they have completed the test. I am already aware of this issue and have previously received feedback on this feature in a recent usability test. The feature, albeit useful, did not reach a high-enough priority threshold on the Design Hierarchy of Need and so I have not addressed it yet.
Summary
- Overall, the tester's ability to navigate the application, despite having only a contextual statement, has pleased me. They moved through the “Revision”, “Available Tests”, and “Results” sections with ease, and were able to locate the necessary information efficiently
- Some unexpected behaviour occurred when the tester tried to tap on a rule in the “Revision” screen, but this did not derail their experience
- The tester swiped through paged screens as expected. I suspect that the onboarding process and page indicator element demonstrated this possibility to the tester, and they then recognised it on subsequent screens
Actions
- Expand upon the “Revision” screens by making each rule tappable. A new screen could be made for each rule, which would permit me to have more space to explain and justify the rules
- Research prototyping solutions that support video playback
- A second request for confirmation of completed tests suggests I should address it sooner, even though it does not rank as high-priority on the Design Hierarchy of Need.