DM7917 Week 5 – Usability Test 1 Report and Video

A video and audio recording of Usability Test 1

Log

  • 00:10 – Introductory script read to test participant – I lost my place while reading the script and deviated from it momentarily. This could affect the reliability of my user tests, as the introduction is not consistent between sessions.
  • 02:20 – Testing process begins with a brief explanation of the app. This should be included in the introductory script
  • 02:36 – First explanation of what the participant is seeing. This gives some context. “You have just opened up the app, and the app is going to load”
  • 03:00 – Tester is shown first screen and asked an open question. “What would you expect to do on this screen?”
  • 03:07 – Tester would expect to use University credentials to log in to the app, perhaps using Microsoft Single Sign-On, in line with other University online platforms
  • 03:29 – First task objective given to tester: “Create a new account”.
    • I ask, “How would you interact with this screen?” Tester presses the “New User” button, as expected
  • 03:45 – A loading screen is shown to give me time to change the paper “screens”. I incorrectly refer to it as a “loading screen” when “loading animation” would be more accurate
  • 04:06 – I ask “What would you do on this screen?”
  • 04:15 – Tester expresses a wish for their “chosen password manager” to integrate with the password fields during the registration process
  • 04:41 – Tester asks, “So this doesn’t require an email?” – I hadn’t thought about users using their email address to create an account. Thought: why do companies require email addresses and email address verification? Research this later.
    I didn’t know what to say; my response was, “OK, excellent, that’s a good thought. For the sake of this test, at the moment there is no email address required, but it’s a good point.” What could I have said differently?
  • 04:55 – To get back on track, I asked the user how they would interact with the on-screen prompt.
  • 05:30 – Tester is presented with the “Instructions” screen and asked, “What are your thoughts on the screen that has just appeared in front of you?”… While setting up the screen I realised that the overhanging extra pages would reveal to the tester how they must interact with the screen, rather than them relying only on the contents visible within the cardboard cutout…
  • 05:35 – Tester assumes that the picture box with a cross (a standard symbol for an image on a wireframe) is something to interact with, and tries to tap on it. I later discovered that this is because he was trying to follow the written instruction below the image, “Tap a test icon to start a test”… This took me by surprise and I did not know how to react. I tried to stay on course with how the app would respond; I told the tester that nothing had happened following the tap on the image and clarified why… I wasn’t sure that they understood the symbol for an image
  • 06:00 – Tester decides to access the menu screen by pressing the “Hamburger Menu” icon. At this point I realised that I should have given the tester a task objective. He was leaving the “Instructions” screen, potentially because I hadn’t given him an objective, and was now wandering the app with no objective. In my mind it was too late to intervene, but I was curious about what might happen, as this is a learning experience
  • 06:20 – Tester presses the “About” page. I hadn’t prepared this page, as I didn’t expect the test to go off-track! I explain this to the tester
    The tester asks what would be on the “About” screen, which I also did not expect
  • 06:45 – User presses the “Start a Test” button in the menu
  • 07:23 – “Start a Test” screen is presented. Tester understands that they can scroll up and down the screen and tap to start a test. “I assume that is where I hit the bottom [of the screen].” Thought: do I need to make this clearer?
  • 07:50 – I insert a landscape-oriented screen in the portrait position, as I assume that this is how the user will still be holding their device. I hope that this will prompt the tester to rotate their device, which they do.
  • 08:24 – Tester wants to clarify whether the screen will rotate 180 degrees if the user rotates their device. I respond positively as that is my intention
  • 08:49 – I ask the user whether they’ve completed a hazard perception test before… I would have explained how the test works if they had not.
  • 08:53 – Tester interacts with the app as expected when a hazard arises… They tap on the screen. Perhaps I should ask the tester what they think the flag means?
  • 09:21 – I ask “Is that what you’d expected that screen to behave like, or not?”… An open question; adding “or not” helps to stop the question becoming a “leading” question
    • Tester: “The flags indicate what I’ve said is a hazard… Whether or not I’m right at this given time, I assume it will tell me at the end”. Tester is expecting feedback after the test.
  • 10:04 – I present the results, in the current orientation of the device. The tester rotates the device as expected
    • I ask, “What are your thoughts on this screen?” The tester responds, “I assume I didn’t get five stars”, even though it is written on the screen that they did. Maybe I should ask the tester to read the feedback before responding? Or perhaps I need to show an image of five stars so that they do not have to read at all?
  • 10:32 – Tester presses “Continue” which takes them back to the “Start a Test” screen
  • 11:00 – Tester did not expect to see the “Start a Test” screen. “I would have expected [the screen] to have more extrapolations on the hazard it was highlighting, potentially, especially if I had missed one”
  • 11:32 – Tester: “If [the app] were to go automatically back to [this screen], even if I had passed with flying colours, rather than [the button] saying ‘continue’, [it should say] ‘end test’ or ‘finish’. That would make more sense because I would be confused; I would feel I had done something incorrectly, coming back to this after pressing ‘continue’.”
  • 11:50 – Tester presses the hamburger menu button. We’re going off-track here again, as I’m failing to give the tester any objectives. He tries to see the settings screen, which I hadn’t made yet, as it wasn’t supposed to be part of this test! I explain this, but am keen to hear the tester’s thoughts on the screen
  • 12:25 – Tester is looking for a notification to show which hazard perception tests they have passed or failed. I did not include this in the second appearance of the “Start a Test” screen. This is easily remedied, and it is a very good point: letting the user see their previous scores allows them to improve upon them!
  • 12:59 – I ask the tester an open question, inviting any final thoughts about the test
    • Tester explains that they would expect the “Studio Rules” page to be a non-interactive list of rules, but that they would be confused if that page listed tests that are not included on the “Start a Test” page. I agree, and have not planned to include any tests there.
    • Tester also explained that when clicking on a test, it would be nice to be told what they are looking for and why. I’m unsure whether this would be a good idea (from an assessment perspective), as it may give away the answer to the test, so I will need to give this some thought
    • Tester also explains that feedback would be more useful if it explained the outcome of missing a hazard.

Summary

  • I must practise keeping the tester on track and guiding them to test the screens that I need to have tested
    • Allowing the tester to wander the app without an objective did let me observe their natural flow through the app’s pages; however, without objectives, some pages were missed
    • I was pleased with my attempts to ask open questions that were not leading the tester to an answer
  • The hamburger menu button on the “Instructions” screen permitted the tester to avoid reading the instructions
  • The tester tried to act out the first instruction using the image above it. Perhaps I could improve the “onboarding” process by:
    • a) rephrasing the instructions,
    • b) including a short animation for each picture to make it clearer that it is not interactive,
    • c) making the picture interactive for the user, or
    • d) removing the “Instructions” page and opting for a breadcrumb trail throughout the app for first-time users
  • An “Under Construction” screen could appear more professional than explaining “I don’t have that screen yet” when a tester goes off-track

Actions

  • Require an email address and password, rather than a username and password, on the “Registration” page
    • Add an option to log in using University credentials (see the first sketch after this list)
  • Add stars to the “Results” and “Start a Test” screens for users to glance at and see their performance
    • Provide more detailed feedback for the hazard within each test, perhaps on the “Results” screen, including what could happen if the hazard was not noticed
  • Change the “Continue” button at the bottom of the “Results” screen to either “End Test” or “Finish”
  • Remove the Hamburger menu button from the “Instructions” screen
  • Rephrase the instructions screen – perhaps remove command words from the beginning of each sentence
    • Maybe reconsider how wayfinding can inform users where they are in a process, such as “Step 3 of 5” (see the second sketch below)
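
As a side note on the University-credentials action above: below is a minimal, hypothetical sketch of what a Microsoft single sign-on flow could look like, assuming a web-based build of the app and using Microsoft’s @azure/msal-browser library. The client ID and tenant values are placeholders, not real configuration.

```typescript
// Rough sketch only: signing in with University credentials via
// Microsoft single sign-on, using the @azure/msal-browser library.
// "YOUR_APP_CLIENT_ID" and "YOUR_TENANT_ID" are placeholders.
import { PublicClientApplication } from "@azure/msal-browser";

const msal = new PublicClientApplication({
  auth: {
    clientId: "YOUR_APP_CLIENT_ID", // app registration in the University tenant
    authority: "https://login.microsoftonline.com/YOUR_TENANT_ID",
  },
});

// Opens the familiar Microsoft login pop-up and returns the signed-in
// University username (typically the student’s email address).
export async function signInWithUniversityAccount(): Promise<string> {
  await msal.initialize(); // required before any other MSAL call
  const result = await msal.loginPopup({ scopes: ["User.Read"] });
  return result.account?.username ?? "";
}
```

If sign-on worked this way, it would also sidestep the email-verification question raised at 04:41, as the University account is already verified.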
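
For the wayfinding idea in the last action, here is a tiny sketch of a “Step 3 of 5” label; the function name is a placeholder of my own, not part of the app:

```typescript
// Rough sketch: a wayfinding label for multi-step flows such as
// registration or onboarding. "stepLabel" is a placeholder name.
function stepLabel(current: number, total: number): string {
  if (current < 1 || current > total) {
    throw new RangeError(`step ${current} is outside 1..${total}`);
  }
  return `Step ${current} of ${total}`;
}

console.log(stepLabel(3, 5)); // "Step 3 of 5"
```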
