We selected three subjects for this round of usability tests. None had participated in our prior studies, although one subject had had some exposure to previous designs through our group website. It was important that the majority of the participants had a fresh eye for the system so that we could judge whether it was sufficiently self-explanatory for first-time viewers. We also learned that the participants were not familiar with Astra Web scheduling software, which, although not planned, may have been good for the study since there is some superficial resemblance between the two designs.
Participants represented the sophomore, junior and senior classes at Olin—the groups that are more likely to encounter room scheduling software in the near future.
Interviews were conducted in a quiet room in East Hall that was conveniently located for everyone involved. We brought our laptops, note-taking supplies, and enough printed copies of the release form, informed consent form, and information sheets. As usual, no electronic recording equipment was used, so that section of the release form was moot. Participants were seated in front of DJ Gallagher’s laptop, where the login page of our current interactive prototype was displayed in a browser window (we have used, and have encouraged evaluators to use, Mozilla Firefox exclusively to view the site, as it has been developed with that browser in mind). Some adjustments in seating were generally necessary for the task-performance portion of the interviews, to allow the three of us to observe interactions with the prototype effectively.
For each test user, the team first explained the nature of the HFID project and allowed them to review and sign the IRB documentation. After obtaining their consent to be interviewed, we asked the users a quick series of questions about their experience with scheduling software (both at Olin and elsewhere). We also gathered some background information about each individual’s experiences and activities at Olin. Then we gave them a basic idea of the functionality of the prototype we had developed, and asked them both to use their imagination for actions that were not yet fully implemented and to think aloud, letting us know whenever the prototype behaved in a way they did not expect.
Once the users were prepared to use the prototype, we gave them a set of three tasks to complete. Since recurrence has yet to be implemented in the prototype, we did not ask the users to schedule a recurring event; instead, the final task asked them simply to “play” with the prototype and think aloud as they performed self-defined tasks. The three scenarios presented to the users were as follows:
- Reserve a room for an HFID meeting that you would like to hold from 3 to 4 PM on October 25th.
- You have received an e-mail indicating that one of your reservations has been overbooked by a faculty member. Change the time to 9–10 AM to resolve the conflict, and notify the attendees of your meeting (Michael Wu, Steven Krumholz, and Daniel Gallagher) of this change.
- Please experiment with the prototype for about 5 minutes, explaining what you are trying to do and how you would expect the interface to respond to your actions.
Two of the users completed the tasks in order. To probe the learnability of the prototype, we asked one user to perform task 2 before task 1, so that he would not have prior experience with how the prototype worked before diving into an advanced task. Finally, we asked the users for additional comments on the prototype, and whether there were any features or functionality they had expected but did not find.
For each of the tasks, we observed the users, took notes, and asked relevant questions. The first task was intended to test the basic functionality of the system: could someone who had no experience with the software successfully reserve a room? What steps did they take to complete the task, and did they make any mistakes? Did they follow an expected path, or deviate in some way? The second task was designed to test advanced functionality of the system. Would the user be able to interpret data that they had not personally entered into the system (since the event information was pre-programmed)? Would the user be able to identify which meeting was in conflict and change it appropriately? How effective was the “My Reservations” interface?
Finally, the third task was perhaps the most important of all. It gave the user free access to any and every feature in the system. From this, we could observe programming bugs, small interface changes that could make a large difference in the accessibility of a feature, and features that were neither intuitive nor useful in their current state. This task, more than the other two, contributed to significant improvements in our interface.