RocReadaR Portal Usability Testing Plan
Specific usability goals allow us to create evaluation scenarios and tasks that measure whether those goals are being met, and to identify measures that reveal whether participants have trouble completing the tasks. Because metrics are only a minor part of the Senior Project team's scope, we will focus on the general usability of the application. Future testing should focus heavily on the usability of metrics, including whether users can find the metrics they are interested in and read them effectively.
- Is the purpose of the system easily understandable?
- Can users successfully navigate through the application?
- Is the information logically organized and grouped for the user? Can they easily locate the information they are looking for?
- Can the application be used without extra help?
- Are there tasks that users will want to perform that are not currently supported by the RocReadaR portal?
- Are users able to easily understand the account creation system?
- Are users confused by the difference between recognition and media images?
A single usability evaluation will be run for each participant. Each session will consist of a set of tasks plus a pre-test and a post-test questionnaire for the participants to complete. The individual evaluations will take place in the following order:
- Introduction of participant and evaluation monitor
- Each participant will be personally greeted by the evaluation monitor to help them feel comfortable and relaxed.
- Signing of Informed Consent form
- The issue of confidentiality will be explained, and participants will be asked to sign statements indicating their agreement to volunteer in the evaluation.
- Pre-test questionnaire
- Participants will be asked to fill out a short background questionnaire.
- The participants will receive a short, verbal scripted introduction and orientation to the evaluation. This will explain the purpose and objective of the evaluation and any additional information about what is expected of them. They will be assured that the product is the center of the evaluation and not themselves, and that they should perform in whatever manner is typical and comfortable for them. The participants will be informed that they are being observed.
- Performance Evaluation
- Post-test questionnaire (Debriefing Session)
After all tasks are complete or the time expires, each participant will be debriefed by the evaluation administrator. The debriefing will include the following:
- Completion of a brief post-evaluation questionnaire (Post-test form) in which the participants share their opinions on the product's usability, the appearance of application pages, and general impressions of the product
- The participant's overall comments about his or her experience
- The participant's responses to probes from the evaluation monitor about specific errors or problems encountered during the evaluation
The debriefing session serves several functions. It allows the participants to say whatever they like, which is important if tasks are frustrating. It provides important information about each participant’s rationale for performing specific actions, and it allows the collection of subjective preference data about the application and its supporting documentation. After the debriefing session, the participants will be thanked for their efforts, and released.
Data will be collected using subjective cues and a timer. Measures to be collected include the following, recorded on the Performance Evaluation Observation sheet:
- Mean time to complete each task
- Percentage of participants who finish each task successfully
- Number of cases where participants were not able to complete a task due to an error from which they could not recover
- The number of positive statements about the system
- The number of negative or critical statements about the system
- The number and types of errors, including:
  - Non-critical error - the participant makes a mistake but is able to recover during the task in the allotted time
  - Critical error - the participant makes a mistake and is unable to recover and complete the task on time; the participant may or may not realize a mistake has been made
- Observations and comments - the observer notes when participants have difficulty, when an unusual behavior occurs, or when a cause of error becomes obvious
- The number of subjective opinions of the usability and aesthetics of the product expressed by the participants
- The number of times the participant asks for help
- The number of times the evaluation administrator assists the participant.
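The quantitative measures above can be tallied with a short script once observation sheets are transcribed. The sketch below is illustrative only: the record fields, task name, and sample values are assumptions, not data from this plan.

```python
from statistics import mean

# Hypothetical transcribed observation records; one dict per participant per task.
# Field names and values are illustrative assumptions, not actual evaluation data.
observations = [
    {"task": "create_account", "seconds": 95,  "completed": True,  "critical_errors": 0, "noncritical_errors": 1, "help_requests": 0},
    {"task": "create_account", "seconds": 140, "completed": False, "critical_errors": 1, "noncritical_errors": 0, "help_requests": 1},
    {"task": "create_account", "seconds": 80,  "completed": True,  "critical_errors": 0, "noncritical_errors": 0, "help_requests": 0},
]

def summarize(task, records):
    """Compute the plan's quantitative measures for one task."""
    rows = [r for r in records if r["task"] == task]
    done = [r for r in rows if r["completed"]]
    return {
        # Mean completion time, computed over successful completions only.
        "mean_time_s": mean(r["seconds"] for r in done),
        # Percentage of participants who finished the task successfully.
        "success_pct": 100 * len(done) / len(rows),
        "critical_errors": sum(r["critical_errors"] for r in rows),
        "noncritical_errors": sum(r["noncritical_errors"] for r in rows),
        "help_requests": sum(r["help_requests"] for r in rows),
    }

print(summarize("create_account", observations))
```

Counts of positive and negative statements, subjective opinions, and administrator assists could be added as further fields on the same records.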