RocReadaR Portal Usability Testing Plan

Usability Evaluation Goals

Specific usability goals allow us to create evaluation scenarios and tasks that measure whether those goals are being met, along with measures that reveal where participants have trouble completing the tasks. Because metrics are only a minor part of the Senior Project team's scope, we will focus on the general usability of the application. Future testing should focus heavily on the usability of metrics, including whether users can find the metrics they are interested in and read them effectively.

  • Participants will be able to create recognition files for publications with no expressed or visible difficulty
  • Participants will be able to create and manage media associated with recognition files with no expressed or visible difficulty
  • Participants will be able to find information related to the tasks performed if they are confused or need help
  • Participants will be able to report the purpose of the system
  • Participants will be able to create all the data needed for an issue of a simple publication within 10 minutes (assuming they already have the media)
Concerns
  1. Is the purpose of the system easily understandable?
  2. Can users successfully navigate through the application?
  3. Is the information logically organized and grouped for the user? Can they easily locate the information they are looking for?
  4. Can the application be used without extra help?
  5. Are there tasks that users will want to perform that are not currently supported by the RocReadaR portal?
  6. Are users able to easily understand the account creation system?
  7. Are users confused by the difference between recognition and media images?
Usability Evaluation

A single usability evaluation will be run for each participant. Each session will consist of a set of tasks and pre-test and post-test questionnaires for the participants to complete. The individual evaluations will take place in the following order:

  • Introduction of participant and evaluation monitor
    • Each participant will be personally greeted by the evaluation monitor to help them feel comfortable and relaxed.
  • Signing of Informed Consent form
    • The issue of confidentiality will be explained, and participants will be asked to sign statements indicating their agreement to volunteer in the evaluation.
  • Pre-test questionnaire
    • Participants will be asked to fill out a short background questionnaire.
  • Orientation
    • The participants will receive a short, verbal scripted introduction and orientation to the evaluation. This will explain the purpose and objective of the evaluation and any additional information about what is expected of them. They will be assured that the product is the center of the evaluation and not themselves, and that they should perform in whatever manner is typical and comfortable for them. The participants will be informed that they are being observed.
  • Performance Evaluation
    • The performance evaluation consists of a series of tasks that are evaluated separately and sequentially. Individual participants complete the tasks (defined in the Performance Evaluation Script) while being observed by the usability specialists.

  • Post-test questionnaire (Debriefing Session)
    • After all tasks are complete or the time expires, each participant will be debriefed by the evaluation administrator. The debriefing will include the following:
    • Completion of a brief post-evaluation questionnaire (Post-test form) in which the participants share their opinions on the product’s usability, the appearance of application pages, and their general impressions of the product
    • The participant’s overall comments about his or her experience
    • The participant’s responses to probes from the evaluation monitor about specific errors or problems encountered during the evaluation

The debriefing session serves several functions. It allows the participants to say whatever they like, which is important if the tasks were frustrating. It provides important information about each participant’s rationale for performing specific actions, and it allows the collection of subjective preference data about the application and its supporting documentation. After the debriefing session, the participants will be thanked for their efforts and released.

Data Collection Methodology

Data will be collected using subjective cues and a timer. Measures to be collected include the following, recorded on the Performance Evaluation Observation sheet (a sketch of how these measures could be tallied appears after the list):

  1. Mean time to complete each task
  2. Percentage of participants who finish each task successfully
  3. Number of cases where participants were not able to complete a task due to an error from which they could not recover
  4. The number of positive statements about the system
  5. The number of negative or critical statements about the system
  6. The number and types of errors, including:
    1. Observations and comments - The observer notes when participants have difficulty, when an unusual behavior occurs, or when a cause of error becomes obvious.
    2. Non-critical error - A participant makes a mistake but is able to recover during the task in the allotted time.
    3. Critical error - A participant makes a mistake and is unable to recover and complete the task on time. The participant may or may not realize a mistake has been made.
  7. The number of subjective opinions of the usability and aesthetics of the product expressed by the participants
  8. The number of times the participant asks for help
  9. The number of times the evaluation administrator assists the participant.
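Since these measures will be tallied by hand from the observation sheets, a minimal sketch can make the arithmetic concrete. The Python below is illustrative only; the record and field names are hypothetical and not part of RocReadaR. It shows how measures 1, 2, 3, and 8 could be computed once the sheets are transcribed:

  from dataclasses import dataclass
  from statistics import mean

  # Hypothetical record of one participant's attempt at one task,
  # transcribed from a Performance Evaluation Observation sheet.
  @dataclass
  class TaskObservation:
      participant: str
      task: str
      seconds: float        # time to complete, or time at the cutoff
      completed: bool       # finished successfully in the allotted time
      critical_errors: int  # mistakes the participant could not recover from
      help_requests: int    # times the participant asked for help

  def summarize(observations, task):
      """Compute measures 1, 2, 3, and 8 from the list above for one task."""
      rows = [o for o in observations if o.task == task]
      return {
          "mean_seconds": mean(o.seconds for o in rows),
          "pct_successful": 100.0 * sum(o.completed for o in rows) / len(rows),
          "unrecoverable_failures": sum(o.critical_errors > 0 for o in rows),
          "help_requests": sum(o.help_requests for o in rows),
      }

  # Made-up data for illustration only.
  obs = [
      TaskObservation("P1", "create recognition file", 240, True, 0, 0),
      TaskObservation("P2", "create recognition file", 600, False, 1, 2),
  ]
  print(summarize(obs, "create recognition file"))

Keeping one record per participant-task pair means the per-task summaries and the whole-session counts can both be derived from the same transcription of the observation sheets.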

 
