Test script (5% of Assignment mark)
Explain to the reader why a formal script (read to participants) was required.
You can include the script in the body of the report if it is short, but this would usually be included in an appendix. There would typically be an introductory section to the script, as well as specific instructions associated with each task, and a script for any concluding comments. Task-specific sections of the script can be included with the detailed description of each task (see below). The script should:
explain the purpose of the test.
indicate what will be expected of participants.
alert participants to your intention to record (screen/audio/video) and seek their permission.
put participants at ease with the process.
explain how you will protect the privacy of their information.
Task design (10% of Assignment mark)
Most tests will need to be broken up into several distinct tasks. These could be anything from locating a piece of information, completing an online purchase, being able to navigate through a required sequence etc. In some cases it can be useful to time how long it takes for participants to achieve these objectives – this can be an unbiased measure that can be used for comparison after any changes made in response to the testing.
The purpose and objective of each test task should be clearly identified.
Give an overview of the tasks you have chosen to test, and rationale for why they were chosen (remember tasks should be chosen to exercise core and typical functionality). Make sure the reader understands the objective of each task.
For each task:
State (briefly) the purpose / objective(s) of the task
Describe the scenario you have created for the test participants and any assumptions you have asked them (or expected them) to make in this scenario. Briefly outline the steps required to complete the task and the estimated time required. Summarise the instructions given to participants and any other information they were given. Include the section of the formal test script read to participants during the task.
Include details of any task cards that were used.
Make sure the purpose of task cards is explained to the reader of the report. The actual task card details are probably best included in an appendix – but summarise the intent of each task card in the main body of the report. (You don’t need to submit the physical cards, but the contents and design of the cards should be clearly set out.)
Comment on the significance of the order of the tasks. Are you randomising task order? If so explain why.
Indicate estimated time required to complete each task, and therefore the entire test. It is important that you have a sound basis for estimating how long participants are going to take to complete the test. You should have conducted a pilot test to help make this estimate.
Also estimate the turn-around time between tests so that the time required for multiple test subjects can be estimated.
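The arithmetic behind this estimate can be sketched as below. All figures here are hypothetical placeholders – substitute the per-task and turnaround times you measured in your pilot test:

```python
# Hypothetical time budget for a usability-testing schedule.
# Replace every figure with numbers from your own pilot test.

task_minutes = [5, 8, 10, 7]   # estimated minutes per task (from pilot test)
intro_outro_minutes = 10       # script intro, questionnaires, debrief
turnaround_minutes = 15        # reset/setup time between participants
participants = 6

# One session = all tasks plus intro/debrief overhead
per_session = sum(task_minutes) + intro_outro_minutes

# Turnaround occurs between sessions, so (participants - 1) gaps
total = participants * per_session + (participants - 1) * turnaround_minutes

print(f"Per session: {per_session} min")
print(f"Total for {participants} participants: {total} min")
```

Remember to add contingency on top of the computed total in case sessions overrun.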
It is common to use questionnaires to determine details such as age, gender, familiarity with computers / internet, web browsing platform percentages (% mobile / % tablet / % desktop), social media use/preferences etc. Include details of questionnaires given to participants in an appendix. These typically include:
a questionnaire to obtain demographic and other information about participants (ie degree of familiarity with the web, what sort of sites visited / games played / purchases made – how often etc). Include a brief rationale for why this information is useful.
a post-test questionnaire to gather extra info from participants about their experience (Eg: did they like it? Would they use it in real life? What would they change about it etc).
any questions asked during tests to generate a better understanding of your participants or the client app, including a participant’s first impression of the app/site/game being tested, gathered before administering any task.
Try to avoid asking participants to classify themselves using subjective criteria (eg “would you regard your internet skills as beginner, intermediate or advanced”). Instead try to use more objective, quantitative measures such as “indicate the number of hours per week on average that you browse the internet” (break this down according to work / personal / mobile / tablet / desktop). Another useful question is something like: “how regularly do you make purchases / use services such as this in your typical life (give options)”.
Don’t forget to include consent forms and other instruments you may have used, along with a scan of completed questionnaires in an appendix.
Results (10% of Assignment mark)
Summarise the major issues that were discovered.
Organise issues under headings that describe related themes. (If you have identified a number of issues, Affinity Diagramming may help with this process. Include any affinity diagram(s) in an appendix.)
use annotated screenshots to make clear what you are describing
point out the impact that these issues have on the usability of the interactive. Rate these issues by severity. Emphasise any “dealbreaker” issues.
if applicable, describe how any audio/video evidence corroborates your observations – especially regarding the impact of these issues on the test participants, their state of mind when things don’t go as expected, etc. You might include a link to a YouTube video or similar (if you do this, make sure videos are private and participants are de-identified).
include observations from all group members (raw data) in an appendix.
(Please tag each group member’s observations so we know who was responsible).
Consider including one or more photographs of the testing process to make it more tangible to the client.
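The severity-rating step above can be kept systematic by tabulating issues before writing them up. A minimal sketch follows; the issue names and ratings are hypothetical, and the 0–4 scale is an assumption based on Nielsen's commonly used severity convention (0 = not a problem, 4 = usability catastrophe / "dealbreaker"):

```python
# Hypothetical usability issues rated on a 0-4 severity scale
# (an assumed convention, after Nielsen; adapt to your own scheme).
issues = [
    {"issue": "Checkout button hidden below the fold", "severity": 4},
    {"issue": "Inconsistent menu labels across pages", "severity": 2},
    {"issue": "Low-contrast link colour", "severity": 1},
]

# Present the most severe ("dealbreaker") issues first in the report
ranked = sorted(issues, key=lambda i: i["severity"], reverse=True)
for item in ranked:
    print(f'[severity {item["severity"]}] {item["issue"]}')
```

Ranking this way makes it easy to foreground dealbreaker issues and group the rest under thematic headings.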
Recommendations (10% of Assignment mark)
suggest how any usability issues that have been discovered might be rectified. You don’t need to go into too much written detail, but preferably use annotated screenshots to describe how the problem might be fixed (eg in a before-after type diagram). The annotated screenshots (showing your suggested changes) can become the basis of the paper prototype you will have utilised to test the proposed changes (see below).
If you do this it is also worthwhile summarising all of the recommendations in a separate recommendations section.
Validating your Recommendations (20% of Assignment mark)
You will have created a paper prototype. The purpose of the prototype is to represent your recommendations visually. It could be as simple as a wireframe diagram of the relevant sections of the site.
By running a short usability test on the prototype you will hopefully have obtained initial validation of your recommendations.
Briefly explain how insights gleaned from the usability testing of your client’s interactive informed your thinking. You should also be able to draw on evidence from the earlier heuristic evaluation, and card-sort activities.
For example:
- changing the information architecture based on card-sort results
- redesigning a screen (eg changing menus, moving controls).
The important thing here is that you cite the evidence backing up your change recommendations, and that the prototype incorporating the changes (hopefully) demonstrated some improvement when tested.
If you wish to make any suggestions based on your own observations, you should be able to draw on the results of your heuristic evaluation (A1 part 1).
Include a summary of the paper prototyping exercise in an appendix (see next page for appendix suggested detail). Incorporate one or more photographs of the testing session to make it more tangible to the client.
Appendices for part 2:
Consider including the following in appendices of the report if not included (or if only summarised) in the main body: (Items highlighted are the ones we expect to see as a minimum).
Don’t just drop items into the appendices – make sure each item is accompanied by some explanation: what is its purpose, why it is included, etc.
•Task design templates. If you used Snyder’s template to design your task(s), you can include it here, along with a line or two explaining it.
•Personae used to identify target audience for test (recycle from A1 part 1) – [if not in body].
•Test script. Include the complete test script in an appendix if not part of main body of report.
•Task card details. Contents and layout of the task cards you used.
•Number of participants planned-for / time budgeted for testing
Estimate how much time to budget for the testing based on the number of test subjects chosen and the time allowed for each test (including turnaround time ie time between tests). Have some contingency up your sleeve in case things don’t go fully to plan.
•Location and date for testing. Details of when / where testing was carried out.
•Questionnaires and other instruments. Include details of any “test instruments” used – e.g. pre- and/or post-test questionnaires. Include scanned copies of completed questionnaires. Make sure there is no information identifying participants visible.
•Raw results / observations. Include the list of observations made by your team, and any techniques you used to process them (ie affinity diagrams to chunk the data).
Include information on where to find links to audiovisual material (eg a YouTube link) if appropriate.
•Consent form for participants. A consent form ensures that participants understand what will happen during the session and are fully aware of the nature of their involvement. The form allows participants to consent to participating and to being recorded (audio, video, screen capture, or some combination of these), if recording is happening. The form should also give them the option to opt out of recording, and should indicate what procedures you have in place to ensure the privacy of participants. Include scans of signed copies of the forms.
•Observer guidelines. Since testing can often be stressful for participants, it is important that observers behave in a consistent, appropriate and ethical fashion. Having some guidelines for observers can help ensure this happens.
•Receipts. Prepare receipts to be given to participants to acknowledge any cash (or chocolates) paid in return for participation.
•Checklist to use for the test. Compile a checklist itemising everything you need to prepare and do for each test – before, during and after. Eg: clearing the browser cache, list of materials required, reminder to silence phones, provide water / snacks etc.
•Paper prototyping exercise. Summarise the paper prototyping exercise you conducted: include a paragraph or two explaining the purpose of the exercise.
Include images of the paper prototyping materials you prepared, list any applications you used to create them, and note the group member(s) responsible for creating them.
Summarise the test methodology (no need to be as formal as the main usability report). Just explain the process you followed in a few bullet points.
Summarise results observed and conclusions drawn.
If you updated task analysis, scripts, task cards, questionnaires and consent form, include these updated items.