Usability Testing of Apps and Interfaces: 5 Key Principles
The global mHealth market is currently estimated to be worth $27B and is expected to reach over $111B by 2025! The number of medical apps and smart connected devices being released to market will therefore increase vastly over the coming years as healthcare gradually transitions toward a more precision-based model. Personalised medicine will become an increasingly large part of patient care as trends such as EMR, remote monitoring and treatment adherence play larger roles in how patients are treated.
Drawing on our past experience in user interface testing, app development and human factors consultancy, in this blog post we share some tips that have helped us.
- Ensure that the app you are testing is as close to the finished article as possible. Problems such as missing links between screens can cause unnecessary, easily preventable issues during formal usability studies, detracting from the useful insights that should inform the design. It is always best to resolve these issues before progressing to formal testing of the interface with groups of participants.
- While still in the initial design stage, involve as many end users as possible to gather their opinions. Informal wireframes and paper prototypes are perfect: anything that can quickly demonstrate ideas and work through a wide range of concepts.
- Focus on the most difficult tasks, particularly those that are new or specific to that interface. Recruiting 15 people to test a new interface and then only asking them to set up an account is unlikely to uncover many good insights; tasks like that have become such common everyday activities that users can usually complete them instinctively.
- If holding quick, informal guerrilla-style usability sessions, ask participants to complete the tasks that are difficult, unique and specific to that app or interface. This will provide much richer feedback that is far more useful for improving the design.
- Use a combination of closed and open-ended questions to get a good mix of feedback. Closed questions are good for knowing whether someone can easily complete a task, while open-ended questions can reveal whether the feature they are testing is something they would use in the real world and, if not, what they would do differently.
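As a rough illustration of how those two kinds of feedback can sit side by side, here is a minimal sketch (the session data structure and field names are our own invention, not from any particular tool): closed-question results are tallied into completion rates per task, while open-ended comments are kept verbatim for qualitative review.

```python
# Sketch: combining closed-question tallies with open-ended feedback.
# All session data below is invented for illustration purposes.

def summarise_sessions(sessions):
    """Return per-task completion rates and the collected open-ended comments."""
    attempts, completed, comments = {}, {}, []
    for s in sessions:
        task = s["task"]
        attempts[task] = attempts.get(task, 0) + 1
        if s["completed"]:            # closed question: did they finish the task?
            completed[task] = completed.get(task, 0) + 1
        if s.get("comment"):          # open-ended answer: kept verbatim for review
            comments.append((task, s["comment"]))
    rates = {t: completed.get(t, 0) / attempts[t] for t in attempts}
    return rates, comments

sessions = [
    {"task": "pair device", "completed": True,  "comment": "Pairing step was unclear"},
    {"task": "pair device", "completed": False, "comment": "Gave up at the Bluetooth screen"},
    {"task": "log dose",    "completed": True},
    {"task": "log dose",    "completed": True,  "comment": "Would prefer a reminder instead"},
]

rates, comments = summarise_sessions(sessions)
print(rates)           # completion rate per task, e.g. 0.5 for "pair device"
print(len(comments))   # number of open-ended comments collected
```

The completion rates give a quick quantitative signal about which tasks need attention, while the verbatim comments point to *why* a task failed and what participants would change.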