Perform user research and usability evaluations without leaving your desk
Numerous usability evaluation methods do not require in-person usability testing. When designing the user interface for a medical device, the US FDA and other authorities expect you to use these methods in addition to in-person usability evaluations.
Desk analysis methods complement in-person usability evaluations. Manufacturers and human factors specialists often apply these methods before engaging with representative users. Why recruit representative participants to use your device and reveal use problems that you could have identified through desk-based usability evaluation activities such as task analysis, heuristic evaluation, or HAZOP, or by searching databases and other sources of known use problems for existing devices?
Usability evaluation methods that you can perform from your desk
Effective usability evaluation methods that you can perform from your desk include:
• Known Use Problems Analysis
• Task Analysis
• Heuristic Evaluation
• Expert Review
• Comparative Analysis
• Competitor Analysis
• Anthropometrical Analysis
• Remote Usability Test
Let’s look at a few of the desk-based methods that can be performed quickly and without too many resources.
Known Use Problems Analysis
Authorities such as the US FDA expect you to know of use problems reported to their databases. If the authorities are aware of relevant known use problems with your device or a similar device, so should you be. Furthermore, this analysis serves as valuable input for your risk analysis and your user interface design. The insights provide you with invaluable knowledge of the problems that exist with similar, already marketed devices and enable you to mitigate or prevent these errors in your design.
Sources of information include competitor devices, post-market surveillance data, databases, medical device recalls, FDA safety communications, and articles. For instance, the MAUDE database (Manufacturer and User Facility Device Experience) enables you to search, categorize, and browse for problems classified as, e.g., ‘use of device’, ‘use of incorrect settings’, and ‘user interface’, to name a few. Furthermore, MAUDE lets you see the harm related to reported use problems. The Known Use Problems Analysis is relevant for, e.g., the FDA Human Factors Evaluation report (section 4 – summary of known use problems) as well as for post-market surveillance.
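MAUDE records can also be queried programmatically via the openFDA device adverse event endpoint. The sketch below assembles a query URL in Python; the field names (`device.generic_name`, `product_problems`) and search syntax are assumptions based on the openFDA documentation, so verify them against the current API reference before relying on the results.

```python
from urllib.parse import quote

# openFDA endpoint exposing MAUDE device adverse event reports
OPENFDA_DEVICE_EVENT = "https://api.fda.gov/device/event.json"

def build_maude_query(device_name: str, problem_term: str, limit: int = 10) -> str:
    """Assemble an openFDA device-event query URL.

    Field names and search syntax are illustrative; check the
    openFDA documentation for the authoritative query format.
    """
    search = (
        f"device.generic_name:{quote(device_name)}"
        f"+AND+product_problems:{quote(problem_term)}"
    )
    return f"{OPENFDA_DEVICE_EVENT}?search={search}&limit={limit}"

url = build_maude_query("pump", "Use of Device")
```

Fetching the resulting URL returns JSON records that can feed directly into your known use problems worksheet.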
Task Analysis
Task analysis is an effective and systematic technique for identifying potential use errors related to using your device. At Technolution we often combine the task analysis with a PCA analysis (Perception, Cognition, Action). See our article on the method here.
A hazard and operability study (HAZOP) can help you identify potential use problems in your task analysis, and it can be combined with your PCA analysis. The key advantage of HAZOP is that it uses guide words to help identify problems in a systematic way. Contact Morten Purup Andersen at firstname.lastname@example.org, if you would like our example sheet of the HAZOP method.
Usability Expert Review
An expert review is a usability-inspection method in which a reviewer examines a design to identify usability problems and potential use errors. The method is often confused with heuristic evaluation, in which usability specialists evaluate your design against a list of usability principles. An expert review is typically performed by at least one usability specialist, who inspects and reviews the user interface for usability problems. The method can be applied at all stages of the development process by inspecting a prototype or even a detailed concept description or a specification – detail levels that cannot be tested effectively with participants. Single device components or isolated user interface features can also be evaluated, which is an advantage over usability testing.
Heuristic Evaluation
A heuristic evaluation is a method in which a group of usability specialists individually evaluate your design against a list of usability principles, i.e. heuristics. The heuristics can also be applied by a single usability specialist in a Usability Expert Review. Several lists of heuristics exist – many of them with overlapping principles. The heuristics used in this article are taken from Zheng et al.:
1. Consistency and Standards – Users should not have to wonder whether different words, situations, or actions mean the same thing. Standards and conventions in product design should be followed.
2. Visibility of system state – Users should be informed about what is going on with the system through appropriate feedback and display of information.
3. Match between system and world – The image of the system perceived by users should match the model the users have about the system.
4. Minimalist – Any extraneous information is a distraction and a slow-down.
5. Minimize memory load – Users should not be required to memorize a lot of information to carry out tasks. Memory load reduces users’ capacity to carry out the main tasks.
6. Informative feedback – Users should be given prompt and informative feedback about their actions.
7. Flexibility and efficiency – Users always learn and users are always different. Give users the flexibility of creating customization and shortcuts to accelerate their performance.
8. Good error messages – The messages should be informative enough such that users can understand the nature of errors, learn from errors, and recover from errors.
9. Prevent errors – It is always better to design interfaces that prevent errors from happening in the first place.
10. Clear closure – Every task has a beginning and an end. Users should be clearly notified about the completion of a task.
11. Reversible actions – Users should be allowed to recover from errors. Reversible actions also encourage users to explore the system.
12. Use the users’ language – The language should always be presented in a form understandable by the intended users.
13. Users in control – Do not give users the impression that they are controlled by the system.
14. Help and documentation – Always provide help when needed.
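In practice, each reviewer logs findings against these heuristics with a severity rating (a 0–4 scale is common), and the findings are then aggregated to prioritize fixes. A minimal sketch, with hypothetical findings and reviewer names:

```python
from collections import defaultdict

# Hypothetical findings: (reviewer, heuristic number, severity 0-4,
# where 4 is a usability catastrophe)
findings = [
    ("reviewer_a", 2, 3),  # system state not visible on dosing screen
    ("reviewer_b", 2, 4),  # same issue, rated more severe
    ("reviewer_a", 8, 2),  # error message uses internal codes
]

def worst_severity_per_heuristic(findings):
    """Keep the highest severity any reviewer assigned per heuristic,
    so the most pessimistic assessment drives prioritization."""
    worst = defaultdict(int)
    for _reviewer, heuristic, severity in findings:
        worst[heuristic] = max(worst[heuristic], severity)
    return dict(worst)

priorities = worst_severity_per_heuristic(findings)
```

Here heuristic 2 (visibility of system state) surfaces as the highest-priority problem, which is exactly the kind of ranking that feeds your design iteration.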
Anthropometrical Analysis
Medical device design should account for the many different sizes of the people using your device. Using anthropometrical data in your user interface design can ensure that your device fits as many of your users as possible. Anthropometrical data are measurements of people drawn from existing empirical statistical research databases. They provide design input for the physical layout, shape, and size of the device, and the analysis can be performed without involving test participants.
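A typical anthropometrical calculation is deriving a design range that accommodates, say, the 5th through 95th percentile user from a published mean and standard deviation. A minimal sketch in Python; the hand-breadth mean and standard deviation below are illustrative placeholders, not real survey data:

```python
from statistics import NormalDist

# Illustrative values only: assume adult hand breadth is approximately
# normally distributed with mean 85 mm and SD 6 mm (not real survey data).
hand_breadth = NormalDist(mu=85, sigma=6)

# Size a grip to accommodate the 5th through 95th percentile user.
p5 = hand_breadth.inv_cdf(0.05)
p95 = hand_breadth.inv_cdf(0.95)
design_range_mm = (p5, p95)
```

Real analyses would pull the distribution parameters for the intended user population from an anthropometric database rather than assuming normality with placeholder values.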
Remote Usability Test
In a remote usability test, a test participant performs tasks on a screen-based user interface. Many remote usability testing tools allow you to record the participant’s screen and voice while he or she is thinking aloud, letting you observe how users interact with your application or device. For testing physical devices, a webcam can record the participant using the device. Remote testing is an effective method for evaluating the clarity and effectiveness of your warnings and instructions.
Figure 1 – Test example: Content and layout of the Panodol PIL (Patient Information Leaflet) downloaded from https://www.medicines.org.uk/emc/product/5916/pil
Remote usability tests can be either moderated or unmoderated:
• A moderated remote usability test allows real-time communication between the test participant and the moderator
• An unmoderated remote usability test does not have real-time communication between the test participant and the moderator
There are advantages and challenges to each variant. Because the unmoderated method does not allow real-time communication between participant and moderator, it is not recommended for all purposes. For instance, it is not recommended for summative testing (human factors validation testing), as an unmoderated usability test does not allow you to follow up on the root cause of a user’s actions or use problems. On the other hand, unmoderated usability tests allow several participants to evaluate your user interface individually and simultaneously in their home environment. The test is not location-dependent and is therefore less time-consuming for the test participants, which is a great advantage during your formative evaluations.
Typically, the test data include a screen recording dubbed with the participant’s voice ‘thinking aloud’ while performing the specified tasks. Having the participant talk out loud gives you insight into the user’s thoughts and experience while using your device’s user interface. You can then analyze the participant’s task performance and subjective comments whenever it suits you, allowing for greater flexibility.
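Once sessions are coded, the analysis often reduces to simple per-task metrics across participants. A minimal sketch, with hypothetical participant IDs and task names:

```python
# Hypothetical coded session data: per participant, per task,
# a (success, completion_time_seconds) pair from the recordings.
sessions = {
    "P01": {"open_pack": (True, 12.4), "read_warning": (False, 30.1)},
    "P02": {"open_pack": (True, 9.8), "read_warning": (True, 22.0)},
}

def task_success_rate(sessions, task):
    """Fraction of participants who completed the given task successfully."""
    results = [tasks[task][0] for tasks in sessions.values() if task in tasks]
    return sum(results) / len(results)

warning_rate = task_success_rate(sessions, "read_warning")
```

A success rate like this, paired with the think-aloud commentary, points you to the tasks that need follow-up in a moderated session.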
In the table below, in-person usability testing is compared to remote testing on a few central aspects.
Remote usability testing also presents new challenges, such as maintaining the confidentiality of your design, safeguarding data privacy, and handling the technical setup of recording the participant’s screen and voice. These are all challenges that you should be aware of, but they can be overcome.
In this article we have reviewed a few of the many options for evaluating your user interface directly from your desk. There are more methods, opportunities and challenges than the ones that our usability specialists have described above.
Get in touch with us to discuss your specific evaluation goals and how we can help your usability evaluation efforts by applying these desk-based methods.
Morten Purup Andersen
Senior Development Engineer, HFE Specialist