Create an app that guides users through taking a full facial scan and through analyzing whether the resulting image is good enough to be used in the remote care product.
I created all the screens for the app, tested them with users, and received positive feedback. I also completed the handoff to an external international team, which successfully developed the app.
Before starting to design the app, I wanted to understand the full picture and clarify the tasks ahead of me.
I talked to developers to understand the algorithm constraints and to clinical personnel to understand the workflow constraints.
I learned about the clinical staff's daily tasks and how the device fits into their work. My goal was to uncover their needs, their concerns, and their motivations.
With the constraints and user findings in hand, I defined the basic product requirements with the team and performed a task analysis.
The user research and constraint analysis revealed two main challenges: accurately aligning the patient's face, and comprehensibly describing what a bad image looks like.
I decided to use a guided approach for each of these challenges. For face alignment, I chose to test visual guides showing where the face was pointing and where it should move. For the image analysis, I devised a guided questionnaire with examples to verify each quality parameter.
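The exact guidance logic is not part of this case study, but as a rough illustration of the visual-guide idea, the sketch below shows one way real-time alignment feedback could work, assuming a face-tracking library that reports head yaw and pitch in degrees. The `HeadPose` type, the `alignmentHint` function, the tolerance value, and the sign conventions are all hypothetical, not the shipped implementation.

```typescript
// Hypothetical sketch only: none of these names come from the actual app;
// they illustrate the guided-alignment approach, not its implementation.

interface HeadPose {
  yawDeg: number;   // left/right head rotation, as reported by a face tracker
  pitchDeg: number; // up/down head rotation, as reported by a face tracker
}

const TOLERANCE_DEG = 5; // assumed acceptable deviation from the target pose

// Compare the detected pose against the target pose and produce a
// human-readable cue the UI could render next to a directional overlay.
// Sign conventions (positive yaw = turned right, positive pitch = tilted up)
// are assumptions made for this sketch.
function alignmentHint(pose: HeadPose, target: HeadPose): string {
  const yawError = pose.yawDeg - target.yawDeg;
  const pitchError = pose.pitchDeg - target.pitchDeg;

  const hints: string[] = [];
  if (Math.abs(yawError) > TOLERANCE_DEG) {
    hints.push(yawError > 0 ? "turn left" : "turn right");
  }
  if (Math.abs(pitchError) > TOLERANCE_DEG) {
    hints.push(pitchError > 0 ? "tilt down" : "tilt up");
  }
  return hints.length ? `Please ${hints.join(" and ")}` : "Hold still";
}

// Example: face pointing 12° right of the target, pitch within tolerance.
console.log(alignmentHint({ yawDeg: 12, pitchDeg: 3 }, { yawDeg: 0, pitchDeg: 0 }));
// -> "Please turn left"
```

The design intent the sketch captures is the comparison of the detected pose against a target and the surfacing of one small, actionable correction at a time, which in the app was expressed through visual guides rather than text.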
After testing, I created the full workflow map to break down every necessary page and started designing each of them.
With each iteration and test, I gathered important information that guided my subsequent work. My prototypes became more usable and began to follow the Zeiss UI guidelines.
I created user stories for the developers, and we discussed them in grooming meetings. I had to weigh the app's priorities, address corner cases, and resolve problems as they came up.
In the end, we had a fully defined app that addressed the challenges and met the needs of all the stakeholders involved.