In this interview, Carine Lallemand talks to us about the history of expert evaluations as well as the types of evaluations that have been conducted over the years. Carine is a researcher in psychology and HCI ergonomics at the University of Luxembourg. She is the author of the book Méthodes de design UX (Eyrolles, 2015). You can follow her on her blog, UX Mind, or on Twitter. Thanks, Carine!
In this series, we interview UX professionals about the importance of usability, ergonomic, and expert evaluations.
Interview transcript
The usability expert review is a method invented in the 80s that was initially called heuristic evaluation. It was a bit rigid in style, because people said you needed to follow strict guidelines and stick to a framework to do evaluations. I think that with UX, we've arrived at something more relaxed, more open, less rigid, and more flexible, where people base their work on guidelines to evaluate sites, services, and products, but also use their own expertise, which is so important since user experience is much bigger than just usability. I also think it's a method that works very well in combination with other methods. Although it shouldn't prevent people from involving real users in user tests afterward, the expert evaluation has the advantage of removing known recurring problems, so that the user tests are not skewed by issues we could have corrected ahead of time. And it's interesting that, in the studies I've done, expert evaluations can be supported by flexible tools that allow evaluators to base their work not on a single set of criteria but on a multitude of them, or on personalized criteria, and then to carry out their evaluation in a customized way.
The main change regarding user experience is that all of a sudden we had positive criteria. We're no longer just looking for problems; we're now also looking for what triggers positive experiences and emotions within a system. It's important that future sets of expert evaluation criteria integrate positive benchmarks, not just negative ones or things that are missing. When we do expert evaluations, we rarely consider that something missing can itself be a user experience criterion. For example, if you use a camera without a touch screen – in comparison to everything in 2016 that has a touch screen – the absence of the touch screen will create a poor experience. Looking for missing pieces is something experts are less used to doing. Generally, we look for problems, we note them, and the evaluation is about fixing those issues. I like to see expert evaluations that not only fix problems but also identify positive experience triggers so they can be reinforced.