Reflection #4 – Qual. and Quant. Metrics
This week’s assignment was all about the difference between qualitative and quantitative usability metrics. When do you use one over the other? What works better? What about subjective vs. objective measurements? And what are the benefits and limitations of each method in a given scenario? In Jakob Nielsen’s article on usability metrics, he mentions that qualitative measures provide more bang for the buck when it comes to catching the low-hanging usability fruit, but that investing in quantitative metrics can pay off as a way to track progress across iterations.
All of this was particularly interesting to me, since I am now mostly focusing on qualitative methods, but I certainly have a background in quantitative methods. When preparing for usability sessions, I talk with the stakeholders about the most important things for them to learn. If time on task, number of clicks, or other purely quantitative metrics are really important to them, I make sure that we have the ability to measure those things during the sessions. Otherwise, I may set up a list of success criteria that serve as the metrics for success during the session. These can be anything like “gets to page X,” “is able to do Y,” or “notices button Z.” I then have a checklist for each person and mark whether they passed each criterion, passed with assistance, or failed completely. This allows us to quantify some of the goals that the stakeholders may have for a given project.
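Turning those per-participant checklists into numbers is just a tally per criterion. Here is a minimal Python sketch; the participant IDs, criteria names, and outcome labels (“pass” / “assist” / “fail”) are made-up examples, not data from any real study:

```python
from collections import Counter

# Hypothetical session results: participant -> {criterion: outcome}.
sessions = {
    "P1": {"gets to page X": "pass", "notices button Z": "fail"},
    "P2": {"gets to page X": "assist", "notices button Z": "pass"},
    "P3": {"gets to page X": "pass", "notices button Z": "pass"},
}

def tally(sessions):
    """Count pass / assist / fail outcomes for each success criterion."""
    counts = {}
    for outcomes in sessions.values():
        for criterion, outcome in outcomes.items():
            counts.setdefault(criterion, Counter())[outcome] += 1
    return counts

results = tally(sessions)
# e.g. results["gets to page X"] holds the pass/assist/fail counts
```

From there it is easy to report “2 of 3 participants got to page X unassisted” back to stakeholders.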
I also love to employ both the System Usability Scale (SUS) and the Net Promoter Score (NPS) question at the end of every study. SUS is a standard set of 10 counterbalanced questions that yields a single score for the overall usability of the system. If we are doing iterative testing, this gives us a sense of the improvements to the usability of the product; it can also be used to compare usability across products. NPS is the “would you recommend this?” question that my company currently tracks at a macro level. We ask it at the micro level to see whether the individual experience we were testing comes across as positive or negative.
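Both scores follow fixed formulas: for SUS, odd-numbered items contribute (response − 1) and even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to give a 0–100 score; NPS is the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal Python sketch of both:

```python
def sus_score(responses):
    """SUS: 10 responses on a 1-5 scale -> a 0-100 score.
    Odd-numbered items contribute (response - 1), even-numbered
    items contribute (5 - response); the sum is scaled by 2.5."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

def nps(ratings):
    """NPS: 0-10 'would you recommend?' ratings ->
    percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

# The best possible SUS response pattern (5 on odd items, 1 on even):
sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])  # -> 100.0
```

Because the SUS items are counterbalanced (odd items positively worded, even items negatively worded), the scoring flips the even items, which is why a “perfect” answer sheet alternates 5s and 1s.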
Overall, collecting quantitative measures can certainly be valuable for making a case about the performance of a system and for tracking changes across iterations, but it takes a lot of resources, which a team of two with a backlog of research requests has a very hard time managing!