Reflection #3 – More on Remote Unmoderated Testing

After a week away for spring break, it’s time again to reflect on this week’s assignment. Over the past three weeks I have launched, collected responses for, and analyzed a remote unmoderated user test through Loop11. The culmination of this was a final findings report with a presentation that included color commentary on the results gathered. Last week I talked a lot about the limitations of Loop11, and I made sure to include those details in a slide of my report, since some of the metrics (notably, time on task) were affected by the slowness of the platform. For context, the test I ran focused on DSW.com and three common tasks a visitor might need to complete: finding a shoe to buy, learning about the return and exchange policy, and locating a physical store.

The site performed as expected, in that across the 24 participants who completed the test, the three tasks had an average success rate of 89%. In addition to the task data, quantitative desirability data meant to measure satisfaction and confidence in the site was overwhelmingly positive. Beyond that, however, the Loop11 sessions did not yield the insight that might have come from a more qualitative exercise, or from a more well-rounded and extensive quantitative tool.
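For anyone curious how that headline number rolls up, a task success rate is simply the share of participants who completed each task, averaged across the tasks. The short Python sketch below illustrates the calculation; the per-task pass/fail splits are invented for illustration only (Loop11 reports these figures for you) and are not my actual study data.

```python
# Illustrative only: rolling up an average success rate from
# hypothetical per-task pass/fail outcomes. These splits are made up
# and happen to average out to roughly the 89% reported above.

results = {
    "Find a shoe to buy":       [True] * 22 + [False] * 2,  # 22 of 24 succeeded
    "Return/exchange policy":   [True] * 21 + [False] * 3,  # 21 of 24 succeeded
    "Locate a physical store":  [True] * 21 + [False] * 3,  # 21 of 24 succeeded
}

# Per-task success rate: successes divided by total attempts.
task_rates = {task: sum(outcomes) / len(outcomes) for task, outcomes in results.items()}

# Average success rate across the three tasks.
average_rate = sum(task_rates.values()) / len(task_rates)

for task, rate in task_rates.items():
    print(f"{task}: {rate:.0%}")
print(f"Average success rate: {average_rate:.0%}")  # ~89%
```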

All of this commentary might make you think that I am not a fan of unmoderated testing, when in reality that is far from the truth. One of the materials I went through this week was Kyle Soucy’s presentation “Unmoderated Remote Usability Testing: Good or Evil?”; her take on the benefits and drawbacks of the methodology is spot on. These tools provide a certain flexibility for teams that are overextended or that lack the funds for full lab testing, but not everything can be tested properly without a bit of moderation. Furthermore, each of these tools provides something different, and I feel more comfortable going through qualitative data when the site I am testing already has extensive analytics tools in play.

The rest of this semester will focus on selling usability to an organization, mobile usability (including the gestures mobile users rely on), and eye-tracking and how it can help determine whether a system is usable.
