Reflection #2 – Moderated vs. Unmoderated
For the first three weeks of this course, we are focusing on remote usability. This week, the objective was to learn more about the differences between moderated and unmoderated research tools and techniques. Again, Nate Bolt's "Remote Research" is a particularly helpful resource, laying out the methodologies and services you can use to conduct both types of research. To learn more about one unmoderated research tool, we were tasked with conducting a Loop11 study on a website of our choice.
Loop11 is one of the many options that sprang up in the wake of the rising demand for remote research. It allows you to either embed a research survey into your site using JavaScript, or run a standalone survey that can be distributed via a link or through recruiting services like Ethnio. The Loop11 platform gives you the option to ask survey questions in a variety of formats (multiple choice, rating scale, matrix, etc.) as well as to define tasks that the user has to complete by clicking through your site. On the back end, Loop11 (when used correctly) does much of the data analysis for you, creating charts, tables, graphs, user flows, and click heat maps from all of the data your participants have contributed.
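For context, the JavaScript embed option generally follows the standard third-party widget pattern: a small snippet injects the vendor's script into your page asynchronously. The sketch below is a generic, hypothetical illustration of that pattern, not Loop11's actual snippet, and the URL is a placeholder.

```ts
// Generic third-party embed pattern (hypothetical -- not Loop11's real snippet).
// An IIFE injects the vendor's script asynchronously so it doesn't block rendering.
(function () {
  const s = document.createElement("script");
  s.src = "https://research-tool.example.com/embed.js"; // placeholder URL
  s.async = true; // fetch in the background; execute when ready
  document.head.appendChild(s);
})();
```

The async flag matters here: participants are browsing your live site, so the research script shouldn't slow down the very pages it is measuring.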
However, since Loop11 is very focused on quantitative measures, tasks need to be created with this in mind to make sure the pre-analyzed data ends up being useful. For each task, the researcher sets a "success URL," which defines the page in the flow that counts as successful completion of the task. Having a definite end point allows Loop11 to measure time on task, pages viewed, and whether or not the user got where they were supposed to go (a sketch of this check appears below). Open-ended tasks (such as "find a product that fits your needs") could have hundreds of successful end points that are hard to judge without some qualitative input. Loop11 also has the disadvantage of requiring sites to be loaded in a frame through its servers for proper analysis; this slows sites to a crawl and prevents certain coded elements from working, which frustrates users who assume the site you are testing is to blame. The workaround is to inject JavaScript code into the site to allow for a better connection… but that is impossible when testing a competitor's site. Plus, getting code into the dev queue can be a daunting task for an associate-level researcher at a large company.
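To make the success-URL idea concrete, here is a minimal sketch of how such a completion check could work, assuming a simple normalized-URL comparison. This is my own illustration, not Loop11's actual matching logic, and the task, helper names, and example URLs are all made up.

```ts
// Conceptual sketch of a "success URL" check (assumed logic, not Loop11's).
// A task is marked complete when the participant lands on the page the
// researcher defined as the end point.
interface Task {
  prompt: string;
  successUrl: string; // the page that counts as successful completion
}

function isTaskComplete(task: Task, currentUrl: string): boolean {
  // Normalize both URLs so trailing slashes and query strings
  // don't cause false negatives.
  const normalize = (u: string): string => {
    const parsed = new URL(u);
    return parsed.origin + parsed.pathname.replace(/\/$/, "");
  };
  return normalize(currentUrl) === normalize(task.successUrl);
}

const task: Task = {
  prompt: "Find the shipping policy page",
  successUrl: "https://example.com/shipping",
};
console.log(isTaskComplete(task, "https://example.com/shipping/?ref=nav")); // true
```

This also shows why open-ended tasks break the model: "find a product that fits your needs" has no single successUrl to compare against, which is exactly where qualitative input has to fill the gap.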
Overall, I am not impressed by Loop11's offering. We considered using the tool at my current job, but realized that our analytics tools had much of the same functionality, which we could tap into without all of the extra steps and processes. Besides, we already have plenty of quantitative data about who clicks where, which flows users take most often, and where they drop off the site. What we need is the "why" behind those numbers, and for that we use other tools like UserTesting.com.
Next week I will use what I have gathered from Loop11 to present my findings… and I’ll share what I found here too!