For the first three weeks of this course, we are focusing on remote usability. This week, the objective was to learn more about the differences between moderated and unmoderated research tools and techniques. Again, Nate Bolt’s “Remote Research” is a particularly helpful resource in listing out the methodologies and services you could use to conduct both types of research. To learn more about one unmoderated research tool, we were tasked with conducting a Loop11 study on a website of our choice.
[Note: After 7 weeks of Usability I at Kent State, it’s time for Usability II. I’m starting the reflection count over again. Hope you enjoy!]
For the first week of Usability II, we learned all about the pros and cons of remote usability testing versus in-person sessions. Nate Bolt's Remote Research gives a great overview of the practice, as well as the details needed to run a remote study effectively. I have quite a bit of experience choosing a methodology and advocating for the one best suited to a given study; from my very first research project in college to my current job, I have spent almost five years convincing stakeholders to use one or both methods.
This, the final week of class, was all about analyzing data and reporting findings to the team of stakeholders. To me, this is the most arduous and complicated part of the usability process. If I had the choice, I'd pull a Steve Krug and not write a report at all. Instead, I'd make sure all of the important business owners were involved in the research process from the beginning, watching every session to see the insights for themselves. Then I'd walk these people through the site, point out the major findings and suggestions for improvement, and follow up with an email highlighting those talking points. Alas, this doesn't work unless you've written a book.