Reflection #1 – Remote vs. In-Person Usability
[Note: After 7 weeks of Usability I at Kent State, it’s time for Usability II. I’m starting the reflection count over again. Hope you enjoy!]
For the first week of Usability II, we learned about the pros and cons of remote usability testing versus in-person sessions. Nate Bolt's Remote Research gives a great overview of the practice, as well as the details needed to run a remote study effectively. I have quite a bit of experience determining which methodology to use and advocating for the best fit for a given study; from my very first research experience in college to my current job, I have spent almost five years convincing stakeholders to use one or both methods.
In my current position, we typically do remote research and have refined it down to a science. From setting up the WebEx connection, to starting the Camtasia recording, to walking a participant through connecting with us, everything has been rehearsed and practiced so that we can get straight to work on the tasks at hand. We test almost everything we can remotely: prototypes hosted externally, in-development items (by having participants control our screen), and even visual comp screenshots. The best part is that I can moderate from anywhere, which is especially handy when Boston has a record-breaking snowfall season.
When I joined the company, few if any in-person studies were being conducted. As a plucky young associate, I made the case for doing more studies with participants coming into the office so that I could interact with them directly. When we started this practice, we used in-person studies for research on projects that were still in a development environment, as well as for testing our mobile website (I was part of the team that helped launch the new mobile site, and we ran numerous rounds of testing on prototypes and the finished version using a mobile sled built at home by my wonderful manager, Kirk). These sessions went well, but we found we were running out of participants (when we pulled a list, we saw a lot of repeat names), and weather was a problem during the winter. So now we use a balance of the different methodologies depending on what we are trying to learn.
Over the next few weeks, I will try to share a bit more about how I conduct remote moderated and unmoderated usability studies, as well as my experience testing on smartphones and tablets and eye-tracking a website.