As technology advances per Moore’s Law, the number of devices people use will grow, introducing more variability in screen resolution, size, processing power, and more. The problem is that as mobile and tablet users increase, these people will expect to do more on their (sometimes) underpowered devices, including accessing your website. I have been lucky enough to help spearhead mobile research at Vistaprint as they set out on a project to redesign their mobile experience. This post will cover everything from requirements gathering to conducting actual testing of an in-production site in our lab as we worked on this project.
After a week away for spring break, it’s time again to reflect on this week’s assignment. Over the past three weeks I have launched, collected responses for, and analyzed a remote unmoderated user test through Loop11. The culmination of this was creating a final findings report with a presentation that included color commentary on the results gathered. Last week I talked a lot about the limitations of Loop11, and I made sure that I included these details in a slide of my report, since some of the metrics (notably, time on task) were affected by the slowness of the platform. For context, the test that I ran focused on DSW.com and three common tasks you might need to complete: finding a shoe to buy, learning about the return and exchange policy, and locating a physical store.
For the first three weeks of this course, we are focusing on remote usability. This week, the objective was to learn more about the differences between moderated and unmoderated research tools and techniques. Again, Nate Bolt’s “Remote Research” is a particularly helpful resource in listing out the methodologies and services you could use to conduct both types of research. To learn more about one unmoderated research tool, we were tasked with conducting a Loop11 study on a website of our choice.
[Note: After 7 weeks of Usability I at Kent State, it’s time for Usability II. I’m starting the reflection count over again. Hope you enjoy!]
For the first week of Usability II, we learned all about the pros and cons of remote usability testing vs. in-person sessions. Nate Bolt’s Remote Research gives a great overview of the practice, as well as the details needed to effectively run a remote study. I have quite a bit of experience determining which methodology to use and advocating for the best fit for a given study; from my very first research experience in college to my current job, I have spent almost five years convincing stakeholders to use one or both methods.
This, the final week of class, was all about analyzing data and reporting out findings to the team of stakeholders. To me, this is the most arduous and complicated part of the usability process. If I had the choice, I’d pull a Steve Krug and not write a report. Instead, I’d make sure all of the important business owners were involved in the research process from the beginning, watching all of the sessions to see the insights for themselves. Then, I’d walk these people through the site and mention the major findings and suggestions for improvement, followed up with an email highlighting these talking points. Alas, this does not work unless you’ve written a book.
This week, I was given a script and told to moderate a usability study. This is my favorite part of the whole research process: actually getting to talk to a user and watch them use the product. It’s the culmination of all of the prework and meetings, and because of that I make sure usability is a true event for me, my team, and the team I am working with. The assignment didn’t require that I write a script, but I did need to moderate and record a session with an actual user investigating the Papa John’s website and ordering experience.
This week’s assignment was all about the difference between qualitative and quantitative usability metrics. When do you use one over the other? What works better? What about subjective vs. objective measurements? And what are the benefits and limitations of each of the different methods for a given scenario? In Jakob Nielsen’s article on usability metrics, he mentions that qualitative measures provide more bang for the buck when it comes to catching low-hanging usability fruit. But investing in quantitative metrics can pay off because they let you track progress across iterations.
This week our assignment was to create a screener script with qualification questions, and a moderator’s guide with tasks and scenarios. It was perfect for me, because I was working on the same sort of thing for a project at work. The interesting part of this assignment was that we worked in groups to source our questions and tasks.
This week our assignment was to, using the medium of our choice, convince the CEO and CTO of a fictional pizza delivery company what type of usability method they should use as they build a new website before Super Bowl Sunday. Basically, should Papa John and Dom Inos decide to do multiple tests as they develop their site and before they release it to the public? Or, should they opt to do a study at the end of development/after the site has been released to the world? Any testing is better than no testing, but… I would prefer the formative approach.
This week, the first module of Usability I focused on what usability is at its core. This included sussing out what makes it different from other types of research, and why research of this type can be done quickly and with fewer people. As soon as I logged in to Blackboard to start working on the assignment of writing a business proposal justifying the number of usability participants, I saw my favorite graph and had to laugh.