Posts

Reflection: Week #7 – Test Plans & Wrap-Up

This is my final post in the series of weekly reflections for my Interaction Design course.

For this last week in IxD at Kent State, we were asked to come up with a research plan for testing the prototype that we had created. Luckily for me, this is in my wheelhouse and I was excited to create something that would work for this project. To be honest, I am surprised I hadn’t written anything about this for my Usability I or Usability II courses, but now I get to share one of my favorite things to use for writing up a research plan.

Read more

Reflection #6 – Eye Tracking

As the end of the course approaches, our focus turns to eyetracking in usability research. This is a methodology I’ve always thought would be interesting to use, for a couple of reasons:

  • When you watch enough usability sessions, it’s easy to wonder if a user is seeing what is on the page
  • The output of gaze plots and heat maps can be interesting to analyze and use for presentations

However, after reading about how to conduct a study like this (“Eyetracking Web Usability” by Nielsen and Pernice is a good resource in this regard), it is clear that costs can be high for equipment, set-up, and participant incentives, and the time required to analyze the data is greater than for most other usability methodologies. Additionally, this methodology is prone to errors: participants have to keep their heads still, and the equipment can have a hard time calibrating to a participant’s gaze. Eyetracking is most valuable once enough standard usability testing has been done that a majority of issues have been uncovered and fixed; at that point, the eyetracking data can be a nice supplement for finding other usability concerns.

At my first job, we had a Tobii eye tracking machine, but it was only used for one or two projects during my time there. The system worked well, but clients did not feel the methodology was necessary to learn about the interface. Even at Vistaprint, we have discussed using eyetracking for various projects. The interesting part is that we can usually determine whether a participant sees something in the interface by designing tasks that focus on specific parts of the site. At some points we ask a participant if they saw a particular item, and when they say no, we tell them to look for it on the page (sometimes it’s right in front of them, or right where their mouse is) and they still don’t see it! With further prodding, we find that the biggest issues lie with relevancy and user goals. If the part of the interface we are testing isn’t compatible with a user’s goals, or isn’t relevant to what they are trying to do, they will not notice it or will ignore it completely. Sometimes the issue is that the user is so focused on their task that they don’t see anything else; the idea of selective attention applies to the web, but it is also demonstrated in this fun video involving some basketball players.

Eyetracking may help determine whether a user truly didn’t see these interface elements, or whether they gazed at them but didn’t fixate long enough for the elements to register. Before that, though, it’s important to determine whether the user’s goals and expectations align with the functions of the site, and to address those problems first. For the final week of Usability II, I’ll write a brief review of “Selling Usability: User Experience Infiltration Tactics” in terms of getting people to buy in to user experience and research.

Reflection #5 – More on mobile and gestures

Last week I talked a lot about how to conduct a usability study on mobile devices, since the assignment I was working on was to create a plan for how a lab could be set up to run one of these tests. Well, the assignment is done, and I think it’s one of the better things I have produced in terms of practicality and usefulness to the average person looking to do a study like this. If you’d like to check out the “Mobile User Experience Research Project Plan: Details to set up a successful mobile usability test” document, then click the link (warning: it’s a PDF).

The rest of this week was focused on a discussion of gestures, one of those areas that brings out differences in opinion between technology types, designers, and those working on the user experience. One thing to consider is whether people know what these gestures are; I’d posit that we are at a place in the evolution of mobile device usage where people are exposed to these gestures constantly and are mostly familiar with the core gestures needed to use their tablets and phones. This includes swiping and tapping, as well as the pinch/spread move typically associated with zooming in or out. Once we establish that people know what these gestures are (again, I think we can assume this for the base gestures), we can see whether they use them on whatever we are testing and what they expect them to do. This is where observation is key for both developers, who are implementing these gesture actions, and designers, who need to artfully create something that suggests interactivity with the gesture in mind.

The issue is that the technology is ever-changing and new gestures are being developed all the time. How do you introduce a new gesture into the market that will gain acceptance and widespread use? Can you accurately teach users to modify their behavior to perform a gesture for a given action, and do you need to provide alternative ways of performing that action? The challenge grows when you consider new input methods and the gestures associated with them, like the interfaces in Minority Report, the Myo armband, and the Leap Motion controller. For now, though, these newer input methods suffer from a fatal human factors flaw: I don’t know many people who can swing their arms around for hours on end without getting tired.

Reflection #4 – Mobile Research Case Study

As technology advances per Moore’s Law, the number of devices people use will grow, introducing more variability in screen resolution/size, processing power, and more. The problem is that as mobile and tablet users increase, they will expect to do more on these (sometimes) underpowered devices, including accessing your website. I have been lucky enough to help spearhead mobile research at Vistaprint as they set out on a project to redesign their mobile experience. This post will cover everything from requirements building to conducting actual testing of an in-production site in our lab as we worked on this project.

Read more

Reflection #2 – Moderated vs. Unmoderated

For the first three weeks of this course, we are focusing on remote usability. This week, the objective was to learn more about the differences between moderated and unmoderated research tools and techniques. Again, Nate Bolt’s “Remote Research” is a particularly helpful resource in listing out the methodologies and services you could use to conduct both types of research. To learn more about one unmoderated research tool, we were tasked with conducting a Loop11 study on a website of our choice.

Read more

Reflection #1 – Remote vs. In-Person Usability

[Note: After 7 weeks of Usability I at Kent State, it’s time for Usability II. I’m starting the reflection count over again. Hope you enjoy!]

For the first week of Usability II, we learned all about the pros and cons of remote usability testing vs. in-person sessions. Nate Bolt’s “Remote Research” gives a great overview of the practice, as well as the details needed to effectively run a remote study. I have quite a bit of experience determining what methodology to use, and advocating for whichever would be best for a given study; from my very first research experience in college to my current job, I have been convincing stakeholders to use one or both methods for almost 5 years.

Read more