Reflection #6 – Eye Tracking

As the end of the course approaches, we focus on eyetracking usability. This is a methodology I’ve always found interesting and wanted to try, for a couple of reasons:

  • When you watch enough usability sessions, it’s easy to wonder whether a user is actually seeing what is on the page
  • The gaze plots and heat maps it produces can be interesting to analyze and useful in presentations

However, after reading about how to conduct a study like this (“Eyetracking Web Usability” by Nielsen and Pernice is a good resource in this regard), it is clear that the costs for equipment, set-up, and participant incentives can be high, and the time to analyze the data is greater than for most other usability methodologies. Additionally, this methodology is prone to errors, since participants can’t move their heads and the equipment can have a hard time calibrating to a participant’s gaze. Eyetracking is most valuable once enough standard usability testing has been done that the majority of issues have been uncovered and fixed; at that point, the eyetracking data can be a nice supplement for finding other usability concerns.
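To make the “analyze the data” part a little more concrete, here is a minimal sketch (my own illustration, not from the book or any vendor’s toolkit) of how exported fixation data could be binned into a simple heat-map grid. The Fixation shape, field names, and numbers are assumptions for the example:

```typescript
// Hypothetical sketch: weight grid cells by fixation duration to approximate a heat map.
// The Fixation shape below is an assumed export format, not a real vendor schema.
interface Fixation {
  x: number;          // horizontal position in pixels
  y: number;          // vertical position in pixels
  durationMs: number; // how long the fixation lasted
}

function buildHeatmap(
  fixations: Fixation[],
  screenWidth: number,
  screenHeight: number,
  cellSize = 50
): number[][] {
  const cols = Math.ceil(screenWidth / cellSize);
  const rows = Math.ceil(screenHeight / cellSize);
  const grid: number[][] = Array.from({ length: rows }, () => new Array(cols).fill(0));

  for (const f of fixations) {
    const col = Math.min(cols - 1, Math.floor(f.x / cellSize));
    const row = Math.min(rows - 1, Math.floor(f.y / cellSize));
    // Weight by total time spent looking at the cell, not just the fixation count.
    grid[row][col] += f.durationMs;
  }
  return grid;
}

// Example: three fixations on a 1280x800 page; the first two land in the same cell.
const grid = buildHeatmap(
  [
    { x: 200, y: 150, durationMs: 320 },
    { x: 210, y: 160, durationMs: 180 },
    { x: 900, y: 600, durationMs: 250 },
  ],
  1280,
  800
);
console.log(grid[3][4]); // 500 (the "hottest" cell so far)
```

Dedicated eyetracking software does this with much nicer smoothing and overlays, of course; the point is just that every heat map starts as a pile of fixation coordinates that someone has to collect, clean, and interpret, which is where the analysis time goes.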

At my first job, we had a Tobii eye tracking machine, but it was only used for one or two projects in my time there. The system worked well, but clients did not feel the methodology was necessary to learn about the interface. Even at Vistaprint, we have discussed the idea of using eyetracking for various projects. The interesting part is that we can usually determine whether a participant sees something in the interface by designing tasks that focus on specific functionality in parts of the site. Sometimes we ask a participant if they saw a particular item, and when they say no we tell them to look for it on the page (sometimes it’s right in front of them or right where their mouse is) and they still don’t see it! With further prodding we find that the biggest issues lie with relevance and user goals. If the part of the interface we are testing isn’t compatible with a user’s goals, or isn’t relevant to what they are trying to do, they will either not notice it or ignore it completely. Sometimes the issue is that the user is so focused on their task that they don’t see anything else; the idea of selective attention applies to the web, but it is also demonstrated in this fun video involving some basketball players.

Eyetracking may help determine whether a user actually didn’t see these interface elements on the site, or whether they gazed at them but did not fixate on them long enough for them to register. Before that, though, it’s important to determine whether the user’s goals and expectations align with the functions of the site, and to address those problems first. For the final week of Usability II, I’ll write a brief review of “Selling Usability: User Experience Infiltration Tactics” in terms of getting people to buy in to user experience and research.

Reflection #5 – More on mobile and gestures

Last week I talked a lot about how to conduct a usability study on mobile devices, since the assignment I was working on was to create a plan for how a lab could be set up for one of these tests. Well, the assignment is done, and I think it’s one of the better things I have produced in terms of practicality and usefulness to anyone looking to run a study like this. If you’d like to check out the “Mobile User Experience Research Project Plan: Details to set up a successful mobile usability test” document, click the link (warning: it’s a PDF).

The rest of this week was focused on a discussion of gestures, one of those topics that brings out differences of opinion among technologists, designers, and those working on the user experience. One thing to consider is whether people know what these gestures are; I’d posit that we are at a point in the evolution of mobile device usage where people are exposed to these gestures constantly and are mostly familiar with the core gestures needed to use their tablets and phones. This includes swiping and tapping, as well as the pinch/spread move typically associated with zooming in or out. Once we find out whether people know what these gestures are (again, I think we can assume this for the base gestures), we can see if they use them on whatever we are testing and what they expect them to do. This is where observation is key for both developers, who are implementing these gesture actions, and designers, who need to artfully create something that suggests interactivity with the gesture in mind.
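To ground what “implementing these gesture actions” can look like on the web side, here is a rough sketch of pinch/spread detection using standard browser touch events. The element id and the choice to map the gesture onto a CSS scale are assumptions for illustration, not how any particular site does it:

```typescript
// Hypothetical sketch: detect a two-finger pinch/spread and map it to zoom.
const el = document.getElementById("product-preview")!; // element id is made up

let startDistance = 0;

// Distance between the two active touch points, in pixels.
function distance(t1: Touch, t2: Touch): number {
  return Math.hypot(t1.clientX - t2.clientX, t1.clientY - t2.clientY);
}

el.addEventListener("touchstart", (e: TouchEvent) => {
  if (e.touches.length === 2) {
    startDistance = distance(e.touches[0], e.touches[1]);
  }
});

el.addEventListener(
  "touchmove",
  (e: TouchEvent) => {
    if (e.touches.length === 2 && startDistance > 0) {
      e.preventDefault(); // keep the browser from zooming the whole page instead
      const scale = distance(e.touches[0], e.touches[1]) / startDistance;
      // scale > 1 means the fingers spread apart (zoom in); scale < 1 means a pinch (zoom out).
      el.style.transform = `scale(${scale})`;
    }
  },
  { passive: false } // preventDefault only works on a non-passive listener
);

el.addEventListener("touchend", () => {
  startDistance = 0;
});
```

In a usability session, the mechanics above matter far less than whether participants even attempt the gesture and what they expect it to do, which is exactly what the observation is for.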

The issue is that the technology is ever-changing and new gestures are being developed all the time. How do you introduce a new gesture into the market that will gain acceptance and widespread use? Can you reliably teach users to modify their behavior to perform a gesture for a given action, and do you need to provide alternative ways of performing that action? The challenge grows when you consider new input methods and the gestures associated with them, like the interfaces in Minority Report, the Myo armband, and the Leap Motion controller. For now, though, these newer input methods suffer from a fatal human factors flaw: I don’t know many people who can swing their arms around for hours on end without getting tired.

Reflection #4 – Mobile Research Case Study

As technology advances per Moore’s Law, the number of devices that people use will grow, which will introduce more variability in screen resolution and size, processing power, and more. The problem is that as mobile and tablet users increase, these people will expect to do more on their (sometimes) underpowered devices, including accessing your website. I have been lucky enough to help spearhead mobile research at Vistaprint as they set out on a project to redesign their mobile experience. This post will cover everything from requirements gathering to conducting actual testing of an in-production site in our lab as we worked on this project.


Reflection #3 – More on Remote Unmoderated Testing

After a week away for spring break, it’s time again to reflect on this week’s assignment. Over the past three weeks I have launched, collected responses for, and analyzed a remote unmoderated user test through Loop11. The culmination of this was a final findings report and a presentation that included color commentary on the results gathered. Last week I talked a lot about the limitations of Loop11, and I made sure to include these details in a slide of my report, since some of the metrics (notably, time on task) were affected by the slowness of the platform. For context, the test I ran focused on DSW.com and three common tasks you might need to complete: finding a shoe to buy, learning about the return and exchange policy, and locating a physical store.
