Tag Archive for: usability

Reflection: Week #7 – Test Plans & Wrap-Up

This is my final post in the series of weekly reflections for my Interaction Design course.

For this last week in IxD at Kent State, we were asked to come up with a research plan for testing the prototype that we had created. Luckily for me, this is in my wheelhouse and I was excited to create something that would work for this project. To be honest, I am surprised I hadn’t written anything about this for my Usability I or Usability II courses, but now I get to share one of my favorite things to use for writing up a research plan.


Reflection #7 – Selling Usability

Note: This is the last post I will be writing for Usability II at KSU. I’m not sure if there will be any blogging for future classes, but this experience has certainly got me in a writing mood…

One of the biggest problems the UX industry faces is buy-in. You'd think that after over 25 years of widespread use (and I know the idea of UX has been around for longer, but its proliferation really didn't begin until workplace computing took off) the benefits and advantages of thinking about your customers would be apparent by now. But depending on where you are working and who you are working with, you could face resistance, ignorance, or downright hostility to a simple idea. And so just having passion for my work is not enough, because that passion can come off as arrogance. That's why a finer touch is required, and it's something that I have struggled with for a while even though I have worked in places that "embraced the customer." You will always have to convince someone, you will always need to evangelize, and you must always consider how you approach things.

The first thing to consider is that you are talking to people who do not necessarily have any knowledge of UX… but they know business and money. Now I don't have a business background, but I know that the work I have done has had an incremental impact on revenue, customer retention, and long-term value. Presenting these facts, these outputs, can really have an effect on important people who are looking at the bottom line. They don't want to know what my process is (yet), they just want to know why it's good for the company and how it can be done with little to no investment.

And sometimes it's more of a political game. Being blinded by passion and the need to debate my position doesn't help my cause when I'm grappling with egos and diplomatic struggles for budget and glory. So sometimes it's best to cut your losses and just listen. Seriously, stop what you normally do in that situation and listen to the person who disagrees until you completely understand what they are saying and, in due time, turn it back around to WOW them with the results you can bring. I'll leave you with this excerpt from the end of a chapter in "Selling Usability: User Experience Infiltration Tactics" by John S. Rhodes…

[Image: excerpt from "Selling Usability" by John S. Rhodes]

I love this field. I don’t know where it’s going to take me. But I do know I want to make an impact. We’ll see how things shake out.

 

Reflection #6 – Eye Tracking

As the end of the course approaches, we focus on eyetracking in usability testing. This is a methodology I've always thought would be interesting to try, for a couple of reasons:

  • When you watch enough usability sessions, it’s easy to wonder if a user is seeing what is on the page
  • The output of gaze plots and heat maps can be interesting to analyze and use for presentations

However, after reading about how to conduct a study like this ("Eyetracking Web Usability" by Nielsen and Pernice is a good resource in this regard), it is clear that equipment, set-up, and participant incentives can be costly, and the time needed to analyze the data is greater than for most other usability methodologies. Additionally, this methodology is prone to errors, since participants can't move their heads and the equipment can have a hard time calibrating to a participant's gaze. Eyetracking is more valuable once enough standard usability testing has been done that a majority of issues are uncovered and fixed; at that point, the eyetracking data can be a nice supplement to find other usability concerns.

At my first job, we had a Tobii eye tracking machine, but it was only used for one or two projects in my time there. The system worked well, but clients did not feel that the methodology was necessary to learn about the interface. Even at Vistaprint, we have discussed the idea of using eyetracking for various projects. The interesting part is that we can usually determine whether a participant sees something in the interface by designing tasks that focus on specific functionality or parts of the site. Sometimes we ask a participant if they saw a particular item, and when they say no we tell them to look for it on the page (sometimes it's right in front of them, or where their mouse is) and they still don't see it! With further prodding we find out that the biggest issues lie with relevancy and user goals. If the part of the interface we are testing isn't compatible with a user's goals, or it isn't relevant to what they are trying to do, they will not notice it or will ignore it completely. Sometimes the issue can be that the user is so focused on their task that they don't see anything else; the idea of selective attention applies to the web, and it is also demonstrated in this fun video involving some basketball players.
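How do analysts tell the difference between a participant who merely glanced at an element and one who actually fixated on it? One common technique is dispersion-threshold identification (I-DT), which groups raw gaze samples into fixations whenever the gaze stays within a small spatial window for some minimum duration. Here's a minimal sketch of that idea; the sample format, pixel threshold, and duration threshold are all hypothetical and not tied to any particular eyetracker:

```python
# Simplified dispersion-threshold (I-DT) fixation detection.
# Each sample is (timestamp_ms, x, y) from an eye tracker at a fixed rate.
def detect_fixations(samples, max_dispersion=25.0, min_duration_ms=100.0):
    """Return a list of (start_ms, end_ms, (cx, cy)) fixations."""
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while the gaze points stay tightly clustered.
        while j + 1 < len(samples):
            window = samples[i:j + 2]
            xs = [x for _, x, _ in window]
            ys = [y for _, _, y in window]
            # Dispersion = horizontal spread + vertical spread.
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_duration_ms:
            window = samples[i:j + 1]
            cx = sum(x for _, x, _ in window) / len(window)
            cy = sum(y for _, _, y in window) / len(window)
            fixations.append((samples[i][0], samples[j][0], (cx, cy)))
            i = j + 1  # Start the next window after this fixation.
        else:
            i += 1  # Too short to count as a fixation; slide forward.
    return fixations

# Hypothetical gaze data: ~100 ms dwelling near (100, 100), then a jump away.
samples = [(0, 100, 100), (20, 102, 101), (40, 101, 99), (60, 103, 100),
           (80, 100, 102), (100, 101, 101), (120, 200, 200), (140, 201, 199)]
print(detect_fixations(samples))
```

With this toy data, the first six samples cluster into a single fixation while the last two are treated as a glance too brief to register.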

Eyetracking may help determine whether a user truly didn't see these interface elements, or whether they gazed at them but did not fixate long enough for the elements to register. But it's important to first determine whether the user's goals and expectations align with the functions of the site, and to address those problems before anything else. For the final week of Usability II, I'll write a brief review of "Selling Usability: User Experience Infiltration Tactics" in terms of getting people to buy in to user experience and research.

Reflection #5 – More on mobile and gestures

Last week I talked a lot about how to conduct a usability study on mobile devices, since the assignment I was working on was to create a plan for how a lab could be set up to do one of these tests. Well, the assignment is done and I think it’s one of the better things I have produced in terms of practicality and usefulness to the average person who is looking to do a study like this. If you’d like to check out the “Mobile User Experience Research Project Plan: Details to set up a successful mobile usability test” document, then click the link (warning: it’s a PDF).

The rest of this week was focused on a discussion of gestures, one of those areas that brings out differences in opinion between technology types, designers, and those working on the user experience. One thing to consider is whether people know what these gestures are; I'd posit that we are at a place in the evolution of mobile device usage where people are exposed to these gestures constantly and are mostly familiar with the core gestures needed to use their tablets and phones. This includes swiping and tapping, as well as the pinch/spread move typically associated with zooming in or out on devices. Once we find out whether people know these gestures (again, I think we can assume this for the base gestures), we can see if they use them on whatever we are testing and what they expect them to do. This is where observation is key for both developers, who are implementing these gesture actions, and designers, who need to artfully create something that suggests interactivity with the gesture in mind.

The issue is that the technology is ever-changing and new gestures are being developed all the time. How do you introduce a new gesture into the market that will gain acceptance and widespread use? Can you accurately teach users to modify their behavior to perform a gesture for a given action, and do you need to provide alternative ways of performing the action? The challenge grows further when you consider new input methods and the gestures associated with them, like the interfaces in Minority Report, the Myo armband, and the Leap Motion controller. But for now, these newer input methods suffer from a fatal human factors flaw: I don't know many people who can swing their arms around for hours on end without getting tired.

Reflection #4 – Mobile Research Case Study

As technology advances per Moore's Law, the number of devices that people use will grow, introducing more variability in screen resolution/size, processing power, and more. The problem is that as mobile and tablet users increase, these people will expect to do more on these (sometimes) underpowered devices, including accessing your website. I have been lucky enough to help spearhead mobile research at Vistaprint as they set out on a project to redesign their mobile experience. This post will cover everything from requirements building to conducting actual testing on an in-production site in our lab as we worked on this project.


Reflection #2 – Moderated vs. Unmoderated

For the first three weeks of this course, we are focusing on remote usability. This week, the objective was to learn more about the differences between moderated and unmoderated research tools and techniques. Again, Nate Bolt’s “Remote Research” is a particularly helpful resource in listing out the methodologies and services you could use to conduct both types of research. To learn more about one unmoderated research tool, we were tasked with conducting a Loop11 study on a website of our choice.


Reflection #1 – Remote vs. In-Person Usability

[Note: After 7 weeks of Usability I at Kent State, it’s time for Usability II. I’m starting the reflection count over again. Hope you enjoy!]

For the first week of Usability II, we learned all about the pros and cons of remote usability testing vs. in-person sessions. Nate Bolt’s Remote Research gives a great overview of the practice, as well as the details to effectively run a remote study. I have quite a bit of experience determining what methodology to use, and advocating for which would be best for a given study; from my very first experience doing research in college to my current job, I have been convincing stakeholders to use one or both methods for almost 5 years.


Reflection #6 – Analyze and Report

This, the final week of class, was all about analyzing data and reporting out findings to the team of stakeholders. To me, this is the most arduous and complicated part of the usability process. If I had the choice, I'd pull a Steve Krug and not write a report. Instead, I'd make sure all of the important business owners were involved in the research process from the beginning, watching all of the sessions to see the insights for themselves. Then, I'd walk these people through the site and mention the major findings and suggestions for improvements, followed up with an email highlighting these talking points. Alas, this does not work unless you've written a book.


Reflection #5 – Moderation

This week, I was given a script and told to moderate a usability study. This is my favorite part of the whole research process: actually getting to talk to a user and watch them use the product. It's the culmination of all of the prework and meetings, and because of that I make sure usability is a true event for me, my team, and the team I am working with. The assignment didn't require that I write a script, but I did need to moderate and record a session with an actual user investigating the Papa John's website and ordering experience.


Reflection #4 – Qual. and Quant. Metrics

This week's assignment was all about the difference between qualitative and quantitative usability metrics. When do you use one over the other? What works better? What about subjective vs. objective measurements? And what are the benefits and limitations of each of the different methods for a given scenario? In Jakob Nielsen's article on usability metrics, he mentions that qualitative measures provide more bang for the buck when it comes to the low-hanging usability fruit. But the investment in quantitative metrics can be beneficial for tracking progress across iterations.
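To make the "tracking progress across iterations" idea concrete, here's a minimal sketch of two of the most common quantitative usability metrics, task success rate and mean time-on-task, compared across two design iterations. The session data is entirely hypothetical, invented for illustration:

```python
# Hypothetical session results for one task across two design iterations.
# Each tuple: (completed_task, seconds_until_completion_or_abandonment)
iteration_1 = [(True, 48.0), (False, 120.0), (True, 62.5), (True, 55.0), (False, 95.0)]
iteration_2 = [(True, 40.0), (True, 58.0), (True, 44.5), (False, 110.0), (True, 39.0)]

def success_rate(sessions):
    """Fraction of participants who completed the task."""
    return sum(1 for ok, _ in sessions if ok) / len(sessions)

def mean_time_on_task(sessions):
    """Average completion time, counting successful attempts only."""
    times = [t for ok, t in sessions if ok]
    return sum(times) / len(times)

for label, data in [("v1", iteration_1), ("v2", iteration_2)]:
    print(f"{label}: success {success_rate(data):.0%}, "
          f"mean time {mean_time_on_task(data):.1f}s")
```

With these made-up numbers, success rises from 60% to 80% and mean time-on-task drops from about 55 seconds to about 45, which is exactly the kind of iteration-over-iteration comparison that quantitative metrics enable and a five-person qualitative session does not.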
