Reflection: Week #2 – Sketching

In my weekly meeting with my manager at work, we chat about what’s going well, what I’m working on, what challenges I’m facing, and what I’m doing to improve things. One of the things I’m working on is a wiki post about our team’s recent offsite, the learnings from the meeting, and the incoming improvements to how we work. I explained that I wanted more feedback and edits from others before I posted it, because I was sure I was missing something. He told me I should go ahead and post anyway: the wiki is a place where edits are made constantly, and the information is changing all the time. He recognized that we were similar in that we both want to perfect what we’re working on before we post, and while that quality can serve us well at times, we need to learn to iterate and move forward.

This may sound like a tangential story, but it applies directly to how I feel about wireframes. As I started sketching different screens for the Lunch Money Buddy app, I realized I was spending too much time perfecting the details of individual screens and not enough time iterating and ideating on different structures and layouts. Determining the right fidelity when sketching can be difficult, especially when I’m trying to make things look just so… even if this is not the stage to be doing that. I tried to refocus on coming up with several iterations for each screen, rather than diving into the minutiae of one screen at a time.

I will say, having prefab mobile sketching templates has been a mixed blessing; while I don’t have to constantly draw a device, I think the formality of the medium has caused me to overthink the process of drawing up ideas! I would love to try creating copies of my common navigation/interaction elements to build a “toolbox” of items that I can place from screen to screen (not unlike the pattern libraries in most wireframing software). Nevertheless, I look forward to getting feedback from my team members and turning my sketches into something more digital and refined!

Until next week…

Reflection #5 – More on mobile and gestures

Last week I talked a lot about how to conduct a usability study on mobile devices, since the assignment I was working on was to create a plan for how a lab could be set up to run one of these tests. Well, the assignment is done, and I think it’s one of the better things I have produced in terms of practicality and usefulness to anyone looking to run a study like this. If you’d like to check out the “Mobile User Experience Research Project Plan: Details to set up a successful mobile usability test” document, click the link (warning: it’s a PDF).

The rest of this week was focused on a discussion of gestures, one of those areas that bring out differences of opinion among technology types, designers, and those working on the user experience. One thing to consider is whether people actually know what these gestures are; I’d posit that we are at a point in the evolution of mobile device usage where people are exposed to gestures constantly and are mostly familiar with the core set needed to use their tablets and phones. This includes swiping and tapping, as well as the pinch/spread move typically associated with zooming in or out. Once we establish whether people know a gesture (again, I think we can assume this for the base gestures), we can see whether they use it on whatever we are testing and what they expect it to do. This is where observation is key for both developers, who implement these gesture actions, and designers, who need to artfully create something that suggests interactivity with the gesture in mind.

The issue is that the technology is ever-changing and new gestures are being developed all the time. How do you introduce a new gesture into the market that will gain acceptance and widespread use? Can you reliably teach users to modify their behavior to perform a gesture for a given action, and do you need to provide alternative ways of performing that action? The challenge grows further when you consider new input methods and the gestures associated with them, like the interfaces in Minority Report, the Myo armband, and the Leap Motion controller. But for now, these newer input methods suffer from a fatal human factors flaw: I don’t know many people who can swing their arms around for hours on end without getting tired.

Reflection #4 – Mobile Research Case Study

As technology advances per Moore’s Law, the number of devices people use will grow, introducing more variability in screen resolution/size, processing power, and more. The problem is that as mobile and tablet usage increases, people will expect to do more on these (sometimes) underpowered devices, including accessing your website. I have been lucky enough to help spearhead mobile research at Vistaprint as the company set out on a project to redesign its mobile experience. This post will cover everything from requirements building to conducting actual testing on an in-production site in our lab as we worked on this project.
