This week our assignment was to create a screener script with qualification questions, and a moderator’s guide with tasks and scenarios. It was perfect timing for me, because I was working on the same sort of thing for a project at work. The interesting part of this assignment was that we worked in groups to source our questions and tasks.
This isn’t that different from my current work stream: I meet with stakeholders to understand their project, and then create a document that outlines customer/business goals, research questions, and proposed study details. Based on what I learn, I create a screener that will recruit the right type of user, and then seek feedback from stakeholders to make sure the questions match their expectations. Of course there is discussion, and sometimes things change, but I use my knowledge of the recruitment process to explain why things are asked a certain way. A similar process repeats for the moderator’s guide. Still, this assignment was a great opportunity to stretch my question- and task-writing muscles, since I normally rely on templates: many screening questions don’t change from study to study, and the tasks in a moderator’s guide center on the same types of questions (though I wonder if this is typical of all in-house UX roles…).
One additional point I’ll make: most of the moderating I do now follows the “Listening Lab” methodology that Creative Good has pioneered. In short, the moderation approach is fairly undirected, so we can closely watch users interact with the site as they normally would. We don’t interrupt unless we have to, and once the user thinks they are done with a task, we walk through it again and ask any diagnostic questions that match the business’s research goals. This approach doesn’t work for every type of study, but if the site is mostly functional, it’s a better way to see what users would actually do than prodding them with many tasks or scenarios.
If you’re interested in seeing my work for this assignment, take a look here (PDF).