IA Summit 2012 Notes: Crowdsourced Remote Unmoderated Usability Testing

This is part of a series of notes from the Information Architecture Summit from 2012. All posts will be tagged ias12. This talk was presented by Inge De Bleecker (@ingedebleecker on Twitter). The slides are available on Slideshare.

  • Remote usability testing usually has a participant and a moderator in the same session with screen sharing and audio; the moderator can ask the participant to complete tasks and ask clarifying questions
  • Unmoderated--the participant and moderator do not share a session, so you need an online means of providing tasks to the participant
  • Crowdsourced--how we go about recruiting participants; recruiting is outsourced to an undefined, large group of people without any constraints
  • Examples
    • Screenshot click test: Usabilla, Userzoom, Usabilityhub
    • Screenshot timed test (participant gets to look for only ~5 seconds, then is asked questions): Userzoom, Usabilityhub
    • Task-based usability study with online survey (longer session, mirrors in-lab testing more closely): Usertesting.com, Loop11, Userzoom, DIY
  • Advantages of remote unmoderated testing
    • use of personal devices, get a nice breadth of devices
    • in own environment
    • fast turnaround time
    • cheap(er)
  • Disadvantages of remote unmoderated testing
    • no additional questions
    • can't observe participant (but you may be able to get them to do screen capture and audio)
  • Process same as most usability tests: recruit, task plan, test, analyze, report
  • Need committed participants for higher quality, better completion rate, longer sessions
    • Compensation: if you pay peanuts you'll get monkeys
      • For one of her clients the sweet spot is $35, but may have to pay more for very specific profiles
    • It can help if the participant has loyalty to something tied to the test, e.g. to the crowdsourcing company that recruits them, because the company rates participants and they want to keep their rating up (usertesting.com)
  • Tips for writing a task plan--unmoderated remote testing is high risk; you have to make sure the test will go perfectly or you will lose people and completions even if they are highly committed to completing the tasks
    • Participants can't get blocked while completing tasks or answering questions
    • You only get one shot; you can't observe, so you can't help participants work around troubles--but you can babysit the results to see whether the first few people get through the tasks and questions, and fix the test if they can't
    • Tasks need to guide without influencing behavior
    • Make all questions required so participants don't get lazy (but make sure there's an answer available for people who really don't know what to do)
    • Encourage people throughout the survey to think aloud or write down their thoughts; generally people are pretty good about doing it
    • Task and question types depend on tool--DIY is more labor intensive, but there are very few constraints relative to commercial tools
  • DIY remote testing: put questions in Survey Monkey (or Google Docs); the participant must open the site in another tab and go back and forth
  • Results are all self-reported data, so you have to think about how you interpret them--however, people still explain their thinking, so you can still get a lot of information
  • Can be used for tests in a language the moderator does not speak, with a translator
  • At least 10, often 15-20, participants per profile (up from the typical 8 for a lab test)

Key take-home points for me:

I just recently read Nate Bolt's Remote Research (highly recommended, btw) and I have been wanting to put some of it into use. What was most interesting to me about this talk was that she goes one step further: yes, you can have unmoderated tests that mirror moderated ones, with the questions in some external tool like Survey Monkey. That loses even more of the ability to see what the participant is thinking, but it might be worth trying while I experiment with other remote research techniques.