Fwd: [IxDA Discuss] Online, unmoderated user testing tool
Daphne Ogle
daphne at media.berkeley.edu
Wed Apr 8 20:30:32 UTC 2009
A new tool available and some interesting thoughts from a seasoned
testing and UX guru.
Enjoy!
-Daphne
Begin forwarded message:
> From: Jared Spool <jspool at uie.com>
> Date: April 8, 2009 4:39:57 AM PDT
> To: Toby Biddle <toby at UsabilityOne.com>
> Cc: discuss at ixda.org
> Subject: Re: [IxDA Discuss] Online, unmoderated user testing tool
>
>
> On Apr 7, 2009, at 11:04 PM, Toby Biddle wrote:
>
>> A new online, unmoderated user testing tool has recently launched -
>> www.loop11.com. Has anyone used it? Any thoughts?
>
> We haven't used this tool in particular, but, from their site, it
> looks similar to a slew of other tools on the market.
>
> These tools are limited in value because of four key factors:
>
> 1) The pool of invited participants is critically important. In
> Loop11, it seems you have to invite your own pool, which means you
> have to use standard recruitment techniques to source, schedule, and
> incent participants in the study. This will probably triple (or
> more) the costs. (Many unmoderated tools offer their own pre-
> recruited pools, which keeps costs down, but those pools often
> contain low-quality participants, such as people who only
> participate to get the incentive and don't really use the design.)
>
> 2) You are limited in the tasks your participants can perform. For
> the software to work, the tool has to know when a task is completed.
> For example, when evaluating a travel site, you have to know what
> page the user will end up on. If the confirmation page for a trip
> booking is computer generated, this might not be possible. Even if
> it is, can the system tell if all the values were properly entered?
>
> 3) We know from our research at UIE that participants who are
> actually interested in the task (for example, currently planning a
> vacation in Paris) will behave substantially differently than those
> who are asked to pretend to do a task. They take more time, are more
> discriminating about the results, are more likely to be frustrated when
> key information is missing, and are more likely to be delighted when
> the design meets their needs. Yet, these systems usually require
> that every user take the same path through the system, which means
> recruiting people with identical interests (every participant has to
> be actively planning their vacation to Paris and desiring the same
> dates & hotel requirements).
>
> 4) These tools report standard analytic measures: time on task, "fail
> pages", common navigation paths. But it's extremely difficult to
> come to the correct inference based on these measures. For example,
> does longer time-on-task or time-on-page imply frustration or
> interest? Does a deviation from the common navigation path imply
> clicking on the wrong element or curious exploration of additional
> features? Without talking to the individual, it's hard to even know
> if a reported measure is good or bad, let alone the action the team
> should take based on the reported result.
>
> In the ten years since I first started seeing these tools on the
> market, I've never seen results from a study that the team could
> actually interpret and act on. In one study a few years back with a
> major electronics retailer, we conducted an in-lab study with 18
> highly-qualified participants that was comparable to a 60-participant
> study on Netraker (a Loop11 competitor from the past). The task
> was to find the laptop computer of your dreams and put it in the cart.
>
> In our study, all 18 participants were in the market to buy laptops,
> had spent at least a week thinking about the laptop they wanted and
> its requirements, and were given the cash to make the purchase (they
> would keep the laptop after the study). In the Netraker study, the
> 60 participants were randomly selected from a panel of thousands,
> reportedly fell within the site's demographic groups (unverifiable),
> and hadn't thought about laptop purchases until the instructions for
> the test popped up.
>
> In the Netraker results, 94% of the participants completed the task
> and the average time was 1m 18s. In our study, only 33% of the
> participants completed the task and the average time was 18 minutes.
>
> Why do you think there were such striking differences? Which study
> would you pay more attention to?
>
> Beware of VooDoo measurement techniques.
>
> Hope that helps,
>
> Jared
>
> Jared M. Spool
> User Interface Engineering
> 510 Turnpike St., Suite 102, North Andover, MA 01845
> e: jspool at uie.com p: +1 978 327 5561
> http://uie.com Blog: http://uie.com/brainsparks Twitter: jmspool
> UIE Web App Summit, 4/19-4/22: http://webappsummit.com
Daphne Ogle
Senior Interaction Designer
University of California, Berkeley
Educational Technology Services
daphne at media.berkeley.edu
cell (510)847-0308