Question about usability testing
Mara Hancock
mara at media.berkeley.edu
Mon Jul 28 22:09:21 UTC 2008
Great points from John. I would also add two thoughts, the first in
answer to John's question of "usable by whom?" This is exactly one of
the values of creating personas for the design work: being able to say
that this tool or feature is meant to meet the needs of X
profile/persona. If we could institute that as a best practice, the
incentive might be that followers of the practice could issue a
general call (timed to their iteration cycle) for distributed
usability testing. Usability testing early on benefits all
implementers of Sakai and, in fact, gives institutions additional
"currency" to barter with in terms of gaining influence and
contributing to the community.
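To make that concrete, here is a rough sketch (in Java, since that is
Sakai's language) of the kind of record a project team might attach to
a feature. The Persona and FeatureSpec types are invented for
illustration; nothing like them exists in Sakai today.

import java.util.List;

// Illustrative only -- invented types, not a Sakai API. Each feature
// declares the persona it serves, so a general call for distributed
// usability testing can say exactly who should sit on the test panel.
record Persona(String name,            // e.g. "Teaching assistant"
               String institutionType, // e.g. "R1, large-enrollment courses"
               List<String> primaryTasks) {}

record FeatureSpec(String featureName, Persona targetPersona) {}

class PersonaExample {
    public static void main(String[] args) {
        Persona ta = new Persona("Large-course TA", "R1, 20k+ students",
                List.of("take attendance", "track participation"));
        FeatureSpec roster = new FeatureSpec("Roster for large enrollments", ta);
        System.out.println("Recruit testers matching: " + roster.targetPersona());
    }
}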
For example, if Indiana is working on a new Roster feature for
large-enrollment courses, meaningful generally to R1 institutions of a
certain size, I would definitely consider contributing resources
toward coordinating some local UI testing with faculty and TAs who are
heavy users of the tool; it would be well worth my team's effort.
What might this look like? I could see some sort of community site
that facilitates such partnering (remember the project matchmaking
tool, anybody?).
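At its core, such a site would just pair testing requests with testing
offers. A hypothetical sketch, again in Java: TestingRequest,
TestingOffer, and Matchmaker are invented names for illustration, not
an existing Sakai or Fluid tool.

import java.util.List;

// Hypothetical data model for a community matchmaking site.
record TestingRequest(String project,        // e.g. "Roster"
                      String targetProfile,  // e.g. "R1 faculty and TAs"
                      String iterationDeadline) {}

record TestingOffer(String institution,      // e.g. "UC Berkeley"
                    String usabilityLead,
                    List<String> panelProfiles) {}

class Matchmaker {
    // Pair a request with every offer whose panels cover its profile.
    static List<TestingOffer> match(TestingRequest req, List<TestingOffer> offers) {
        return offers.stream()
                     .filter(o -> o.panelProfiles().contains(req.targetProfile()))
                     .toList();
    }
}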
Mara
On Jul 28, 2008, at 2:23 PM, John Norman wrote:
> Thanks for bringing this up, Nathan, and apologies for answering a
> different question to the one you asked :-)
>
> Actually, I'd like to see Sakai developing a design process that DOES
> include usability testing. An interesting problem is usable by whom?
> An ideal scenario would involve institutions recruiting user test
> panels against specific criteria, e.g. teachers at an R1 institution,
> arts and humanities students at a Liberal Arts college, etc. This
> would help us to think about who we are developing for and invite us
> to describe which members of the user community a particular piece of
> functionality has as its target. We would probably want to identify a
> 'usability lead' at each institution and start to develop common
> understandings of which techniques are most useful when, etc. It is
> conceivable Fluid may be able to help, so I am cc'ing Jess Mitchell.
>
> We are at early stages of growing this capability at Cambridge. I
> would like to see it flourish here and elsewhere.
>
> FWIW I am beginning to form a view that iterative usability testing
> should represent about 10% of the development process. Probably the
> wrong number, but an indicator of its importance.
>
> John
>
> On 28 Jul 2008, at 19:02, Nathan Pearson wrote:
>
> > Over the last few days, I've been trying to research a topic that's
> > been eluding me. I'm wondering if anyone has any information they
> > would be willing to share?
> >
> > We all know that usability testing is part of the design process,
> > as opposed to a QA process, as is sometimes assumed. But given the
> > distributed nature of our development, the lack of design resources
> > on local projects, and a myriad of other issues, it's hard to
> > expect usability testing, and even formal design for that matter,
> > to take place on every project. So as a foundation member, I'm
> > trying to imagine a scenario where usability testing takes place as
> > part of QA.
> >
> > Now this approach obviously has its limitations... one being that
> > usability issues are only caught after the fact, and by then it
> > becomes a question of whether the testing is only useful for
> > information gathering or whether it should be part of the release
> > criteria.
> >
> > If it's the former, then there is often little incentive to make
> > changes to the software, since a usability problem isn't considered
> > an official "defect" with release-stopping power. Therefore, we end
> > up relying on the subjective value each developer places on
> > quality.
> >
> > If it's the latter, then how can the foundation, as a central
> > support player, scale the testing process? For example, developer A
> > checks his/her code in to the next build. The code is then reviewed
> > by QA and put through usability testing. QA often has an automation
> > process, so if the developer claims to make a fix, repeating the
> > test is scalable. Usability testing, on the other hand, requires
> > the recruitment of users, hours spent in the testing process, and
> > so on.
> >
> > If the result of the test continues to flag the code as sub-par,
> > preventing it from making it into the release, the developer in
> > theory can continue trying to make improvements until the code
> > adequately passes the usability standard. But as you've probably
> > already gathered, this can be an extremely inefficient model.
> >
> > Anyone have any experience with how to make usability testing a
> > scalable operation for release management?
> >
> > If not, what suggestions, if any, might you have for wrangling some
> > of these usability issues related to local projects that make their
> > way into the code base -- and what role might you see the foundation
> > playing in helping with this?
> >
> > Thanks,
> > Nathan
> >
> > --
> > Nathan Pearson | UX Lead | Sakai Foundation
> >
> > E. me at nathanpearson.com
> > M. 602.418.5092
> > Y. npearson99 (Yahoo)
> > S. npearson99 (Skype)
==================================
Mara Hancock
ETS Interim Director
http://ets.berkeley.edu
University of California, Berkeley
Educational Technology Services
9 Dwinelle Hall, #2535
Berkeley, CA 94720
Desk: 510-643-9923
Mobile: 510-407-0543