Formal design review process

23 Apr 2008 - 10:43am
4 replies
2911 reads
Elise Edson
2007

Hi folks,

My Human Factors team colleagues and I are working to formalize our
processes at our company, and one of the areas that is new for us is the
formal design review. Right now, our UX designs are reviewed as part of the
software requirements specification (SRS), but we've found that reviewing
them at this level elicits requirements rather than design feedback (big
surprise :-)).

How do you conduct design reviews at your company? Who generally signs off
on the designs? What are the inputs/outputs for the review (wireframes,
full interaction specification, etc.)? How about recommended
books/templates/resources on this topic?

Thanks!
Elise

Comments

23 Apr 2008 - 6:06pm
dszuc
2005

Hi Elise:

A few starters -

* Walking Through Your Product Design With Stakeholders - http://www.uxmatters.com/MT/archives/000199.php

* Cognitive Walkthrough and Heuristic Evaluation in the Contemporary
Design Process - http://www.apogeehk.com/articles/Cognitive_Walkthrough_and_Heuristic_Evaluation_in_the_Contemporary_Design_Process.html

rgds,
Dan
--
Daniel Szuc
Principal Usability Consultant
Apogee Usability Asia Ltd
www.apogeehk.com
Usability in Asia

The Usability Kit - www.theusabilitykit.com


23 Apr 2008 - 8:18pm
Chauncey Wilson
2007

Hello Elise,

There are a number of methods that go by various names: walkthroughs,
inspections, design reviews, and expert reviews. In addition to those
that Daniel listed, some approaches you might consider are described
below. They vary in their degree of formality, with the formal
usability inspection following many of the ground rules of software
inspections. There are some very good books on how to conduct
software inspections that I've found useful. Several of the
approaches below ask reviewers to adopt different perspectives, which
I've found quite powerful. You might, for example, ask one person to
adopt the perspective of a brand-new user, another to be the
consistency czar, and a third to be the work-flow-efficiency
inspector.

Chauncey

Participatory Heuristic Evaluation (Muller, Matheson, Page, & Gallup,
1995). This is a variation on the heuristic evaluation that includes
users as well as members of the product team. Muller and his
colleagues also added heuristics that dealt with task and work support
issues.

Cooperative Evaluation (Monk, Wright, Haber, & Davenport, 1993)
Monk and his colleagues (1993) published a procedural guide to a
technique they called "cooperative evaluation". Cooperative evaluation
involves pairing a user and designer in an evaluation of a working
version of a product. In the cooperative evaluation, users can freely
ask questions of the designer and the designer can ask questions of
the user.

Heuristic Walkthrough (Sears, 1997)
Sears (1997) developed a technique called a "heuristic walkthrough"
that had some of the attributes of three UCD methods: a heuristic
evaluation, a perspective-based inspection and a cognitive
walkthrough. In Sears' method, the evaluators were given a prioritized
list of user tasks, a set of heuristics, and "thought-provoking"
questions derived from the cognitive walkthrough method.

Persona-Based Inspections
In this type of inspection, the reviewers take on the perspective of
the key personas and work through a set of tasks. This approach may
yield different problems for different personas.

Perspective-Based Inspections
A perspective-based user interface inspection requires that one or
more individuals evaluate a product's user interface from different
perspectives. The use of multiple perspectives (similar to
role-playing) is meant to broaden the problem-finding ability of
evaluators (Virzi, 1997), especially those colleagues with little or
no background in usability or user interface design.
In perspective-based inspections, inspectors are generally given
descriptions of one or more perspectives that they are to focus on, a
list of user tasks, a set of questions related to the perspective, and
possibly a set of heuristics related to the perspective. Inspectors
are asked to work through tasks from the assigned perspectives.
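
If you want to make these assignments concrete, the per-inspector
"packet" (perspective, questions, shared task list) is easy to
capture as structured data. Here is a minimal sketch in Python; the
names and sample tasks are illustrative assumptions of mine, not part
of any published method:

# Minimal sketch of per-inspector "packets" for a perspective-based
# inspection. All names and sample data are illustrative assumptions,
# not part of any published method.
from dataclasses import dataclass

@dataclass
class Perspective:
    name: str                   # e.g., "brand-new user"
    focus_questions: list[str]  # perspective-specific prompts

@dataclass
class InspectionPacket:
    inspector: str
    perspective: Perspective
    tasks: list[str]            # prioritized user tasks, shared by all

tasks = ["Create an account", "Recover a forgotten password"]

packets = [
    InspectionPacket("Alice",
        Perspective("brand-new user",
                    ["Is it obvious what to do first?"]),
        tasks),
    InspectionPacket("Bob",
        Perspective("consistency czar",
                    ["Do similar actions behave the same everywhere?"]),
        tasks),
]

for p in packets:
    print(f"{p.inspector} inspects as '{p.perspective.name}' "
          f"across {len(p.tasks)} tasks")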

The Structured Heuristic Evaluation Method
Kurosu, Matsuura, and Sugizaki (1997) proposed a variation on the
heuristic evaluation called the structured heuristic evaluation method
(sHEM). The sHEM involves multiple evaluation sessions, each
focusing on one category of usability and a set of associated
heuristics that define that category. The categories of usability in
Kurosu's sHEM were:
1. Ease of cognition (part 1).
2. Ease of operation.
3. Ease of cognition (part 2).
4. Pleasantness.
5. Novice versus expert users.
6. Users with special care (this category dealt with very young and
elderly users; users who had visual, hearing, or physical
disabilities; left-handed users, and color-blind users).

Cognitive efficiency was the rationale for focusing on only one
category of usability during a session. Kurosu and his colleagues felt
that trying to keep many heuristics in mind while reviewing a product
was difficult for evaluators.

Cognitive Walkthrough
The cognitive walkthrough (CW) is a usability inspection technique
that focuses primarily on the ease of learning of a product. The
cognitive walkthrough is based on a theory that users often learn how
to use a product through a process of exploration, not through formal
training courses (Polson & Lewis, 1990). The cognitive walkthrough was
originally designed to evaluate "walk-up-and-use" interfaces (for
example, museum kiosks, postage machines, and ATMs), but has
been applied to more complex products (CAD systems, operating
procedures, software development tools) that support new and
infrequent users (Wharton, Bradford, Jeffries, & Franzke, 1992;
Novick, 1999). The cognitive walkthrough is based on the concept of a
hypothetical user and does not require any actual users.
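
In practice, the walkthrough poses a small fixed set of questions at
every step of a task (commonly summarized as Wharton et al.'s four
questions), and any "no" answer is recorded as a potential
learnability problem. A minimal sketch of that bookkeeping, with an
illustrative ATM task and made-up analyst answers:

# Minimal sketch of cognitive-walkthrough bookkeeping: for each step
# of a task, the analyst answers the method's standard questions and
# records any "no" as a potential learnability problem. The task and
# answers here are illustrative assumptions.
CW_QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the effect?",
    "If the correct action is performed, will the user see progress?",
]

steps = ["Insert card", "Enter PIN", "Choose 'Withdraw cash'"]
# Analyst's yes/no answers per step, in CW_QUESTIONS order (sample data).
answers = {
    "Insert card": [True, True, True, True],
    "Enter PIN": [True, True, True, True],
    "Choose 'Withdraw cash'": [True, False, True, True],
}

for step in steps:
    for question, ok in zip(CW_QUESTIONS, answers[step]):
        if not ok:
            print(f"Potential problem at '{step}': {question}")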

Streamlined Cognitive Walkthrough
Rick Spencer developed a simplified version of the cognitive
walkthrough that was more applicable to fast-paced development
environments.
Spencer, R. (2000). The streamlined cognitive walkthrough method,
working around social constraints encountered in a software
development company. Proceedings of ACM CHI 2000 Conference on Human
Factors in Computing Systems (pp. 353-359). New York: ACM Press.

Pluralistic Walkthrough
The pluralistic walkthrough is a group usability evaluation that
follows a predefined set of task scenarios. A facilitator presents the
participants with an image of the interface for each step in a task.
Participants are asked to decide what actions they would take for each
step to get to the next step in the task (independently without
discussion) and to write those actions down on walkthrough forms
containing the screen shots of the user interface. When everyone has
written down their actions for a specific step in a task, the
facilitator reveals the "correct answer" and invites the group to
discuss their answers.
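
If you want a rough measure of how much the group diverged at each
step, the tally is trivial to script. A minimal sketch follows; the
action strings and function name are illustrative assumptions, not
part of the published method:

# Minimal sketch: tallying participants' predicted actions against the
# intended ("correct") action for one walkthrough step. The data and
# function name are illustrative assumptions.
from collections import Counter

def step_agreement(predictions, correct_action):
    """Share of participants whose written-down action matched the
    intended one for this step."""
    return Counter(predictions)[correct_action] / len(predictions)

predictions = ["click Save", "click Save", "press Enter", "click Save"]
print(step_agreement(predictions, "click Save"))  # 0.75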

Collaborative Usability Inspection
Constantine and Lockwood (1999) describe a method called
"collaborative usability inspection" that melds the pluralistic
walkthrough and heuristic evaluation methods. The focus of the
collaborative usability inspections is on rapid identification of
usability defects. Like the pluralistic review, Constantine and
Lockwood ask the inspection team to put themselves in the role of user
by cultivating a "practiced naiveté" (p. 404).

Formal Usability Inspections
The formal usability inspection is a method that is derived from
formal software inspections (Kahn & Prail, 1994). Formal usability
inspections have a clearly defined process with trained inspectors,
explicit roles for members of the inspection team, and a set of
defined activities and explicit ground rules (Wiegers, 2002; Kahn &
Prail, 1994).

Individual Expert Review
An individual expert review can incorporate components of think-aloud
testing, heuristic evaluation, checklist reviews, perspective-based
inspections, and other evaluation methods. The key difference between
an individual expert review and other inspection methods is that here
a single individual is responsible for generating a list of problems,
and often solutions to those problems, without substantive help from
others.

Consistency Inspection
A consistency inspection is used to find different types of
inconsistencies in a product. The types of inconsistencies that are
the target for this type of inspection include (Nielsen, 1989):
• Visual inconsistencies (layout, color, and graphic design
differences where none would be expected).
• Interaction inconsistencies (different ways to do the same thing).
• Control inconsistencies (for example, different pages on the same
Web site use different calendar controls).
• Inconsistencies between the system model and the user's mental model
(Grudin, 1989).
• Error prevention inconsistencies (for example, in one case you
provide a cue on the required format for phone numbers while
elsewhere you have no cues and get an error message if you use the
wrong format).
• Terminology inconsistencies ("Login" and "Log in" are both used as
labels in different parts of the site or product and "sign-in" is used
in the Help for the product).
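
Terminology inconsistencies, at least, lend themselves to partial
automation. Below is a minimal sketch that flags variant spellings of
the same term across a set of UI label strings; the variant groups
and sample labels are illustrative assumptions:

# Minimal sketch: flagging variant spellings of the same term across
# UI labels. The variant groups and sample labels are illustrative
# assumptions.
import re

VARIANT_GROUPS = [
    {"login", "log in", "log-in", "sign in", "sign-in"},
]

labels = ["Login", "Log in to continue", "Sign-in help"]

for group in VARIANT_GROUPS:
    found = set()
    for label in labels:
        for variant in group:
            if re.search(r"\b" + re.escape(variant) + r"\b",
                         label, re.IGNORECASE):
                found.add(variant)
    if len(found) > 1:
        print("Inconsistent terminology:", sorted(found))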


25 Apr 2008 - 10:40am
Chauncey Wilson
2007

Hi,

Here are a few more references on formal inspections.

Gunn, C. (1995). An example of formal usability inspections in
practice at Hewlett-Packard Company. CHI '95 Proceedings, Interactive
Posters.

IEEE Std 1028-1997 (1999). IEEE Standard for Software Reviews. IEEE
Standards Software Engineering. New York: The Institute of Electrical
and Electronics Engineers, Inc.

Kahn, M. J., & Prail, A. (1994). Formal usability inspections. In
J. Nielsen & R. L. Mack (Eds.), Usability Inspection Methods (pp.
141-171). New York: John Wiley & Sons. This chapter has a reasonably
complete write-up on how to conduct formal usability inspections.

Freedman, D. P., & Weinberg, G. M. (1990). Handbook of Walkthroughs,
Inspections, and Technical Reviews: Evaluating Programs, Projects,
and Products (3rd ed.). New York: Dorset House Publishing.

On Thu, Apr 24, 2008 at 1:18 PM, Chauncey Wilson
<chauncey.wilson at gmail.com> wrote:
> Hi,
>
> You are welcome. I'm writing a book on UCD methods and have sections
> on each of the methods I listed. I have a chapter on Formal Usability
> Inspections that combines some work in UCD with software development
> inspections as well as a few other chapters. I would be happy to kick
> around ideas with you. One thing that comes up a lot in the
> literature is having a task-based approach where you have the product
> team prioritize scenarios of use and then have the reviewers walk
> through those and then examine other aspects of the system. Most of
> the methods call for people to do some independent work and then come
> together as a group.
>
> Thanks for the kind words.
> Chauncey
>
>
> On Thu, Apr 24, 2008 at 12:23 PM, Elise Edson <elise.edson at gmail.com> wrote:
> > Thank you so much! I never thought about doing heuristic evals as part of
> > the design review - let alone having participants "act out" the part of a
> > user type! Thanks for the great ideas and also for taking the time to list
> > these detailed references - I'm looking forward to reading about each one!
> >
> > Elise

24 Apr 2008 - 7:44pm
Suba Periyasami
2008

How do you conduct design reviews at your company? Who generally signs off
on the designs? What are the inputs/outputs for the review (wireframes,
full interaction specification, etc.)? How about recommended
books/templates/resources on this topic?

----------------------------------------------
Design reviews with the product development team are the most interesting
and challenging activity. We use low-fidelity prototypes (that demonstrate
a sequence of interactions) during design reviews. A short review is first
conducted with the design team or with a couple of co-designers to get
feedback on the designs. This also gives the designer a chance to check
whether he has adequate, convincing explanations for why the
designs/interactions are designed in a certain way. During the design
review with the product team, the designer leads the session, telling the
story: explaining what the product requirements are, the different concepts
he came up with, scenarios where some concepts wouldn't work, and what his
final concept is. The team might accept the design or propose changes, and
the designer should be prepared to explain why the changes would work best
or wouldn't work for the user in a given scenario. A final verbal agreement
is made between the product lead and the designer at the end of the
session. There might be a list of changes to be made to the design, or the
design might be accepted as final. The meeting notes, agreements, or
proposals are documented at the bottom of the wireframe so the team can
look back at any stage during the cycle and recall why such decisions were
made.
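
One lightweight way to keep such notes consistent is a small
decision-log record per wireframe. A minimal sketch follows; the
field names and sample data are illustrative assumptions of mine, not
a standard format:

# Minimal sketch: a decision log kept with each wireframe so the team
# can later recall why a decision was made. Field names and sample
# data are illustrative assumptions, not a standard format.
from dataclasses import dataclass
from datetime import date

@dataclass
class ReviewDecision:
    wireframe: str        # which wireframe the note belongs to
    decided_on: date
    decision: str         # what was agreed
    rationale: str        # why, for anyone looking back later
    agreed_by: list[str]  # e.g., product lead and designer

log = [
    ReviewDecision("checkout-step-2", date(2008, 4, 24),
                   "Keep the single-page address form",
                   "The wizard variant failed the guest-checkout scenario",
                   ["product lead", "designer"]),
]

for d in log:
    print(f"[{d.wireframe}] {d.decided_on}: {d.decision} -- {d.rationale}")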

-Suba Periyasami
