Fwd: Statistical analysis in interaction design

11 May 2005 - 5:33pm
3 replies
524 reads
Steve Baty

Just to assist in moving the dialogue a little, my own recent work used a
combination of the following techniques:

- Simple cross-tabulation of recorded data to aid in 'understanding' the
data set;
- Discriminant analysis of the data set to sort the meaningful variables
from the non-meaningful ones;
- K-means clustering of the independent variables (three in total) to
create 'groups' of data records (see the sketch after this list);
- Multivariate analysis of variance (MANOVA) on the independent variables
for two sub-groups (by location of recording) to test for significant
differences between the data sets;
- ANOVA tests on single variables to identify which specific variables were
responsible for any significant differences.
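
For anyone curious what the clustering step looks like in practice, here is
a minimal sketch in Python with pandas and scikit-learn. The file name and
column names are placeholders, not the actual fields from our data set:

    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical recorded data; var_a, var_b and var_c stand in for the
    # three significant variables kept after the discriminant analysis.
    df = pd.read_csv("recorded_data.csv")
    features = df[["var_a", "var_b", "var_c"]]

    # Standardise first, so no single variable dominates the distance metric.
    scaler = StandardScaler()
    scaled = scaler.fit_transform(features)

    # Three clusters of data records, matching the analysis described above.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
    df["cluster"] = kmeans.fit_predict(scaled)

    # Share of cases falling into each cluster.
    print(df["cluster"].value_counts(normalize=True))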

The results were used in a variety of ways. First, they allowed a
complicated set of data to be trimmed down to just three significant
variables, making the overall analysis much easier to carry out and
interpret. Second, the clusters provided a set of default values for screen
options in several parts of the application/site being designed. The cluster
analysis showed that nearly 85% of cases could be accounted for by three
simple combinations of the significant variables, so we are looking at
building these in as preset options, with customised controls for the other
cases.
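
Continuing the sketch above, the cluster centroids, read back in the
original units, are what we would turn into preset defaults. The coverage
check is an illustration only; the tolerance is an assumption, standing in
for whatever rule one uses to decide a case is 'accounted for':

    import numpy as np

    # Each centroid, expressed in the original units, becomes one candidate
    # set of preset default values (column names are placeholders again).
    presets = pd.DataFrame(
        scaler.inverse_transform(kmeans.cluster_centers_),
        columns=["var_a", "var_b", "var_c"],
    )
    print(presets)

    # Proportion of cases sitting within a tolerance of their centroid.
    tol = 1.0  # distance in standardised units; an assumed cut-off
    dist = np.linalg.norm(
        scaled - kmeans.cluster_centers_[df["cluster"].to_numpy()], axis=1
    )
    print(f"{(dist <= tol).mean():.0%} of cases covered by the three presets")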

The analysis of variance showed us that, although behaviour was different at
the two locations, the variance could be explained by a variable outside the
direct influence of the system under design. This allowed us to extrapolate
the data set to a national audience (we sampled 2 of 148 locations) with
some level of confidence.
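
The single-variable ANOVA step is easy to reproduce; with only two
locations it reduces to a standard F-test. A sketch with SciPy, again
using placeholder column names:

    from scipy.stats import f_oneway

    # One behavioural variable, split by recording location.
    loc_a = df.loc[df["location"] == "A", "var_a"]
    loc_b = df.loc[df["location"] == "B", "var_a"]

    f_stat, p_value = f_oneway(loc_a, loc_b)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    # A significant result says the locations differ on this variable;
    # whether an outside variable explains the difference is a follow-up
    # question (e.g. by adding it as a covariate in an ANCOVA).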

The cross-tabulation results, whilst not as statistically 'strong' as the
other techniques, have proven to be much more 'digestible' to the senior
management team of our client. So in that respect alone the work was worth
carrying out.
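
The cross-tabulation itself is a one-liner in pandas; the columns crossed
here are placeholders for whatever pairs of recorded variables are of
interest:

    # Counts of cases by location and cluster, with row/column totals -
    # the sort of table a management audience digests readily.
    table = pd.crosstab(df["location"], df["cluster"], margins=True)
    print(table)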

Regards,

Steve Baty
Senior Analyst, Red Square

---------- Forwarded message ----------
From: Doc <stevebaty at gmail.com>
Date: 10-May-2005 13:28
Subject: Statistical analysis in interaction design
To: discuss at ixdg.org

<snip>

A quick check around the group showed a range from zero statistical analysis
of data up to multivariate statistical techniques across 6+ variables.

The question I would like to raise in this forum is this: what are the most
appropriate statistical analysis techniques (versus the most convenient) to
use during the various stages of an IxD project, and how widespread is their
use?

</snip>

Comments

12 May 2005 - 3:59pm
Pradyot Rai

On 5/11/05, Doc <stevebaty at gmail.com> wrote:
> - Simple cross-tabulation of recorded data to aid in 'understanding' the
> data set;
> - Discriminant analysis of the data set to sort the meaningful variables
> from the non-meaningful ones;
> - K-means clustering of the independent variables (three in total) to
> create 'groups' of data records;
> - Multivariate analysis of variance (MANOVA) on the independent variables
> for two sub-groups (by location of recording) to test for significant
> differences between the data sets;
> - ANOVA tests on single variables to identify which specific variables
> were responsible for any significant differences.

You are in the wrong profession :o)
Designers (usability folks included) are statistically dumb. We work
more on the qualitative side of things ;-)

> The results were used in a variety of ways. First, they allowed a
> complicated set of data to be trimmed down to just three significant
> variables, making the overall analysis much easier to carry out and
> interpret.

I am a little clueless. How are you getting the data - surveys,
usability tests? I am missing the context...

> Second, the clusters provided a set of default values for screen
> options in several parts of the application/site being designed. The cluster
> analysis showed that nearly 85% of cases could be accounted for by three
> simple combinations of the significant variables, so we are looking at
> building these in as preset options, with customised controls for the other
> cases.

I would be surprised if anyone said they had used statistical methods
to arrive at which screen options to choose from. Probably because
design decisions are never so significant as to need backing from
conjoint, cluster, or any other statistical analysis. Also, because
many qualitative champions will tell you statistical methods are not
the best either: they completely ignore the creative part of the brain.

> The analysis of variance showed us that, although behaviour was different at
> the two locations, the variance could be explained by a variable outside the
> direct influence of the system under design. This allowed us to extrapolate
> the data set to a national audience (we sampled 2 of 148 locations) with
> some level of confidence.
>
> The cross-tabulation results, whilst not as statistically 'strong' as the
> other techniques, have proven to much more 'digestible' by the senior
> management team of our client. So in that respect alone it was worth
> carrying out the work.

My belief is that you are coming from a strong marketing background.
All of this analysis is done on surveys, polls, and in various other
market research areas. This is seldom the case in interaction design.
This profession is driven by processes, artifacts, and creative
pursuits, where the margin of error for any decision doesn't matter.

Don't get me wrong.

Prady

13 May 2005 - 12:54am
KaushiktGhosh

> I would be surprised if anyone said they had used statistical methods
> to arrive at which screen options to choose from. Probably because
> design decisions are never so significant as to need backing from
> conjoint, cluster, or any other statistical analysis. Also, because
> many qualitative champions will tell you statistical methods are not
> the best either: they completely ignore the creative part of the brain.

I would beg to differ here. There are, I think, two things. One is
having the all-important buy-in from the other (supposedly more
important) functions like business development, marketing, etc., which
becomes so essential for the research or design department to carry on
with their lives!

Two, most critically, any idea, insight, or study framework cannot
succeed unless you have picked the right thread. The initial
brainwaves are very likely to pick up some noise along with the core
idea or inspiration. A detailed analysis of the opportunity (with the
help of statistics) can lead you to a highly optimised method for
making the right kind of intervention in the problem space concerned.

When it comes to analysing the results of an intervention, or doing a
comparative evaluation that is essentially summative in nature,
statistical derivation either helps you bolster the arguments you
formed in the formative stage (maybe through qualitative methods), or
it might throw up a complete surprise, shattering your previous
assumptions.

So the idea is not to wheel in a statistical 'trojan horse' and
obscure the truly 'rich' qualitative feel with mind-bending jargon,
but to achieve a balance of 'qual' and 'quant' as complements to each
other - the approach that has become quite well known today as 'mixed
methods'.

thnx
kaushik


13 May 2005 - 11:30am
Lada Gorlenko

PR> My belief is that you are coming from a strong marketing background.
PR> All of this analysis is done on surveys, polls, and in various other
PR> market research areas. This is seldom the case in interaction design.
PR> This profession is driven by processes, artifacts, and creative
PR> pursuits, where the margin of error for any decision doesn't matter.

We also rarely have the luxury (or necessity?) of testing enough
subjects to apply any meaningful statistics. In the past three years,
I haven't tested more than 30 users in a single test group. Even when
the number of subjects was above 24 (the minimum recommended for
p < .05), statistical acrobatics seemed overkill. Neither the design
team nor the client needed it.
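
As a rough sanity check on where numbers like 24 come from, here is a
quick power calculation with statsmodels in Python; the effect size and
power target are illustrative assumptions, not a fixed standard:

    from statsmodels.stats.power import TTestIndPower

    # Subjects needed per group to detect a 'large' effect (d = 0.8)
    # at alpha = .05 with 80% power, for a two-sample comparison.
    n = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05, power=0.8)
    print(round(n))  # about 26 per group; smaller effects need far more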

This is not to say I don't see the value in statistical analysis; I
just don't find the need for it in my design practice very often.

Steve, what was the rationale for stat analysis among the stat-savvy
designers you've talked to? What questions do you address when
applying the stat techniques you mentioned?

Lada
