Hypotheses about tasks

17 Aug 2006 - 4:16pm
22 replies
567 reads
Robert Hoekman, Jr.
2005

OK - so after conversations with several people, all of whom I've been
talking with separately, here are some hypotheses that seem to be taking
shape:

1) "The difficulty of completing a task is a function of the number of steps
involved in the task and the time it takes to perform each step."

2) "The time it takes to perform each step in a task is a function of the
relative complexity of the step and a person's willingness and/or ability to
understand and complete the step."

3) "The more difficult a task, the less likely it is to be completed."

I think this is worth exploring, and apparently several other people do as
well, so I wanted to start a new thread to keep it all going in one place
instead of managing several different threads.

The wording of these statements is the key source of debate so far. Mainly,
I'm not sure "difficulty" is the right word, but "cognitive load" is a
little too technical. I'm erring on the side of simplicity.

Thoughts?

-r-

Comments

17 Aug 2006 - 4:48pm
Peter Bagnall
2003

Robert,

I like the idea, but I think you're going to find this quite
difficult as you've got it phrased.

1) Difficulty is going to be quite hard to define precisely. To test
this hypothesis you're going to need a strict definition that you can
measure in some way. Perhaps the simplest would be to look at users'
perceptions of difficulty (say using a questionnaire), or success
rate as an indicator. I'd also tend to measure the effect of number
of steps and time per step separately - you'll learn more that way
than you will by conflating them.

2) Again, definitions are the hardest bit. Time is nice and simple,
so that's great, but how are you going to measure relative
complexity? Likewise willingness to solve the problem is also going
to be pretty hard to measure. The greater the error on measuring it
the larger the number of test subjects you'll need to get a
meaningful result.
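Pete's point about measurement error and subject counts can be illustrated with the standard two-group sample-size formula. The 80%-power / 5%-significance z-values are hard-coded, and the numbers are purely for illustration:

```python
# Rough illustration: the noisier your measurement, the more subjects
# you need. Standard two-group sample-size formula for detecting a
# mean difference `delta` when the measurement has sd `sigma`
# (z-values for 80% power at 5% significance are hard-coded).
import math

def subjects_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a 10-second difference in task time:
print(subjects_per_group(delta=10, sigma=10))  # -> 16 (low noise)
print(subjects_per_group(delta=10, sigma=30))  # -> 142 (high noise)
```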

3) It could be argued that this is a definition of difficulty (all
other things being equal).

So generally I would say you need to simplify your hypothesis - just
try to test the relationship between two variables at a time, one of
which you control. Clearly your users will vary in many ways, but you
get round this by having a number of people and using the stats to
get a sense of the variation - continue until you have high enough
confidence that your result is not just luck!

The other thing is definitions. A hypothesis is not much use unless
you can actually measure the things it relates to. Measuring things
like "difficulty" is hard, because difficulty is not a simple concept
- many things contribute to it, and something one person finds
difficult may be easy to someone else - so you need to be very clear
what you mean by it if it's going to be useful. You'll probably have
to go for something that is a component of difficulty otherwise
you'll have to invent some aggregate of the various factors which
will be very hard to justify.

As an example, I did an experiment on the effects of "complexity" of
software on "ease of use". I measured complexity as the number of
available functions in the system (I used Word in its default form
and with most of its functions removed). I measured "ease of use" by
assuming that people would complete faster with something that was
easier to use. That's not a perfect definition of course, but I was
very clear that's what I was measuring, so the result had some
meaning - and I found a weak effect, although I would have needed to
test with more people to be really confident of it. I think I had
about a 10% chance of my result being sheer fluke, which is
indicative, but hardly proof. Also I had no illusions that number of
functions is the only thing that contributes to complexity, but it
was one factor that I could measure.
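For what it's worth, a permutation test is one simple way to put a number on that "chance of sheer fluke" without any distributional assumptions: shuffle the condition labels many times and count how often a difference at least as big as the observed one turns up. The completion times below are invented, not Pete's actual data:

```python
# Permutation test sketch. Completion times (seconds) are invented.
import random

full_word = [95, 102, 88, 110, 99, 105]   # all functions available
stripped  = [81, 90, 86, 94, 79, 88]      # most functions removed

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(full_word) - mean(stripped)   # 13.5 seconds slower
pooled = full_word + stripped
random.seed(0)

extreme = 0
trials = 10_000
for _ in range(trials):
    # Reassign the 12 times to the two conditions at random.
    random.shuffle(pooled)
    if mean(pooled[:6]) - mean(pooled[6:]) >= observed:
        extreme += 1

p = extreme / trials   # roughly, the chance the effect is a fluke
print(round(p, 3))
```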

At the end of the day you have to be clear about what you measure,
and then what inferences you draw. If you measure number of functions
as your way of getting a handle on complexity you can't then
generalise it to other types of complexity (say, menu depth).

Hope that's helpful - I'll be very interested to see your results
from this. Hoekman's law here we come ;-)

Cheers
--Pete

--------------------------------------------------
A society grows great when old men plant trees whose shade
they know they shall never sit in.
- Greek proverb

Peter Bagnall - http://people.surfaceeffect.com/pete/

17 Aug 2006 - 6:26pm
Robert Hoekman, Jr.
2005

Great points - thanks. Honestly, I'm not sure I can measure these statements
in that way unless I do what you suggested and put a very finite definition
on "difficulty", which would be quite difficult to do in itself.

Again, the original observation is that the further away a user gets from
the starting point of a task, the more difficult it is to complete the task.
I've seen it many times - I'm sure many other people have as well. Perhaps
it's as simple as a cause-and-effect type thing. Something like "The more
difficult it is to perform each step in a task, the more difficult it is to
complete the task."

"Difficulty" is not a measurable term using traditional definitions, but
that was the observation. The very heart of it is this:

People are more likely to abandon a task that involves more complicated or
time-consuming steps than are presumed reasonable.

I'm looking for the right way to word this, because the right words will
tell me (and anyone listening) how to test it. I'm usually quite the
wordsmith, but this one is tricky.

Any suggestions?

-r-

17 Aug 2006 - 8:18pm
Robert Hoekman, Jr.
2005

You know, the more I think about this, the less issue I find with the
original statement, which was:

"The time it takes to complete a task is a function of the number of
steps involved in the task and the relative complexity of each step."

Time is measurable, and so is complexity (the number of sub-operations
required by each step). And although the original observation was
about the frustration level of users as steps got more cumbersome
while trying to complete a task, this statement doesn't need to
reflect that. It only needs to state a fact that can be used to guide
our decisions.

One person, offlist, argued that the time it takes to complete a task
also has to do with how quickly someone can perform each one. But
that's true of Fitts' Law as well. It doesn't account for how quickly
a particular person moves a mouse across the screen to hit the target,
only that the time it takes is a function of the size and distance of
the target. What's great about Fitts' Law is what we infer from it,
which is that targets that are big and/or close can be hit more
quickly (and probably with less mental work).
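For reference, here's Fitts' Law in its common Shannon formulation, with invented constants. The point is exactly Robert's: a and b vary per person and per device, but the form of the relationship holds regardless of who is moving the mouse.

```python
# Fitts' Law (Shannon formulation): movement time grows with the log
# of distance over target width. The constants a and b below are
# invented; in practice they are fitted per person and device.
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    return a + b * math.log2(distance / width + 1)

# Big, close targets are faster to hit than small, far ones:
print(fitts_time(distance=100, width=50))   # close and big
print(fitts_time(distance=800, width=10))   # far and small -> slower
```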

This hypothesis works the same way. Even if an individual memorizes
the sequence of operations and can perform them faster than any person
on the planet, the time it takes is *still* a function of the number
of steps and the complexity of each one. And what we can infer from it
is that shorter, less complex steps add up to a task that takes less
time to complete. This can be applied in a myriad of practical ways,
such as using it as a guide for how to design task flows in apps once
you've decided which tasks are the most important.

Arguments?

(You know, even if this doesn't amount to anything in the end, it's a
great discussion. That said, I think this could really lead to
something concrete.)

-r-

18 Aug 2006 - 4:56am
Juan Lanus
2005

I like Pete's idea about measuring "complexity". With a twist.

IMO users do not have trouble "doing" steps, like, for example, entering data.
Obviously the bigger the data, the longer the time. But there is no
hassle with data entry.

The problem is when they have to make decisions. For example, it's
easier to order at a McDonald's than at a sophisticated French
restaurant. McDonald's is "more usable" because there are fewer options, a
less stressful menu. Fewer factors to consider.

The so-called "wizards" make difficult tasks simpler by dosing the
decisions, one at a time. Instead of showing a form with many fields
involving many decisions.

- - digression - -
We are talking about two different "difficulty" numbers.
One is "a priori": an evaluation of how difficult a task will be when
performed.
The other is "ex post": how much difficulty the users found when they
performed the task.
The goal is to be able to predict the second one with a formula based
on task-definition information. This seems difficult because of the
number of independent variables involved, of which the variability of
user minds is out of control.
Hence it might happen that measures of task difficulty will be like
usability measures: always relative.
Although usability is defined as a measure, I doubt it will ever be
measurable with the methods of today, user testing. Maybe in the
future with an automated method ... this thread could be the seed,
because the "task difficulty" measure Robert presented us is IMO a
brick in the wall, the wall being the measure of the usability of the
whole system.
- - - -
Another point: not only the time but also the error rate should be
taken into account when measuring task difficulty. For example, the
time measure should be "time to perform the task correctly", which
means discarding wrong cases. But the proportion of wrong cases is
significant, so it shouldn't be discarded so easily.
--
Juan

18 Aug 2006 - 5:58am
McCarthy, Ann Marie
2006

Robert,

If you're debating words, have you considered positive language like

1) "Success in completing a task is a function of the number of steps
involved and the time it takes to perform each step."

And

3) "The simpler a task the more likely completed."

Ann Marie

18 Aug 2006 - 6:24am
Michael Albers
2005

>Time is measurable, and so is complexity (the number of sub-operations
>required by each step). And although the original observation was
>about the frustration level of users as steps got more cumbersome
>while trying to complete a task, this statement doesn't need to
>reflect that. It only needs to state a fact that can be used to guide
>our decisions.

But simply counting the number of sub-operations will not quite do
it. It depends on how difficult those actions are to perform
themselves. Many (probably most) are simply response actions which a
normal user does without really thinking about them. For example,
opening a particular program: click Start, find the folder, etc. But the
person knows them and does them more or less on auto.

But some of those sub-operations require decision making. It's the
decision points, how many of them exist, and how many options at each
one which determine the complexity of the task.

People can perform long strings of actions on auto with little
problem. Toss in a couple of decision points and things get hairy.
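Mike's distinction maps nicely onto the Hick-Hyman law, where decision time grows with the log of the number of options while rote "auto" actions cost roughly a constant each. A sketch with invented constants:

```python
# Hick-Hyman-style sketch: rote steps cost a constant, decision steps
# cost time proportional to the bits of choice. Constants are invented.
import math

AUTO_STEP_COST = 0.3   # seconds for a practiced response action
DECIDE_B = 0.6         # seconds per bit of choice

def step_cost(options):
    # One option means no real decision, just a response action.
    if options <= 1:
        return AUTO_STEP_COST
    return DECIDE_B * math.log2(options + 1)

auto_heavy = [1] * 10                # ten rote steps
decision_heavy = [1, 1, 8, 1, 12]    # five steps, two real decisions

# Fewer steps, but the two decision points make it costlier overall:
print(sum(step_cost(n) for n in auto_heavy))
print(sum(step_cost(n) for n in decision_heavy))
```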

A few posts ago, cognitive load was called too technical. But I
think that, in fact, cognitive load is what causes the problem and
what needs to be measured. Decision points can overload a person;
that is when they make errors and get confused.

Mike

-------------------------------
Dr. Michael J. Albers
Professional Writing Program
Department of English
University of Memphis
Memphis TN 38152

18 Aug 2006 - 9:08am
Barbara Ballard
2005

On 8/17/06, Robert Hoekman, Jr. <rhoekmanjr at gmail.com> wrote:
>
> 1) "The difficulty of completing a task is a function of the number of steps
> involved in the task and the time it takes to perform each step."

It may be a function, but only partially a function. And what
constitutes a "step"? On a scroll-and-select device, is "scroll to
the item in the list" a step? Or should the "and select it" be added?
This is why I liked creating a new notion of "distance", because
cognitive load does not increase noticeably with number of clicks, but
depends on type of click (or rather, decision making behind the
click).

--
Barbara Ballard
barbara at littlespringsdesign.com 1-785-550-3650

18 Aug 2006 - 11:04am
Robert Hoekman, Jr.
2005

I really believe that "relative complexity" encapsulates the difficulty of
making decisions. And the fact is, I don't think any law can state that
users become more "frustrated", because that's a personal thing and will
vary so much from person to person that it's impossible to state as fact.

In support of what you said, I agree that usability can't really be measured
accurately with usability testing alone. I'm not sure the whole thing could
ever really be automated, but this thread could certainly lead to another
step in forming measurable results that can be cited, used to back up
results, used to guide designs, etc. And that can't be a bad thing.

-r-

18 Aug 2006 - 11:09am
Robert Hoekman, Jr.
2005

Interesting point.

I'm not sure it matters that we define "step" (but please debate this). A
step could be as broad as "go to Google.com" or as low-level as "click the
browser icon to open the browser". Each step still has a level of
complexity, and as long as all tasks being analyzed are measured using the
same level of granularity, the results are valid.

Arguments?

-r-

18 Aug 2006 - 12:32pm
Peter Bagnall
2003

Wow, this thread is really getting meaty!

Robert:
> You know, the more I think about this, the less issue I find with the
> original statement, which was:
>
> "The time it takes to complete a task is a function of the number of
> steps involved in the task and the relative complexity of each step."

I'd tend to agree, but I wonder if it could be split into two
hypotheses. One testing the number of steps and the other testing the
complexity of each step. It's a little awkward to do but otherwise
you risk the two effects cancelling each other out and finding
nothing at all. I would assume more steps is bad, and greater
complexity is also bad. So the obvious test would be to come up with
a task and design it as a multi-step process where each step is
simple versus a single step process which is very complex. But as I
say, I think you may find the effects cancel out to some extent which
could hide interesting effects.

Another approach would be to take a very long task, and just present
a certain number of steps to various people. So some people get 5
steps, and some people get 10, and some get 15 for example. And then
see how their performance differs. To be fair you'd probably have to
mix up the order of the steps to make sure it wasn't the
differences between the steps that you were measuring, but simply the
length of the process. In this way you'd be holding complexity per
step constant while varying number of steps. Clearly you'd expect 15
steps to take longer (and have a higher failure rate) than 5, but the
interesting question is by how much? What's the pattern? Two
possibilities spring to mind. Either it is a log law - a certain
proportion give up per step - or it has no effect - the people who
manage to get past 5 steps will get past any number of steps!
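Pete's first possibility (a constant proportion giving up per step) implies that completion rate decays geometrically with step count. A sketch with an invented 5% per-step dropout:

```python
# If a constant fraction of users gives up at every step, completion
# rate decays geometrically with step count. The 5% figure is invented.
DROPOUT_PER_STEP = 0.05

def completion_rate(steps):
    return (1 - DROPOUT_PER_STEP) ** steps

for n in (5, 10, 15):
    print(n, round(completion_rate(n), 3))
# 5 steps -> 0.774, 10 -> 0.599, 15 -> 0.463
```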

You could do something analogous with the complexity of each step.

I notice a few people have been worrying about individual performance
differences. The way you deal with that is just use a larger number
of test subjects. Let's say you test 30 people (a good starting
number, although you may well need more). One experimental design
(called between subjects) would be to have 10 do the 5 step trial, 10
do the 10 step and 10 do the 15 step. Now, you look at the variation
within each group of 10 and if that is large compared to the variance
between the 3 groups you're in trouble and you need more subjects.
This is because if the variation between people in the same test is
high, then it will be high between groups just based on who is in
which group, and that could hide any effect you're looking for. I'm
hand waving here over the stats, but that's the basic concept.
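The within-versus-between comparison Pete describes informally is essentially what an F-ratio captures: variance among the group means relative to variance inside the groups. A rough sketch, with invented completion times for the 5-, 10- and 15-step conditions:

```python
# Between-group vs within-group variance, the core of Pete's check.
# Completion times (seconds) per condition are invented.
from statistics import mean, pvariance

groups = {
    5:  [40, 44, 38, 42, 41],
    10: [61, 58, 65, 60, 63],
    15: [85, 90, 79, 88, 84],
}

grand = mean(t for ts in groups.values() for t in ts)
between = mean((mean(ts) - grand) ** 2 for ts in groups.values())
within = mean(pvariance(ts) for ts in groups.values())

# If between-group spread dwarfs within-group spread, the step-count
# effect is not hiding behind individual differences.
print(between > within)  # -> True
```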

Another way of doing it (called within subjects) is to get all test
subjects to try 5, 10 and 15 step processes. But then you have to be
careful about what order they do them, because they may learn how to
do the trials as they go, so they'll probably perform better at the
last one than the first. There are 6 ways of arranging three trials,
so you'd have 5 people do each arrangement. Again, you'd have to look
for differences in performance for that and see if it was biasing
your results. The upside is that because everyone does all the tests
individual differences are cancelled out (well, somewhat at least).
The stats will be able to disentangle much of the results later and
tell you how much is due to learning between trials and how much is
due to the difference in the trials themselves - clearly the stats
get more accurate as you add more subjects.
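The six arrangements Pete mentions are just the permutations of the three trial lengths; with 30 subjects that works out to 5 people per ordering:

```python
# Counterbalancing a within-subjects design: every ordering of the
# three trial lengths, 5 subjects assigned to each.
from itertools import permutations

orders = list(permutations([5, 10, 15]))
print(len(orders))               # -> 6 ways of arranging three trials
assignment = {order: 5 for order in orders}
print(sum(assignment.values()))  # -> 30 subjects total
```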

Of course you have to be careful about your test subjects, if you
have a range of ability and you stick all the most able in one
category then you'll be in trouble. Typically you assign them
randomly and increase the number of subjects if the stats tell you
that you've got a bias, but there are more subtle ways of doing it.
(Pollsters for example make sure they cover as much of the
demographic range as possible and then adjust the weightings to match
the actual population mix - but this gets complex fast).

On the words for the hypothesis, I'd actually not worry too much about
the words for now. Instead I'd worry about how you're going to design
the experiment to tease out the thing you're looking for. You may
find the form of words falls out of that. Since your final definition
will be pretty precise it's probably not going to be a single word or
even a short phrase. It's fine to talk informally about "we're
looking at the effects of the number of steps" but when you get into
the detail you'll have to define what constitutes a step - and that
definition is actually part of the hypothesis. The definition may
actually be embedded in the design of the experiment you do. So, for
example, if you create a wizard for the experiment then each step
could be a page of the wizard.

Once you've done all the work it will only be applicable to that
definition of "step". (That's the down side!).

To Barbara's point, yes, I'd agree that if you find some function it
will only be a partial description, in that it won't include any
factors which you've held constant (or balanced between your
subjects). But that's ok, it's still useful, even if it's not
complete. In fact, if you try to get it complete you'll never get
anywhere, since it would mean testing a billion people, and that may
take a wee while. Age, education, IQ, dexterity could all play a
part, but you hope (and the stats will tell you if not) that you get
a reasonably balanced bunch of test subjects across your various
conditions.

At this point I should probably say if there are any psychologists
lurking out there I'd welcome any corrections to all this. I'm not a
psychologist myself, although I have done a small amount of this sort
of thing. I would imagine the average psychologist could probably
improve on what I've said!

Cheers
--Pete

----------------------------------------------------------
"The power of the executive to cast a man into prison without formulating
any charge known to the law, particularly to deny him the judgement of
his peers, is in the highest degree odious, and the foundation of all
totalitarian government whether Nazi or Communist."
-Winston Churchill, 1943

Peter Bagnall - http://people.surfaceeffect.com/pete/

18 Aug 2006 - 1:16pm
Barbara Ballard
2005

On 8/18/06, Robert Hoekman, Jr. <rhoekmanjr at gmail.com> wrote:
> Interesting point.
>
> I'm not sure it matters that we define "step" (but please debate this). A
> step could be as broad as "go to Google.com" or as low-level as "click the
> browser icon to open the browser". Each step still has a level of
> complexity, and as long as all tasks being analyzed are measured using the
> same level of granularity, the results are valid.
>

Using your model of difficulty as a function of steps, yes.

A step in a process, to my mind, includes a decision and some set of
actions to achieve that decision. Certainly each step has some
complexity, and difficulty is an unspecified function of number of
steps.

On the other hand, if step = click, which was my original proposal for
determining distance, then I would have to say that scrolling from KS
to KY on my way to NY in a list does not particularly affect
difficulty. Time, yes. Annoyance, yes. Complexity, no. Difficulty,
only in an uninteresting amount.

--
Barbara Ballard
barbara at littlespringsdesign.com 1-785-550-3650

18 Aug 2006 - 1:23pm
Dave Malouf
2005

I must admit that I'm coming into the middle of all this, so I may
have missed something.
So far I have read that a corollary is being developed that sounds
quite interesting: task success being some by-product of task
difficulty x number of tasks.

What I haven't noticed so far is the concept of increasing success by
increasing engagement.
Something can be REALLY difficult (think games), but the task is so
engaging that it is worth the user's time to proceed through any number
of difficult tasks in order to head towards the motivational goal.

Expanding on this a bit, and bringing in the concept of "scent" a bit
(ala the work from Jared Spool and the rest of UIE's team), a difficult
task that feels like it is leading in the right direction towards my
goal will be tolerated and thus have limited impedance against the
overall success of a flow of tasks towards a final goal.

Sorry if this is redundant, but to me the real thing to look at is not
just negativity, but also positivity. We can push and we can pull people
towards success.

-- dave

18 Aug 2006 - 1:26pm
Robert Hoekman, Jr.
2005

I stopped trying to measure difficulty a few posts back. It was too ...
well, difficult.

If we focus on time and complexity, we can get somewhere real, I think. Time
is easily measurable, and complexity, if given an agreeable definition, is
as well.

-r-

On 8/18/06, Barbara Ballard <barbara at littlespringsdesign.com> wrote:
>
> On 8/18/06, Robert Hoekman, Jr. <rhoekmanjr at gmail.com> wrote:
> > Interesting point.
> >
> > I'm not sure it matters that we define "step" (but please debate this).
> A
> > step could be as broad as "go to Google.com" or as low-level as "click
> the
> > browser icon to open the browser". Each step still has a level of
> > complexity, and as long as all tasks being analyzed are measured using
> the
> > same level of granularity, the results are valid.
> >
>
> Using your model of difficulty as a function of steps, Yes.
>
> A step in a process, to my mind, includes a decision and some set of
> actions to achieve that decision. Certainly each step has some
> complexity, and difficulty is an unspecified function of number of
> steps.
>
> On the other hand, if step = click, which was my original proposal for
> determining distance, then I would have to say that scrolling from KS
> to KY on my way to NY in a list does not particularly affect
> difficulty. Time, yes. Annoyance, yes. Complexity, no. Difficulty,
> only in an uninteresting amount.
>
> --
> Barbara Ballard
> barbara at littlespringsdesign.com 1-785-550-3650
>

18 Aug 2006 - 1:36pm
Robert Hoekman, Jr.
2005

The statement currently being debated is this:

"The time it takes to complete a task is a function of the number of
steps involved in the task and the relative complexity of each step."

I'm aiming to capture measurable variables, not necessarily the
success/failure or positive/negative.
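The statement above can be sketched as a toy model. This is purely illustrative: the additive assumption (total time is the sum of per-step times) and the notion of a "seconds per unit of complexity" constant are my assumptions for the sketch, not anything established in the thread.

```python
# Toy model of the hypothesis: task time as a function of the number
# of steps and the relative complexity of each step.
# ASSUMPTIONS (not from the thread): time is additive across steps,
# and each step's time is proportional to its relative complexity.

def task_time(step_complexities, seconds_per_unit_complexity=2.0):
    """Predicted task time, where complexity 1.0 means a 'typical' step.

    The 2.0 s/unit constant is a made-up illustrative value; in a real
    experiment it would be fitted from observed timings.
    """
    return sum(c * seconds_per_unit_complexity for c in step_complexities)

# A five-step task whose middle step is three times as complex:
print(task_time([1.0, 1.0, 3.0, 1.0, 1.0]))  # 14.0
```

Whether time really is additive, and whether complexity scales it linearly, is exactly what an experiment would have to test.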

-r-

On 8/18/06, Dave (Heller) Malouf <dave at ixda.org> wrote:
>
> I must admit, that I'm coming in, in the middle of all this and well I
> may have missed something.
> So far I have read that there is a corollary being developed that sounds
> quite interesting about task success being some by-product of difficulty
> of task x number of tasks.
>
> What I haven't noticed so far is the concept of increasing success by
> increasing engagement.
> Something can be REALLY difficult (think games), but the task is so
> engaging that it is worth the user's time to proceed through any number
> of difficult tasks in order to head towards the motivational goal.
>
> Expanding on this a bit, and bringing in the concept of "scent" a bit
> (ala the work from Jared Spool and the rest of UIE's team), a difficult
> task that feels like it is leading in the right direction towards my
> goal will be tolerated and thus have limited impedance against the
> overall success of a flow of tasks towards a final goal.
>
> Sorry if this is redundant, but to me the real thing to look at is not
> just negativity, but also positivity. We can push and we can pull people
> towards success.
>
> -- dave
>

18 Aug 2006 - 5:21pm
Robert Hoekman, Jr.
2005

> > "The time it takes to complete a task is a function of the number of
> > steps involved in the task and the relative complexity of each step."
>
> I'd tend to agree, but I wonder if it could be split into two
> hypotheses. One testing the number of steps and the other testing the
> complexity of each step. It's a little awkward to do but otherwise
> you risk the two effects cancelling each other out and finding
> nothing at all.

Interesting idea. Let's look at that:

If I said the time it takes to complete a task is a function of the number
of steps involved in the task, some of us would say "duh!" and others would
say I wasn't accounting for the user's mindset, decisions, etc. (Again,
Fitts didn't account for these things either, so I'm not sure this needs to
be addressed.)

The same thing would happen if I said the time it takes to complete a task
is a function of the relative complexity of each step in the task. Providing
we agree on the definition of "complexity", these are both almost
cause-and-effect type statements. (There were a lot of steps, so it took
longer to complete the task.)

Maybe there is merit in splitting them up, because they do seem easier to
test this way, and the two results, if both were proven true, could be
combined to make the single statement I originally made. *However*, the
original statement said "number of steps involved in the task AND the
relative complexity of each step". The "and" is important. Seems like
splitting this into two hypotheses would cancel out that connection.

Thoughts?

-r-

18 Aug 2006 - 5:45pm
Robert Hoekman, Jr.
2005

What about this statement? (Thinking out loud here ...)

"The time it takes to correctly complete a task is a function of the number
of steps involved in the task and the subject's ability to accurately
perform each step."

This implies that "ability" must be measured. What I mean is that the time
it takes to correctly complete each step affects the time it takes to
correctly complete the task. That's obvious. So "ability" is measured by
whether or not the subject does, in fact, complete each step correctly.

This does a better job of accounting for the subject's mental state,
decision-making process, etc, and removes the need to define "complexity".

Of course, this also introduces the idea of completing the task "correctly",
which is more in line with the original observation, but further complicates
the testing.

Thoughts?

-r-

18 Aug 2006 - 6:41pm
Peter Bagnall
2003

On 19 Aug 2006, at 00:21, Robert Hoekman, Jr. wrote:
> If I said the time it takes to complete a task is a function of the
> number of steps involved in the task, some of us would say "duh!"
> and others would say I wasn't accounting for the user's mindset,
> decisions, etc. (Again, Fitts didn't account for these things
> either, so I'm not sure this needs to be addressed.)

Absolutely it's a "duh!" that time is going to increase as the number
of steps increases - but how? Is it linear, or better or worse than
linear? The relative complexity is certainly more interesting in that
regard - it's less obvious how that will impact time. Number of steps
is probably going to be pretty close to linear, I would imagine
(although not everything that's obvious is true ;-) ). But if you
double the complexity I have no idea what happens to the time taken -
does it double, quadruple, or increase by log(2)? That would be very
interesting to see. My own little experiment suggested that only
substantial changes in the number of features had any effect,
suggesting it's a power or log relationship - but I never had enough
data to get anywhere near that precise.
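The linear-versus-logarithmic question here can be made concrete by fitting both forms to timing data and comparing residuals. The sketch below uses entirely made-up numbers and a simple least-squares fit through the origin; it shows the comparison technique, not any real result.

```python
import math

# HYPOTHETICAL data (invented for illustration): number of steps in a
# task vs. mean completion time in seconds.
steps = [2, 4, 8, 16]
times = [10.0, 19.0, 41.0, 79.0]

def fit_through_origin(xs, ys):
    """Least-squares slope for the no-intercept model y = a * x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Fit a linear model (time = a * steps) and a logarithmic model
# (time = a * log2(steps)), then compare summed squared error.
a_lin = fit_through_origin(steps, times)
a_log = fit_through_origin([math.log2(s) for s in steps], times)

sse_lin = sum((a_lin * s - t) ** 2 for s, t in zip(steps, times))
sse_log = sum((a_log * math.log2(s) - t) ** 2 for s, t in zip(steps, times))
print(sse_lin < sse_log)  # True for these invented numbers: linear fits better
```

With real data one would also want an intercept term and a proper model-comparison statistic, but even this crude version answers "which shape?" for a given data set.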

As for accounting for users' mindsets - no, you're not. This is a
limited model, but you can't do everything! That doesn't mean finding
this out isn't useful.

> The same thing would happen if I said the time it takes to complete
> a task is a function of the relative complexity of each step in the
> task. Providing we agree on the definition of "complexity", these
> are both almost cause-and-effect type statements. (There were a lot
> of steps, so it took longer to complete the task.)

But it's the degree of impact that's interesting - I don't think
anyone doubts the direction of the effect, but the strength of the
effect is worth looking at. Especially linearity (or otherwise).

> Maybe there is merit in splitting them up, because they do seem
> easier to test this way, and the two results, if both were proven
> true, could be combined to make the single statement I originally
> made. *However*, the original statement said "number of steps
> involved in the task AND the relative complexity of each step".
> The "and" is important. Seems like splitting this into two
> hypotheses would cancel out that connection.

I'm not sure I follow why splitting it would get rid of the
connection - you'd not be changing the type of tests you'd be giving
people, just changing how you controlled the tests. Am I missing
something here?

The problem if you test them together is you wouldn't know which one
was responsible for any effect you find. For example - having lots of
steps might not affect success rate, but complexity might. If you put
the two together you may never find that out.
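The standard way to test both factors without the confound Pete describes is a factorial design: vary number of steps and step complexity independently, so each factor's effect (and their interaction) can be estimated separately. A minimal sketch, with invented condition levels:

```python
from itertools import product

# A 2x2 factorial design for the two factors in the hypothesis.
# The level names and values are illustrative assumptions.
step_levels = {"few_steps": 3, "many_steps": 9}
complexity_levels = {"simple": 1.0, "complex": 3.0}

# Every combination of levels becomes one experimental condition,
# so the effect of steps can be read off while complexity is held
# constant, and vice versa.
conditions = [
    {"steps": s_name, "complexity": c_name}
    for s_name, c_name in product(step_levels, complexity_levels)
]
print(len(conditions))  # 4
```

Each group of subjects would see one condition; comparing across rows isolates the step-count effect, and across columns the complexity effect.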

Cheers
--Pete

----------------------------------------------------------
Simplicity is the ultimate sophistication.
- Leonardo da Vinci, 1452 - 1519

Peter Bagnall - http://people.surfaceeffect.com/pete/

18 Aug 2006 - 8:34pm
dszuc
2005

Hi Folks:

Other factors ... Does it also depend on the task? Will users be willing to
spend more time to complete the task, depending on a) the task and b) other
alternatives to completing that task, e.g. phone, ask a colleague, go to a
shop, etc.?

Another factor may also be *download time* -
http://www.uie.com/articles/download_time/

Rgds,

Daniel Szuc
Principal Usability Consultant
Apogee Usability Asia Ltd
www.apogeehk.com
'Usability in Asia'

-----Original Message-----
From: discuss-bounces at lists.interactiondesigners.com
[mailto:discuss-bounces at lists.interactiondesigners.com] On Behalf Of Robert
Hoekman, Jr.
Sent: Saturday, August 19, 2006 7:45 AM
To: Peter Bagnall
Cc: Barbara Ballard; discuss
Subject: Re: [IxDA Discuss] Hypotheses about tasks

[Please voluntarily trim replies to include only relevant quoted material.]

What about this statement? (Thinking out loud here ...)

"The time it takes to correctly complete a task is a function of the number
of steps involved in the task and the subject's ability to accurately
perform each step."

This implies that "ability" must be measured. What I mean is that the time
it takes to correctly complete each step affects the time it takes to
correctly complete the task. That's obvious. So "ability" is measured by
whether or not the subject does, in fact, complete each step correctly.

This does a better job of accounting for the subject's mental state,
decision-making process, etc, and removes the need to define "complexity".

Of course, this also introduces the idea of completing the task "correctly",
which is more inline with the original observation, but further complicates
the testing.

Thoughts?

-r-

18 Aug 2006 - 9:24pm
Dave Malouf
2005

> Other factors ... Does it also depend on the task? Will users
> be willing to
> spend more time to complete the task, depending on a) the
> task b) other
> alternatives to completing that task e.g. phone, ask
> colleague, go to shop
> etc.

I was thinking along the same lines.
One thing is enjoyment. A task might take longer by choice.
Watching a movie, listening or talking, playing

This alludes to the point I was making before.

Robert, the question you are trying to ask with this statement feels as if
you are saying "time = bad" or even "complexity = bad", and I don't think
either statement is true.

I'm also not sure of the value of measurement. Never really been all that
interested in measuring anything other than perception of success by the
end-user and the business.

-- dave

19 Aug 2006 - 11:48am
Robert Hoekman, Jr.
2005

> I was thinking along the same lines.
> One thing is enjoyment. A task might take longer by choice.
> Watching a movie, listening or talking, playing

Sure - but again, I can't really measure enjoyability in a way that
gives us a concrete fact because it will vary from person to person.

I've been thinking about "rate". As in, the time it takes to complete
a task is a function of the number of operations involved in the task
and the rate at which the subject completes each operation. Or, the
rate at which the subject CAN complete the operations (software can fail).

> Robert, the question you are trying ask with this statement feels as if you
> are saying "time = bad" or even "complexity = bad" and I don't think either
> statement is true.

I am using time as a measure of difficulty, which may not be right.
But I'm not necessarily saying that complexity is a bad thing. It can
be in a purpose-driven task, but not in an enjoyment-driven task.
DonnieDarko.com takes forever to get through, but that's because it's
meant as a journey and not a get-in-get-out interaction.

> I'm also not sure of the value of measurement. Never really been all that
> interested in measuring anything other than perception of success by the
> end-user and the business.

I keep going back to this myself. If there is a way to measure
"difficulty", I'm interested in proceeding in that direction, because
the observation was that tasks got more difficult as users got further
away from the starting point of the task. More steps, more complicated
steps, more decisions, etc., all lead to more "difficulty".

The question with this angle is about how to prove it, because it only
applies if the interaction is a get-in-get-out type task. So is it
complexity I'm trying to measure? Is it that the complexity of a task
is defined by the number of operations and the complexity of each
operation?

There's obviously something here, or this thread wouldn't be thriving
like it is. I'm just not sure exactly how to turn the observation into
something provable and reliable that can be used as a guide in future
work.

-r-

19 Aug 2006 - 12:16pm
Robert Hoekman, Jr.
2005

The thing that keeps tugging at my brain is that even if this is not
measurable in any scientific way, it's still a useful statement and
can still be trusted and used to guide the design of task flows.

Does the statement really have to be measured scientifically to become a law?

Parkinson's Law that "work expands so as to fill the time available
for its completion" is not necessarily provable, but we all nod our
heads and say, "Yup - it sure does." It's not a scientific law, but
it's still called a Law. Why is that?

Also, the fact that my original statement excludes a user's mental
state and such as factors doesn't make it less true.

For example, Hick's Law that "the time to choose between a number of
alternative targets is a function of the number of targets and is
related logarithmically" doesn't account for a user's ability or
willingness to choose the target, but we still recognize the truth of
it.

Hick's Law covers only a sliver of the whole picture. My original
statement does the same thing. It says that the number of steps and
the complexity of the steps contribute to the time it takes to
complete a task. No one here has actually said this is untrue, only
that "steps" and "tasks" and "complexity" are ambiguous.

They're not really ambiguous if we rely on dictionary definitions of each term.

So, I guess we should backtrack here for a moment. What, exactly,
makes a law a law? Some people, it seems, simply decree that a
statement is a law and it's referred to as law from then on.

If this is how it's done, then my statement should be something like
"the difficulty of completing a task is a function of the number of
steps involved in the task and the relative complexity of each step."
And I could just leave it at that, because it is the truest reflection
of the original observation, and also because I can simply decree that
it's a law.

Somehow, I think many people wouldn't accept this, despite the fact
that they accept so many other "laws" regardless of their scientific
provability.

-r-

On 8/18/06, Barbara Ballard <barbara at littlespringsdesign.com> wrote:
> On 8/18/06, Robert Hoekman, Jr. <rhoekmanjr at gmail.com> wrote:
> > Interesting point.
> >
> > I'm not sure it matters that we define "step" (but please debate this). A
> > step could be as broad as "go to Google.com" or as low-level as "click the
> > browser icon to open the browser". Each step still has a level of
> > complexity, and as long as all tasks being analyzed are measured using the
> > same level of granularity, the results are valid.
> >
>
> Using your model of difficulty as a function of steps, Yes.
>
> A step in a process, to my mind, includes a decision and some set of
> actions to achieve that decision. Certainly each step has some
> complexity, and difficulty is an unspecified function of number of
> steps.
>
> On the other hand, if step = click, which was my original proposal for
> determining distance, then I would have to say that scrolling from KS
> to KY on my way to NY in a list does not particularly affect
> difficulty. Time, yes. Annoyance, yes. Complexity, no. Difficulty,
> only in an uninteresting amount.
>
> --
> Barbara Ballard
> barbara at littlespringsdesign.com 1-785-550-3650
>

21 Aug 2006 - 12:59pm
leo.frishberg a...
2005

Although I don't want to add to the increasing heat of this thread, the
impugnment of one of my early heroes, Parkinson, was enough to inspire
my response.

Parkinson's Law is exemplary of a scientific law: a statement based on
empirical observations and measurements, in other words: "a statement of
fact meant to explain, in concise terms, an action or set of actions".
That it happens to have important implications for public policy,
organizational engineering, etc. only further substantiates its
validity.

As Jared has so kindly remarked, the pursuit of science in this area has
been on-going for dozens of years. The social sciences have been
struggling for over a century to be recognized as equally "scientific"
as the hard sciences in formulating "laws." I could argue that the
"science" of statistics was invented to bring legitimacy to the soft
science investigations - it doesn't matter that variance occurs among
individuals as long as an effect can be observed in the aggregate.

The concerns you've mentioned, Robert, about all of the variances in
"feelings", "frustrations", etc. reducing their viability as measures to
contribute to a "law" really boil down to the level of difficulty in
designing a set of experiments, not that they aren't measurable or even
important contributors to the law.

Keep up the discussion, it's been a great thread!

Leo

>-----Original Message-----
>From: discuss-bounces at lists.interactiondesigners.com
>[mailto:discuss-bounces at lists.interactiondesigners.com] On
>Behalf Of Robert Hoekman, Jr.
>Sent: Saturday, August 19, 2006 11:16 AM
>To: Barbara Ballard
>Cc: discuss
>Subject: Re: [IxDA Discuss] Hypotheses about tasks
>
>
>[Please voluntarily trim replies to include only relevant
>quoted material.]
>
>The thing that keeps tugging at my brain is that even if this is not
>measurable in any scientific way, it's still a useful statement and
>can still be trusted and used to guide the design of task flows.
>
>Does the statement really have to be measured scientifically
>to become a law?
>
>Parkinson's Law that "work expands so as to fill the time available
>for its completion" is not necessarily provable, but we all nod our
>heads and say, "Yup - it sure does." It's not a scientific law, but
>it's still called a Law. Why is that?
>
