Eye-Tracker software/hardware recommendations

14 Aug 2009 - 4:31am
William Hudson
2009

Kristen -

I attended a pros-versus-cons debate on eye-tracking a few months ago in
London. On the 'cons' side was Kara Pernice, the MD at NNG. Her main
thrust was that eye-tracking was largely irrelevant for most usability
work, particularly given the expense.

I have since had a couple of conversations on this with heavy
eye-tracking users. Both confessed that it really was overkill for most
usability studies and that you did have to be experienced in order to
interpret the results correctly (so it wasn't something you would use
casually). On the plus side, it does seem to be a good way to engage
technology-oriented clients!

Regards,

William Hudson
Syntagm Ltd
Design for Usability
UK 01235-522859
World +44-1235-522859
US Toll Free 1-866-SYNTAGM
mailto:william.hudson at syntagm.co.uk
http://www.syntagm.co.uk
skype:williamhudsonskype

Syntagm is a limited company registered in England and Wales (1985).
Registered number: 1895345. Registered office: 10 Oxford Road, Abingdon
OX14 2DS.

Confused about dates in interaction design? See our new study (free):
http://www.syntagm.co.uk/design/datesstudy.htm

12 UK mobile phone e-commerce sites compared! Buy the report:
http://www.syntagm.co.uk/design/uxbench.shtml

Courses in card sorting and Ajax interaction design. London, Las Vegas
and Berlin:
http://www.syntagm.co.uk/design/csadvances.shtml
http://www.syntagm.co.uk/design/ajaxdesign.shtml

> -----Original Message-----
> From: new-bounces at ixda.org [mailto:new-bounces at ixda.org] On Behalf Of
> Kristen
> Sent: 13 August 2009 11:23 AM
> To: discuss at ixda.org
> Subject: [IxDA Discuss] Eye-Tracker software/hardware recommendations
...

Comments

14 Aug 2009 - 7:32am
Jared M. Spool
2003

On Aug 13, 2009, at 10:23 AM, Kristen wrote:

> I am currently setting up a user research lab and am looking into
> purchasing eye-tracker software/hardware. I'm wondering what other
> labs use and the pros/cons of those systems.

I'm with William. I suggest you get an Ouija Board instead of an eye-
tracker. It will produce exactly the same useful results, but at
significantly less cost. New Orleans-style Voodoo Dolls and Tarot
Cards are also good substitutes. Again, the accuracy is identical
across all of them.

Jared

14 Aug 2009 - 7:48am
Joshua Porter
2007

Interesting take from Google on their use of eye trackers:

"In addition to search research, we also use eye-tracking to study the
usability of other products, such as Google News and Image Search. For
these products, eye-tracking helps us answer questions, such as "Is
the 'Top Stories' link discoverable on the left of the Google News
page?" or "How do the users typically scan the image results — in
rows, in columns or in some other way?"

Eye-tracking gives us valuable information about our users' focus of
attention — information that would be very hard to come by any other
way and that we can use to improve the design of our products.
However, in our ongoing quest to make our products more useful,
usable, and enjoyable, we always complement our eye-tracking studies
with other methods, such as interviews, field studies and live
experiments."

More here: http://googleblog.blogspot.com/2009/02/eye-tracking-studies-more-than-meets.html

On Aug 14, 2009, at 8:32 AM, Jared Spool wrote:

>
> On Aug 13, 2009, at 10:23 AM, Kristen wrote:
>
>> I am currently setting up a user research lab and am looking into
>> purchasing eye-tracker software/hardware. I'm wondering what other
>> labs use and the pros/cons of those systems.
>
> I'm with William. I suggest you get an Ouija Board instead of an eye-
> tracker. It will produce exactly the same useful results, but at
> significantly less cost. New Orleans-style Voodoo Dolls and Tarot
> Cards are also good substitutes. Again, the accuracy is identical
> across all of them.
>
> Jared
>
> ________________________________________________________________
> Welcome to the Interaction Design Association (IxDA)!
> To post to this list ....... discuss at ixda.org
> Unsubscribe ................ http://www.ixda.org/unsubscribe
> List Guidelines ............ http://www.ixda.org/guidelines
> List Help .................. http://www.ixda.org/help

14 Aug 2009 - 1:23pm
Jared M. Spool
2003

On Aug 14, 2009, at 7:48 AM, Joshua Porter wrote:

> Interesting take from Google on their use of eye trackers:
>
> "In addition to search research, we also use eye-tracking to study
> the usability of other products, such as Google News and Image
> Search. For these products, eye-tracking helps us answer questions,
> such as "Is the 'Top Stories' link discoverable on the left of the
> Google News page?" or "How do the users typically scan the image
> results — in rows, in columns or in some other way?"
>
> Eye-tracking gives us valuable information about our users' focus of
> attention — information that would be very hard to come by any other
> way and that we can use to improve the design of our products.
> However, in our ongoing quest to make our products more useful,
> usable, and enjoyable, we always complement our eye-tracking studies
> with other methods, such as interviews, field studies and live
> experiments."
>
>> More here: http://googleblog.blogspot.com/2009/02/eye-tracking-studies-more-than-meets.html

Ok, maybe this warrants a more serious response.

The problem with Google & everyone else's use of eye tracking is that
it requires a leap of faith from observation to inference.

We can see the observations clearly and, most of the time, we can agree
on them. An observation is that the user's gaze was recorded at a
specific x/y coordinate for a specific time period. Another
observation might be that the device didn't record any gaze fixations
on a different x/y coordinate.

We might also observe that the first x/y coordinate matches up with a
link to a news story. The second x/y coordinate matches up with an
advertisement.

So, we could conclude that the fixation of the user was on the news
because they were interested in it. And that they didn't look at the
ad because they weren't interested.

But that conclusion could be very flawed. Assuming we can account for
any calibration errors in the device (where the x/y coordinates didn't
actually match the news link or ad -- a frequent occurrence in state-
of-the-art eye tracking systems), we still don't know the brain
activity behind the gaze fixations.

Maybe they stared at the news link because they were completely
baffled by the headline? Maybe they didn't realize it was a news
headline and thought it was something else?

Maybe they actually saw the ad in a quick, transient glance that was
too fast for the eye tracker to pick up? Maybe they registered the ad
out of their peripheral vision, beyond the foveal focus
region? (Many eye trackers won't show an experienced user's eyes
moving to the scroll bar even though they move their mouse there to
scroll. It seems they acquire the scroll bar with peripheral vision,
keeping their focus on the items of interest on the screen.)

Jumping too quickly from observation to inference is the #1 cause of
design problems. We assume things without eliminating other
possibilities and run with them. Spending a little more time to test
our inferences, to ensure we've properly qualified them and eliminated
alternative explanations, can save a lot of energy and downstream
problems.

(I've written about this in an article called "The Road to
Recommendation": http://www.uie.com/events/roadshow/articles/recommendation/)

So, here's the problem with eye trackers: Every inference must be
tested without the eye tracker. As the folks from Google say:

> we always complement our eye-tracking studies with other methods,
> such as interviews, field studies and live experiments.

Fact is, had they started with the other methods, they wouldn't have
discovered anything new in the eye tracker. And the other methods are
cheaper, more efficient, and more beneficial.

There is one advantage to eye tracking hardware. On a recent visit to
the Googleplex, I asked about their usage there and about this
observation/inference problem. They agreed with me, but told me about
the "real" reason they use the devices.

It turns out, the engineers and developers are more likely to attend
usability tests when the eye tracker is in use. In the few labs they
have that aren't outfitted with the devices, the engineers and
developers rarely attend. They line up to watch eye tracking tests.

For that purpose, the device may have some value. But so does good
Chinese food. I've found that a quality catering job is much more
cost-effective than mucking with the toys. (At Google, that might not
work so well, since they have four-star chefs in their cafeteria --
hard to top that with catering.)

That's my more serious response. It pisses off eye-tracking
aficionados world-wide. I'm good with that.

Jared

Jared M. Spool
User Interface Engineering
510 Turnpike St., Suite 102, North Andover, MA 01845
e: jspool at uie.com p: +1 978 327 5561
http://uie.com Blog: http://uie.com/brainsparks Twitter: @jmspool

15 Aug 2009 - 10:11am
Will Hacker
2009

Eye-tracking is just one of many techniques, and should never be a
replacement for observation and exploration of real users'
experiences and motivations. It does require inferring what the user
was thinking about as their eye moved, while the eye movement itself
could have been caused by any number of things unrelated to the
design. Are they tired, did something in the room distract them, did
they see a word that had particular meaning to them personally? All
these could give you false readings of the effectiveness of the
design.

Eye-tracking can be useful if you are trying to make a point with
technology management, who often are impressed by what gadgets do and
think of usability testing as "soft" science. If you have some
findings you are having difficulty communicating, eye-tracking may
be a means of "proving" it to skeptical engineers.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=44684

15 Aug 2009 - 1:52pm
Jared M. Spool
2003

On Aug 15, 2009, at 8:11 AM, Will Hacker wrote:

> Eye-tracking is just one of many techniques, and should never be a
> replacement for observation and exploration of real users'
> experiences and motivations.

I hear that.

However, its cost is so much larger than that of the other techniques
you have to do anyway *and* it doesn't tell you anything you can't learn
from those other techniques. So why spend the resources to get info you
have to verify with cheaper methods anyway?

> Eye-tracking can be useful if you are trying to make a point with
> technology management, who often are impressed by what gadgets do and
> think of usability testing as "soft" science. If you have some
> findings you are having difficultly communicating, eye-tracking may
> be a means of "proving" it to skeptical engineers.

Maybe it's just me, but I never have this issue. I don't need to trick
anyone into believing that the little dot on the eye-tracker screen or
red splotch on the heat map means something it probably doesn't mean,
just because I say it means that.

User research, when done well, isn't a "science" at all. It's an
engineering tool. If you have to demonstrate its scientific validity
(and deal with the fact that the people you're working with perceive
it as a "soft science"), then you've already lost the game, in my
opinion.

They should understand why it's valuable before you've invested any
resources in doing it. Otherwise, you're stuck making crap up to
support your point of view.

That's my opinion. It's worth what you paid for it.

Jared

Jared M. Spool
User Interface Engineering
510 Turnpike St., Suite 102, North Andover, MA 01845
e: jspool at uie.com p: +1 978 327 5561
http://uie.com Blog: http://uie.com/brainsparks Twitter: @jmspool

14 Aug 2009 - 8:48am
Anonymous

Thank you for your responses so far!

I have read several forums that have debated the pros and cons of
eye-tracking in general. Personally, I am a proponent of
eye-tracking, and I am looking for others who hold the same opinion.

I would like to know which systems labs are currently using and a
review of that system. I apologize if this was not clear!

Thanks everyone!!


15 Aug 2009 - 3:27pm
Nickgould
2009

Look, nobody said eye-tracking is a substitute for talk aloud, and
nobody said it was a perfect methodology. We use it in conjunction
with talk aloud for many of our tests and we find it actually does
provide additional value. There are certainly issues: 1) the
technology is still buggy, and 2) the analysis can be difficult and
time consuming.

Issue #1 is being resolved and will ultimately work itself out. For
issue #2, it is the responsibility of the user (us) to deploy the
eye-tracker in a way that is sensible given its limitations, and to
exercise restraint in our interpretation of the results -- i.e.,
don't "read into" the results; just report what the users saw and
didn't see. We NEVER claim to know what the user was thinking (i.e.,
we don't "infer" from these observations), unless the ET data
stimulates a discussion during talk aloud and the user reports their
impressions or thoughts directly.

I agree with Jared that user research is not a science. Clients are
not confused about this fact (normally) where talk aloud usability is
concerned. However, they are quite tempted to view eye-tracking as a
somehow more scientific and therefore more valid methodology. Again,
it's our responsibility to be straightforward about the tool's
capabilities and shortcomings.

I also agree with Jared that the hardware, software, and training
investment associated with ET are significant given the current state
of the technology. And, yes, you can get GREAT results as a user
researcher without spending this money. Kristen, you will need to
decide whether you and your clients will value the results enough to
justify the cost. However, do not think that just buying the machine
is enough. There is no user manual and the learning curve is steep.
We have heard of many firms that bought the eye tracker and it sits
collecting dust in the corner...

Where I part company, respectfully, with Jared is in his assertion
(made here and elsewhere, forcefully) that ET provides no information
that can't be learned through traditional means. That's just
factually false. Eye tracking tells you where users look on the page;
where attention clusters and the paths they take as they explore.
Users can't tell you this information. And when the question you are
asking is "do they see X?" the eye tracker can give you your answer.
It's that simple. We, and our clients, have found these answers to
be valuable. Moreover we feel that thinking through these issues has
broadened our understanding of how users interact with designs and
how to produce the most actionable results for our clients.

Anyway - that's my opinion and my firm's experience.

Nick Gould
CEO
Catalyst Group
www.catalystnyc.com


15 Aug 2009 - 8:28pm
Jared M. Spool
2003

On Aug 15, 2009, at 1:27 PM, Nick Gould wrote:

> Where I part company, respectfully, with Jared is in his assertion
> (made here and elsewhere, forcefully) that ET provides no information
> that can't be learned through traditional means. That's just
> factually false. Eye tracking tells you where users look on the page;
> where attention clusters and the paths they take as they explore.
> Users can't tell you this information. And when the question you are
> asking is "do they see X?" the eye tracker can give you your answer.
> It's that simple. We, and our clients, have found these answers to
> be valuable. Moreover we feel that thinking through these issues has
> broadened our understanding of how users interact with designs and
> how to produce the most actionable results for our clients.

I contend two things:

1) A trained observer can get much of this information through what
you call "traditional" means.

2) You can't tell from an eye tracker what the user "sees".

All you can tell from the eye tracking system is what the user
focuses their gaze on. What the user sees requires cognitive effort
the eye tracker doesn't measure. (Anyone who's had the experience of
not seeing the ketchup bottle that is clearly on the shelf in front of
them in the fridge has had the experience of gazing at something
without seeing it. My late wife called this phenomenon "male
refrigerator blindness." I'm quite afflicted myself.)

When a consultant looks at eye tracking results and says, "The user
clearly sees X but they don't see Y", they are making shit up.

I know because I've tested this theory many times. Just hand eye
tracking results to three or more "experts" in eye tracking and they
will each report radically different interpretations of the data.
(Bonus test: change the thresholds on the eye tracking so that the
heatmaps and gaze paths all shift around, and it gets even more
entertaining as they try to tell you that the same user on the same
results did radically different things.)

With all due respect to Nick and his team at Catalyst, in my
experience, eye tracking is a tool that consultants use to
differentiate their services from all the consultants that don't have
eye tracking. ("Hire us because we have that eye tracking gizmo and
they don't!") When companies buy eye trackers for internal use, very
few continue to use it after a few months.

As Nick said, it's a great expense and takes real skill and expertise
to operate. Plus, there's no common understanding or best practices on
how to use it. Minor adjustments to the device, such as setting the
capture thresholds, will report radically different results, as it
captures more or less gaze data which can be very noisy. You control
the amount of noise by adjusting the thresholds, but that also can
miss important gaze data. There's no standards or common understanding
as to what the ideal settings are. (In fact, they are very specific to
local lighting conditions, physiology of the subject, and other local
contextual conditions. So, from one day to the next, the device
reports different results.)
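The threshold sensitivity described above can be demonstrated with a toy fixation detector. This is a rough sketch, loosely in the spirit of dispersion-based fixation classification; the gaze trace and the threshold values are invented for illustration and don't correspond to any real device's settings.

```python
# A minimal sketch of duration-threshold sensitivity in fixation detection.
# Consecutive gaze samples (arriving at a fixed rate) that stay within
# `max_disp` pixels of the group's first sample form one candidate fixation;
# only groups of at least `min_len` samples are reported.

def detect_fixations(samples, max_disp, min_len):
    """Return candidate fixations as (start index, length in samples)."""
    fixations, start = [], 0
    for i in range(1, len(samples) + 1):
        if i == len(samples) or (
            abs(samples[i][0] - samples[start][0]) > max_disp
            or abs(samples[i][1] - samples[start][1]) > max_disp
        ):
            if i - start >= min_len:
                fixations.append((start, i - start))
            start = i
    return fixations

# An invented noisy gaze trace: a long dwell, a saccade, then a brief dwell.
trace = [(100, 100), (102, 101), (101, 99), (103, 100),   # stable dwell
         (400, 300),                                       # saccade
         (402, 301), (401, 299)]                           # brief dwell

loose = detect_fixations(trace, max_disp=10, min_len=2)   # brief dwell counts
strict = detect_fixations(trace, max_disp=10, min_len=4)  # it disappears
```

Same trace, two threshold settings, different fixation counts -- a heatmap built from `loose` and one built from `strict` would tell two different stories about the same session.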

The lifetime cost of buying, installing, training, using, and
maintaining an eye tracker in an internal corporate setting can be
equivalent to as many as 40 additional usability sessions a year.
Personally, I'd rather get the data from the 40 additional users than
spend it diddling with an ineffective piece of hardware.

That's my opinion. I'm a researcher who doesn't (any more) use an eye
tracker.

Jared

15 Aug 2009 - 11:00pm
Nickgould
2009

Oof, it's late, but hell, I'll take another swing... :-)

Jared, everything you say is true regarding the limitations of the
tool. I stated as much myself in the earlier post.

Perhaps "see" was a loaded term to use as I didn't mean to imply
that we understood the user's cognitive process - just what their
gaze fixated on. I thought I made that clear, but I guess not.

Jared, I would really appreciate it if you would share the data - or
even the methodology for how you tested your theory. I would conduct
that same test tomorrow to settle this once and for all. If there is
a demonstrable basis to completely invalidate eye tracking then I
would really want to know about it! Contrary to your assertion that
we "make shit up" or bamboozle our clients with "gizmos" we
actually do give a crap about the quality of our work (and our
reputation, with our clients and peers) and we wouldn't be using the
method if we didn't think it had value.

As effective as this forum is as a platform for debate (at least we
can use more than 140 characters), how about we continue it in person
- over a beer -- next time you're in NYC? An "eye tracking beer
summit," if you will. At a minimum, I'd like to try to persuade you
that not everyone who uses an eye tracker is a snake-oil salesman...

NG


15 Aug 2009 - 11:14pm
Nickgould
2009

Aw, sorry Kristen, in all the fuss I missed your real question in
there... We use the Tobii system. Get in touch directly if you are
interested in discussing our experiences in more detail.

Best,

NG


16 Aug 2009 - 5:38am
Alan James Salmoni
2008

Hi Kristen,

Apologies for coming late. I've had some experience with eye
tracking during my scientific work and to be honest, it probably
wasn't worth the effort. Eye tracking measures immediate visual
focus (and not necessarily attention - it is possible that the two
can be split on occasions) and it's difficult to infer from this up
to the higher levels of cognition that usability work is most often
concerned with. Well used (i.e., experienced analysts, good
participants, an appropriate task, and suitable conditions), it can
shed light on the low level features of something, but miss any of
these out, and the data are probably misleading.

I used an eye-tracking system a while ago and cannot remember its name,
sorry, but I remember about 30% of participants being unmeasurable.


16 Aug 2009 - 10:53am
jeffpotter
2009

I couldn't help thinking about this discussion as I read over this post: http://scienceblogs.com/cortex/2009/08/information.php ( via @kathysierra )

"why even smart people can be so stubbornly ignorant in the face of reality."

16 Aug 2009 - 1:14pm
jeffpotter
2009

(continued) "… we don't treat all information equally. My salient fact is your irrelevant bit; your necessary detail is my triviality. Here's the paradox of curiosity: I only want to know more about that which I already know about."

Having just had an incident of 'male refrigerator blindness' I am still baffled by the occurrence. Motivation focuses attention, but perception is evidently disconnected, like a cognitive blind spot.

Some interesting notes on vision and perception: http://www.theflickingfingers.com/bl_spot.html

16 Aug 2009 - 11:26pm
msweeny
2006

Booyah Jared and spot on, of course.

marianne
msweeny at speakeasy.net


16 Aug 2009 - 11:44pm
Sharon Greenfield5
2008

Uh. When we use eye-tracking, we don't assume they clearly saw X and
not Y. We ask. It's science. Use all the tools available to you, in an
integrated way.

>
> When a consultant looks at eye tracking results and says, "The user
> clearly sees X but they don't see Y", they are making shit up.

17 Aug 2009 - 3:23am
Caroline Jarrett
2007

Jared:
> When a consultant looks at eye tracking results
> and says, "The user clearly sees X
> but they don't see Y", they are making **it up.

And using their tools badly.

What's with the hate campaign on eye-trackers, Jared?

This reminds me of the olden days when we first had video. It was an
expensive technology. It took time to learn to use, and to learn how to use
it properly. Agencies used it as a differentiator ("we have video!!!") when
they weren't strong enough to differentiate on the quality of their
thinking. Clients liked it. It was a (relative) waste of money.

Then, I found that the real benefit was that it allowed me to show clients,
quite easily, things that I'd had to learn how to see in many, many
sessions.

Eventually, video got cheap and easy. Now I'd routinely record stuff because
the overhead is minimal and sometimes, but by no means always, it's helpful
to show clients selections from the records and even (gasp) sometimes to
watch them myself to remind myself of something.

Now - fast forward to eye-trackers. An expensive technology, etc etc, right
up to (relative) waste of money. It's just technology! It's not a magic
bullet that will help the hard-of-thinking to do a better job.

And coming to the substantive point: I've used Tobii eye-trackers. They are
indeed expensive, the software is expensive, and it's expensive to keep up
with the upgrades. I looked quite seriously at getting one for a client who
had a short-term surplus money problem, but it was easy to decide to hire
one on the occasions that we considered it might be useful. They are
relatively easy to use, but they don't infallibly track everything.

Best
Caroline Jarrett
"Forms that work: Designing web forms for usability"
www.formsthatwork.com

17 Aug 2009 - 8:35am
Todd Warfel
2003

On Aug 15, 2009, at 2:52 PM, Jared Spool wrote:

> User research, when done well, isn't a "science" at all. It's an
> engineering tool. If you have to demonstrate its scientific validity
> (and deal with the fact that the people you're working with perceive
> it as a "soft science"), then you've already lost the game, in my
> opinion.

This is more a sign of an internally broken corporate culture than
anything else.

Cheers!

Todd Zaki Warfel
Principal Design Researcher
Messagefirst | Designing Information. Beautifully.
----------------------------------
Contact Info
Voice: (215) 825-7423
Email: todd at messagefirst.com
AIM: twarfel at mac.com
Blog: http://toddwarfel.com
Twitter: zakiwarfel
----------------------------------
In theory, theory and practice are the same.
In practice, they are not.

17 Aug 2009 - 8:40am
Todd Warfel
2003

On Aug 15, 2009, at 9:28 PM, Jared Spool wrote:

> When a consultant looks at eye tracking results and says, "The user
> clearly sees X but they don't see Y", they are making shit up.

What eye tracking doesn't tell you is why they were focusing on "X."
Okay, so, yeah, their eyes were gazing at this object in the center
left of the page for .08 of a second longer than the object 40
pixels to the right of it. Uh, huh... and so what?

This is the Web. It's about moving, interacting, finding, exploring.
Fixation doesn't really measure anything other than how long they
looked at what. As a designer, I don't care about fixation, I care
about discovery, interaction, transactions. Fixation doesn't tell me
that, it doesn't show me that. Watching someone use a system and
watching what they interact with does.

Inferring anything from fixation is sketchy at best.

FYI, I've used eye-tracking systems in the past and even the people
who are ET advocates will tell you that by itself, it's pretty much
just a good marketing tool. Personally, any study that only uses ET, I
wouldn't put an ounce of faith in. Just give me a person I can watch
and talk to.

Cheers!

Todd Zaki Warfel
Principal Design Researcher
Messagefirst | Designing Information. Beautifully.
----------------------------------
Contact Info
Voice: (215) 825-7423
Email: todd at messagefirst.com
AIM: twarfel at mac.com
Blog: http://toddwarfel.com
Twitter: zakiwarfel
----------------------------------
In theory, theory and practice are the same.
In practice, they are not.

17 Aug 2009 - 9:00am
Jared M. Spool
2003

On Aug 17, 2009, at 4:23 AM, Caroline Jarrett wrote:

> Jared:
>> When a consultant looks at eye tracking results
>> and says, "The user clearly sees X
>> but they don't see Y", they are making **it up.
>
> And using their tools badly.

Yet, that's what they do. Remember Spool's First Law of Competency: It
takes no skill to do something poorly. So, if you don't have skills,
you'll use the tools badly.

> What's with the hate campaign on eye-trackers, Jared?

I don't hate eye-trackers. I think, as a piece of hardware, it's very
cool. It's got lots of great applications. Hell, I even worked on
projects for the Navy, Army, and NASA that made very cool use of eye
tracking.

In fact, I'm surprised that the IxD world hasn't jumped all over these
devices. They bring a level of interaction (using eye movement to
control the device) that you can't get otherwise. Imagine popping up
menus by staring at a specific button, then selecting the right object
by just fixating on a control handle, for starters. Combine it with
touch and voice, and you have a really huge increase in multi-modal
interaction. Lots of interesting possibilities here.

> This reminds me of the olden days when we first had video. It was an
> expensive technology. It took time to learn to use, and to learn how
> to use
> it properly. Agencies used it as a differentiator ("we have
> video!!!") when
> they weren't strong enough to differentiate on the quality of their
> thinking. Clients liked it. It was a (relative) waste of money.
>
> Then, I found that the real benefit was that it allowed me to show
> clients,
> quite easily, things that I'd had to learn how to see in many, many
> sessions.
>
> Eventually, video got cheap and easy. Now I'd routinely record stuff
> because
> the overhead is minimal and sometimes, but by no means always, it's
> helpful
> to show clients selections from the records and even (gasp)
> sometimes to
> watch them myself to remind myself of something.

Maybe I don't get this analogy, because every lab I've ever worked in,
starting back when we built the first software usability testing lab,
had video. Costs have definitely come down, but that's not what we're
talking about. The value of video has always been understood.

> Now - fast forward to eye-trackers. An expensive technology, etc
> etc, right
> up to (relative) waste of money. It's just technology! It's not a
> magic
> bullet that will help the hard-of-thinking to do a better job.
>
> And coming to the substantive point: I've used TOBII eye-trackers.
> They are
> indeed expensive, the software is expensive, and it's expensive to
> keep up
> with the upgrades. I looked quite seriously at getting one for a
> client who
> had a short-term surplus money problem, but it was easy to decide to
> hire
> one on the occasions that we considered it might be useful. They are
> relatively easy to use, but they don't infallibly track everything.

This isn't about making eye trackers more cost effective. It's about
whether they add *any* value at all.

I contend they don't.

I contend, at best, they are theatrical devices to demonstrate a
theory of use gleaned elsewhere. If we just declared them as that --
as, like the video you talked about, a tool "to show clients things
that you'd learned" -- I'd be ok. We state clearly that we use them
purely for demonstration purposes. I could get behind that.

Where I start on my little rants is when people contend they'll learn
something from an eye tracker that they can't learn by just watching
users. It's not true. There's no evidence to support that statement.
And, what they claim they learn is often wrong. Wrong learnings lead
to wrong decisions. That's bad.

And, when people tell me that a nice TOBII system costs $30,000+, I
look at that money and see 60 non-eye-tracking usability sessions.
That's getting the team in front of 60 people, where they'll learn far
more than by watching the red dot bounce around the screen for a small
handful of folks with the budget they have left after they bought a
device that will inevitably sit in a corner months later.

If we can all agree that eye trackers are great tools for
demonstration purposes only, then I'll stop my "hate campaign," as you
called it.

Jared

16 Aug 2009 - 10:29pm
James Breeze
2009

Kristen,

I use Tobii eye trackers.

For a lab you will need:
1) Tobii Studio Professional software (or Enterprise software if you
need external observation capability).
2) Eye tracking hardware.
- a T60 for testing websites
- a T60 XL for testing larger screen sizes, e.g. business
applications
- an X60 for testing really large screens.

Tobii eye trackers are far and away the world's easiest eye
trackers to use, and they work with well over 90% of the population.
The software is continually being improved, and in a few months it
will provide a complete lab solution, from survey capability through
to automated multivariate data analysis.

In terms of the ongoing discussion, I must correct a few misguided
points:

Misconception: It's expensive
You can rent a Tobii eye tracker for a month, or even as little as a
day (we are trialling this in Australia) which makes it accessible
for most businesses.

Misconception: It's buggy
As Nick says, it is improving. Just make sure your software is kept
up to date as recent software is leaps and bounds ahead of earlier
versions.

Misconception: You need extensive training
Can you use Morae? Tobii Studio runs on the same (Camtasia) drivers
as Morae, and is no harder to use. Of course, once you use it, you
must analyse and interpret that data correctly. If you have the
cognitive capacity to interpret usability testing results then you
will be able to interpret eye tracking data. After all, it's all
research and relates to the justification of your claims.

Jared, don't forget engineering is science too. I wouldn't
want someone who doesn't understand the scientific method to do
any usability testing for me.

Misconception: Eye tracking is compelling, but so what?
If the techies like it, great! It's better to have them on your
side than not! Eye tracking is also a fantastic way of making
usability observation more interesting. If you can see someone's
eye gaze moving around on a site in real time it allows you (the
observer) and your client(s) to draw so many more insights from what
is being seen during testing.

Misconception: Eye tracking is THE answer:
Eye tracking is to usability testing, what card sorting is to
Information Architecture. You don't always have the time, money
or need to do it but it does provide a greater level of insight if
you can incorporate it into your study.

Misconception: The Think Aloud method produces the same insight that
you would get from an eye tracking study.
This is just not true. The Think Aloud method is very distracting to
the participant. A person's cognitive effort is split between
talking to an irritating experimenter and doing a complex task. It
is quite stressful for the test participant.

People are generally not good at describing what they are doing, or
expressing how they feel. Immediately following a Tobii eye
tracking test you can replay a movie which accurately shows where the
participant looked. This triggers their memory of what they did and
they can then talk to you about their experience, at their own pace.
This part of the process is absolutely critical, as knowing where
people looked is not enough to tell you why they looked there.

In summary, eye tracking is a very useful tool if the tests are set
up in the right way and the data analysed insightfully.

Part of the problem with eye tracking studies is that so many of the
published or blogged results are misinterpreted and invalid, as there
was little thought or consideration behind how the test was set up,
what was reported or what it really meant.

We can only hope that in the future more usability researchers are
exposed to fantastic tools such as Tobii Studio software and that
they have the scientific research skills to interpret the data.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=44684

17 Aug 2009 - 10:12am
Guy Redwood
2009

Cards on the table.
I love eye tracking. It's the sharpest tool in the box for user
experience research. It's the best way to observe natural
behaviour. Think-aloud in usability testing is unnatural and can
create false data.

More cards on the table.
I'm appalled at some of the work carried out by some of the eye
tracking companies. If you ever observe a user being encouraged to
think aloud during an eye tracking session, just walk out and find a
better research company.

In simple terms, there are two ways in which you can use eye tracking
equipment: quantitatively and qualitatively. Modern eye tracking
equipment is simple to use. Modern eye tracking equipment takes 30
seconds to set up. Modern eye tracking equipment is non-invasive.
Modern eye tracking equipment is very accurate. Don't listen to
the flat-Earthers; the earth is round and eye tracking is not a Ouija
board.

In quantitative analysis you can show people stimulus and measure how
they visually engage with it, ultimately allowing you to benchmark the
effectiveness of designs against the original brief. The user should
not talk during the sessions, as users will look at what they are
talking about and produce false data (such as f-patterns?). There are
varying degrees of inference with this type of research - but you keep
this in mind when using the evidence to make decisions. Who
doesn't want to see where people look when reviewing design
options?

In qualitative analysis, you sit people in front of an eye tracking
monitor and get them to engage with stimulus in a natural way. As
an example, if it's a website you're researching, you'd start the
participant off at Google and ask them to go buy that thing they'd
just been talking about. Once the user has completed their tasks, the
practitioner would play back the eye tracking to the user. This is
where the really cool stuff happens.
Giving a participant the visual cue of where they looked allows them
to recall conscious and subconscious strategies. If they looked at
something and didn't see it, they will tell you they didn't see it.
In fact, they will tell you why they didn't see it. The
ketchup-in-the-fridge analogy is very important and demonstrates why
it's important to use a retrospective methodology in testing.
Inference has no place in qualitative analysis in eye tracking
studies.

Here%u2019s the real deal.

Asking users to think out loud adds a huge cognitive load to the
user - so much extra cognitive load that users are more likely to
fail a task because of the thinking aloud itself.

Over 60% of your behaviour is automatic and we don't know
why/how we do things. We just do stuff.

So, when a user is thinking out loud in research, just what are
usability practitioners listening to? A projected persona?

Google sums this up in a nice quote from 2008:

"people are masters of saying one thing and doing another"

Eye tracking allows us to see what they do and retrospective review
gives us deep insight into why. Nobody should be making inferences
from heatmaps.

Please move away from the basic red-dot-bouncing-on-a-screen
argument, and think about what you could do with a tool that allows
you to really understand user strategies, in depth.

Guy Redwood
founder of SimpleUsability
http://www.simpleusability.com


17 Aug 2009 - 1:41pm
Todd Toler
2009

I'm ambivalent on the issue of ET... always finding the results interesting to look at but not having yet pulled the trigger on commissioning it for an actual project. I'm concerned to see it positioned against the pragmatic, probing-question style of think-aloud protocols, which is deservedly a mainstay of the commercial usability lab. Let's not step back into the behaviorist's world of emphasizing rigid experimental conditions in HCI research - which Mr. Nielsen and Mr. Spool have bravely led us out of.

17 Aug 2009 - 3:43am
tom smith
2009

I have limited experience with eye-tracking but, for me, you haven't
covered the most important reasons to use it.

1. Big bosses love it... it's a persuader.... it's science-y but
funner.

2. Talk Aloud, when a participant is watching their video, becomes
"Post Talk Aloud" and they can tell you what they were thinking and
doing. This approach is MUCH more informative and inclusive/
collaborative than talk aloud, where the participant is centre
stage and "being tested"... this shift is subtle but hugely
important.


16 Aug 2009 - 7:39pm
Robson Santos
2008

Hello, Kristen
In our lab we have a Tobii eye tracker, using Tobii Studio.
An eye-tracking study is not enough for analyzing usability, but it
can be very useful for collecting specific data about screen layouts
and the positioning of interface elements.

If you have enough budget, I do recommend eye-tracker hardware. But
before buying one, be sure that you have good cameras and
hardware/software for video editing.

Best regards,

Robson Santos, D.Sc.
Senior Usability and User Experience Researcher
[+55 92] 8407 0523
[+55 92] 2126 1101
http://www.robsonsantos.com
http://interfaceando.blogspot.com

On Thu, Aug 13, 2009 at 6:23 AM, Kristen<kristenm at pmgintelligence.com> wrote:
> I am currently setting up a user research lab and am looking into
> purchasing eye-tracker software/hardware. I'm wondering what other
> labs use and the pros/cons of those systems.
>
> Thanks!
>
>
> ________________________________________________________________
> Reply to this thread at ixda.org
> http://www.ixda.org/discuss?post=44684
>
> ________________________________________________________________
> Welcome to the Interaction Design Association (IxDA)!
> To post to this list ....... discuss at ixda.org
> Unsubscribe ................ http://www.ixda.org/unsubscribe
> List Guidelines ............ http://www.ixda.org/guidelines
> List Help .................. http://www.ixda.org/help
>

17 Aug 2009 - 5:51am
ritchielee
2009

I think a solid example is needed to vouch for any true benefits
beyond agency differentiation. Monitoring attention in such pinpoint
detail seems a distraction for all parties from what are probably
fundamental design issues. Observation, interview and heuristics are
much stronger methods, which should be exploited fully before donning
headgear.


16 Aug 2009 - 2:01pm
Chuck Martin
2009

Well, I feel like I'm taking a huge risk here, both being late and
new to the party and not being a degreed, practicing professional in
this specific field, but I'm having trouble staying silent because I
can't fully support Jared's assertions.

First he says that "A trained observer can get much of this
information through what you call 'traditional' means." Yet the
perception I've had in reading his responses here gives me the
impression--accurate or not--that eye trackers provide no additional
useful information for a "trained observer." As a technical
communication professional who has a formal degree in the field and
has practiced for nearly two decades, the difference between the
absolute statements and "much of the information" stood out.

I don't claim to be an expert in eye tracking, but I do know that
it's rare for having more data to be a bad thing, especially if the
data is good. I do know, thanks to my training, the mechanics of how
people actually read: their eyes do not flow smoothly along text but
rather jump between brief stopping points (fixations) in rapid
movements called saccades. I understand that on web pages and in web and standalone
applications, users' eyes scan similarly, not to read and
comprehend, but to find. Although currently expensive and requiring
specialized training, it seems to me the eye tracking data adds to
the rest of the information that a competent user researcher can
gather using other "traditional" methods. And although eye tracking
data can be interpreted differently by different "experts," so too
can other, non-eye tracking data.

So if you don't believe in eye tracking, fine. But I'm not
convinced that the data that the technique gathers isn't valuable,
and even useful when combined with other data. (I should add that I
don't believe that eye tracking data should be used in isolation.) I
think it is *a* useful tool, certainly not the only useful tool, and
can provide additional useful information that can only lead to
better design. And isn't that the ultimate goal?

Second, Jared said that "You can't tell from an eye tracker what
the users 'sees'." True, but I think that assertion misses a
point. Eye tracking data can lead researchers to investigate why
users "looked" at a particular point, why they focused on a
particular point, why they tracked in certain directions. They can
then delve into whether anything at those points was actually
"seen," and whether anything at those points was recognized and
understood. Eye tracking can also discover whether areas that
developers and designers *want* users to see are in fact looked at,
which can lead to more questions from researchers.

I don't think anyone can reasonably claim that an eye stopping on a
particular point equates to its owner actually "seeing" anything
there. And I agree that anyone who suggests that is "making shit
up." Anyone who uses only eye tracking data to generate conclusions
clearly isn't dong the job competently and completely.

OK, we don't (yet) have developed best practices and theories for
use of eye tracking technologies. (As far as I know anyway; I know
too that there's an eye tracking lab at the University of
Washington's department of Human Centered Design and Engineering
(formerly Technical Communication), and it wouldn't surprise me at
all to learn that research done in that lab has found some best
practices and theories.) But we do know that useful data can be
captured, and used correctly, can improve test results.

So then the question comes down to: Is it worth the investment?

That's a question that only an individual company can decide. But if
you've already decided to invest in usability, then you're (finally)
already in a better mindset to make a better product.

And to be honest, if you actually get nothing more out of it than
getting more members of the team to observe your tests (as Jared
suggests, it simply draws out the geeks), then that can only be a
Good Thing. I
know how hard it is to get programmers and other engineers to observe
usability testing. And I also know how important it is to have them
there.


17 Aug 2009 - 1:21pm
Jesse Zolna
2008

Great discussion!! Really enjoying all the thought and insights (on
both sides).

In general, I have to side with the pro-ET camp here. The "male
refrigerator blindness" and peripheral vision problems are
important, and must be considered. But don't these problems have
corollaries in talk-aloud (e.g., unnatural behavior or telling the
interviewer what you think they want to hear) and every other method?

I must say that I think the characterization of ET data
interpretation as "making shit up" is unfair for two
reasons. First, making shit up is the basis of data interpretation.
Couldn't giving several "experts" the same talk-aloud
protocol and test interface also lead to more than one story?
Claiming that ET interpretation is more susceptible to variance might
be fairer (and less abrasive). Secondly, the reliability of an
analysis is only as good as the analyst. If the experts surveyed
could not tell that the data with different thresholds were the same,
or even worse did not ask what the thresholds were, I think the
problem may lie with their approach to analysis (or level of
expertise). A true expert knows the limitations of ET (and any
method they use, including talk-aloud and task-based protocols, both
of which have plenty of limitations) as well as important ways to
expose and deal with those limitations.

My take is that ET work should be reserved for specific situations,
and ET is best used to complement other methods. Handled with care,
I think ET can provide quite a bit of insight.

Jesse


17 Aug 2009 - 7:10pm
Jay Eskenazi
2009

When used together with other user experience research, eye tracking
offers tremendous value for improving products.

Eye tracking measures unconscious behavior - and provides data that
people simply cannot verbalize in other common user research methods,
especially think aloud usability testing protocols. Decades of
psychology research show that much human behavior occurs at an
unconscious level.

The human eye, for example, can make up to 5 fixations per second and
this occurs below people's level of conscious awareness. So in a 30
second scan of a typical homepage, the customer may be looking at up
to 150 items on the page. Your customers (or research participants)
simply cannot verbally tell you where their eyes are going and this
is exactly the value that good eye tracking data provides.

Knowing what people are looking at - and in what order - is essential
information for improving interaction and visual design on websites.
Creating a design that guides eye flow in a way that meets both
business and user needs is an essential part of an "easy-to-use"
design.

Our experience is that visual attention data IS correlated with
behavioral performance metrics. If people don't "see" something,
then they are less likely to click it. Rather than waiting many weeks
or months to see live site click-thru behavior, you can incorporate
eye tracking earlier in your product cycle to make the design process
more efficient.

For example, if you are working on several iterations of your
homepage in a redesign effort (common scenario for most internet
companies!), knowing which design better captures visual attention
and how it is distributed across the page elements - is a critical
input to iterative design work. And
as we all know, sometimes it's much harder (or impossible) to change
things later, so there's a huge advantage to obtaining this data as
early as possible in the product design cycle and not waiting until
later...

disclaimer - yes, we provide eye tracking!

http://www.customerexperiencelabs.com/services/eye-tracking-lab/


18 Aug 2009 - 4:29pm
ritchielee
2009

I'm starting to see that both camps are not open to persuasion, and
I'll admit I'm still ridiculously far from accepting any worthwhile
ROI.

@jay. We know users cannot verbalise their eye movements; and we know
they scan everywhere at break-neck speed looking for something to
click. We can design for that without any eye tracking
interpretations.

A statement like:

'Rather than waiting many weeks or months to see live site
click-thru behavior, you can incorporate eye tracking earlier in your
product cycle to make the design process more efficient.'

- really is overkill for: Make sure your navigation is obvious.

Following a design lifecycle with observation, interview, heuristics
and iterative evaluation is all that is necessary to uncover any
issues.

As Darth said: 'Search your feelings... you know it to be true.'


19 Aug 2009 - 9:17am
Guy Redwood
2009

What sort of ROI are you wanting? A 60% uplift in sales because we
understood how users were subconsciously making decisions? 22% uplift
in sales because we saw the relationship between the use of language
and the task? All 'stuff' that think-aloud wouldn't show you.

The way I see it, the world falls into 4 camps.

Camp 1
Those that hate eye tracking because they have only experienced poor
research.

Camp 2
Those that view eye tracking as a threat to their career and just
follow and bleat negative comments with no reference.

Camp 3
Those that use eye tracking to occasionally supplement research and
publish heatmaps and inferences, to fuel the paranoia of camps 1 and
2.

Camp 4
Those that use it as a core tool for user experience research and
possibly don't worry too much about the flat-earthers that are
missing a huge opportunity to get inside the users' heads.

Which are you?


20 Aug 2009 - 3:30pm
Jared M. Spool
2003

On Aug 19, 2009, at 7:17 AM, Guy Redwood wrote:

> Camp 4
> Those that use it as a core tool for user experience research and
> possibly don't worry too much about the flat-earthers that are
> missing a huge opportunity to get inside the users' heads.
>
> Which are you?

How about Camp 5?

Those who have used it for going on 15+ years, since the early days
of the first systems, conducted much of the ground-breaking research
that formed the basis of what we know about how people interact with
systems, and after hundreds of sessions have thoughtfully decided the
equipment doesn't offer any real added value over other established
practices.

That's where I am.

This is not about being a flat-earther. This is about actual,
substantive, useful value.

If y'all want to buy your toys and play with them, please feel free to
do so to your heart's content. I have no problem with that. In fact,
I encourage it. Play time is important.

However, let's keep clear on what the actual data from eye tracking
tells us. It can't tell us what the user sees. It can't tell us what
the user doesn't see. It only tells us what they gaze at, which from
my experience of working with the technology, isn't really that useful.

There is no huge opportunity "to get into the users' head." That's a
myth propagated by people who spent a lot of money on equipment that
doesn't do anything but confirm whatever version of the world they
want it to confirm.

Remember, if you torture any data hard enough, it will confess to
anything you want. Eye tracking is the waterboarding of usability data.

Feel free to dismiss my experience if you want, but that's where I'm at.

Jared

20 Aug 2009 - 3:41pm
Elizabeth Buie
2004

At 4:30 PM -0400 8/20/09, Jared Spool wrote:

>However, let's keep clear on what the actual data from eye tracking tells us. It can't tell us what the user sees. It can't tell us what the user doesn't see. It only tells us what they gaze at, which from my experience of working with the technology, isn't really that useful.

Except if you're studying driver behavior while texting or something. I saw a
video today in which they described the use of eye tracking to learn just how
long drivers trying to text (in simulated conditions) took their eyes off the
road.

I know that's not what you're talking about Jared, so I'm not really arguing
against you. I'm just pointing out that there are some cases where it *is*
helpful -- even necessary -- to know where people are looking, and for how
long. Or, more specifically, where they are NOT looking.

Elizabeth
--
Elizabeth Buie
Luminanze Consulting, LLC
www.luminanze.com
@ebuie

20 Aug 2009 - 3:55pm
Sharon Greenfield5
2008

Good point, both of you.
Really, what it comes down to is understanding the
difference between seeing and looking.
And knowing which data you are attempting to gather, will probably
help you decide what methodology to use.

On Aug 20, 2009, at 1:41 PM, Elizabeth Buie wrote:

> At 4:30 PM -0400 8/20/09, Jared Spool wrote:
>
>> However, let's keep clear on what the actual data from eye tracking
>> tells us. It can't tell us what the user sees. It can't tell us
>> what the user doesn't see. It only tells us what they gaze at,
>> which from my experience of working with the technology, isn't
>> really that useful.
>
> Except if you're studying driver behavior while texting or
> something. I saw a
> video today in which they described the use of eye tracking to learn
> just how
> long drivers trying to text (in simulated conditions) took their
> eyes off the
> road.
>
> I know that's not what you're talking about Jared, so I'm not really
> arguing
> against you. I'm just pointing out that there are some cases where
> it *is*
> helpful -- even necessary -- to know where people are looking, and
> for how
> long. Or, more specifically, where they are NOT looking.
>
> Elizabeth
> --
> Elizabeth Buie
> Luminanze Consulting, LLC
> www.luminanze.com
> @ebuie

20 Aug 2009 - 4:04pm
Andrei Herasimchuk
2004

On Aug 20, 2009, at 1:41 PM, Elizabeth Buie wrote:

> I know that's not what you're talking about Jared, so I'm not really
> arguing
> against you. I'm just pointing out that there are some cases where
> it *is*
> helpful -- even necessary -- to know where people are looking, and
> for how
> long. Or, more specifically, where they are NOT looking.

It doesn't take expensive eye-tracking devices to determine that if
someone is texting while driving, they are probably a good candidate
for a Darwin Award.

To toss this little nugget into the mix as some valid if minor
counterpoint to Jared's stated opposition to eye-tracking is really a
disservice to how much Jared actually knows about this topic, and how
much experience and expertise he has in research and technology.

--
Andrei Herasimchuk

Chief Design Officer, Involution Studios
innovating the digital world

e. andrei at involutionstudios.com
c. +1 408 306 6422

20 Aug 2009 - 4:13pm
Sharon Greenfield5
2008

And I think that calling a willing research participant in a simulated
environment a Darwin Award candidate is a disservice to all those
who take part in our research!

On Aug 20, 2009, at 2:04 PM, Andrei Herasimchuk wrote:
>
> It doesn't take expensive eye-tracking devices to determine that if
> someone is texting while driving, they are probably a good candidate
> for a Darwin Award.
>
> To toss this little nugget into the mix as some valid if minor
> counterpoint to Jared's stated opposition to eye-tracking is really
> a disservice to how much Jared actually knows about this topic, and
> how much experience and expertise he has in research and technology.


20 Aug 2009 - 4:17pm
Jared M. Spool
2003

On Aug 20, 2009, at 4:41 PM, Elizabeth Buie wrote:

> At 4:30 PM -0400 8/20/09, Jared Spool wrote:
>
>> However, let's keep clear on what the actual data from eye tracking
>> tells us. It can't tell us what the user sees. It can't tell us
>> what the user doesn't see. It only tells us what they gaze at,
>> which from my experience of working with the technology, isn't
>> really that useful.
>
> Except if you're studying driver behavior while texting or
> something. I saw a
> video today in which they described the use of eye tracking to learn
> just how
> long drivers trying to text (in simulated conditions) took their
> eyes off the
> road.
>
> I know that's not what you're talking about Jared, so I'm not really
> arguing
> against you. I'm just pointing out that there are some cases where
> it *is*
> helpful -- even necessary -- to know where people are looking, and
> for how
> long. Or, more specifically, where they are NOT looking.

But that's exactly my point. You don't need to do an eye-tracking
study to see this problem.

You can see it by just running this simulation: http://is.gd/2qCrq

Using eye tracking in this research doesn't add any value to any of
the data we already have.

Jared

20 Aug 2009 - 5:04pm
Jesse Zolna
2008

That simulation does not even come close to accurately simulating
driving while texting. I don't want to admit how I know that.
Maybe if you were trying to send a text while competing in a NASCAR
event or something.

In the ET-texting-while-driving case, the data could be used to test
whether design changes incrementally reduce the time drivers are
distracted. It is theoretically possible that several of these
changes may be combined to create a miracle device that is safe to
use for texting while driving. No think-aloud test is going to parse
out those millisecond differences, and an experiment where you compare
(simulated, I hope) accident rates would need such a high N it would
cost WAY more than an eye-tracking device.
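Jesse's point about the required N can be made concrete with the standard normal-approximation sample-size formula for comparing two proportions. The 5% vs 4% simulated accident rates below are purely hypothetical numbers chosen for illustration:

```python
from math import ceil, sqrt
from statistics import NormalDist

def two_proportion_n(p1, p2, alpha=0.05, power=0.80):
    """Per-group N needed to detect a difference between two
    proportions, using the usual normal-approximation formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical: detect a drop in simulated accident rate from 5% to 4%.
# The answer comes out in the thousands of participants per group,
# which is Jesse's point about cost.
print(two_proportion_n(0.05, 0.04))
```

Even with generous assumptions, the per-group N runs into the thousands, whereas a fixation-duration measure from an eye tracker can show an incremental improvement with a far smaller sample.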

jz

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=44684

20 Aug 2009 - 8:16pm
Elizabeth Buie
2004

At 5:17 PM -0400 8/20/09, Jared Spool wrote:

[regarding driver studies]

>Using eye tracking in this research doesn't add any value to any of the data we already have.

You may well be right, and certainly you know a lot more about eye tracking
than I do. Let me say a little more about what I'm thinking, if I may.

As a life-or-death issue, texting while driving may need more extreme measures
than we would ordinarily apply. Not to convince those of us who know research,
but to convince people (e.g., legislators) who are in a position to do something
with the findings. The cell-phone simulation that you posted is useful for
convincing me, as an individual, that I personally cannot do both at the same
time (not that I need convincing; I don't even use the phone while driving).
But there are plenty of people who would not even try the simulation, let alone
be remotely persuaded by it, because they believe in their own invincibility.

I am not actually fully convinced that eye tracking is necessary even in the
study of texting while driving. But I *would* argue that we should consider
it carefully, because the circumstances and issues are special and the risks
are so huge.

I do have a question for you, Jared, to help me understand your point: Are you
saying that we don't need to know how much time people spend with their eyes
off the road while trying to text, or that we can get those data without doing
eye tracking? I do understand what you're saying regarding "regular" usability
testing, and I'm asking for clarification on what you're saying here regarding
studies of texting while driving. Thanks for anything you can add to clear up
this point for me.

Elizabeth
--
Elizabeth Buie
Luminanze Consulting, LLC
www.luminanze.com
@ebuie

20 Aug 2009 - 11:43am
Kate Caldwell
2009

Hi!

I have an SMI system in our facility in downtown Montreal.
I'm very interested in the discussion. The pros and cons of using ET
for usability testing seem pretty well described above.

At the same time, I dislike what I understood as the suggestion that
some practitioners are using ET to con clients. NO methodology or
tool should be offered (honestly) without being clear about its
deliverables, benefits and limitations.

As a firm, we bear the cost of purchasing the equipment and the
learning curve in order to be able to offer ET to clients. I don't
see where this costs our clients unnecessarily and I will NEVER
impose the use of ET on a client project just because it cost me to
buy it.

My job is very clearly to design and execute research that uses the
best mix of methodologies that will really serve my clients' needs.
ET is one of them.

I also agree that expertise in the area of ET needs to be developed
and that best practices can be shared. Feel free to get in touch
with me by e-mail or by phone to discuss.

Have a great day!

Kate

kcaldwell at ux-research.com
1 514 502-5862


21 Aug 2009 - 8:37am
Jared M. Spool
2003

On Aug 20, 2009, at 9:16 PM, Elizabeth Buie wrote:

> I do have a question for you, Jared, to help me understand your
> point: Are you
> saying that we don't need to know how much time people spend with
> their eyes
> off the road while trying to text, or that we can get those data
> without doing
> eye tracking? I do understand what you're saying regarding
> "regular" usability
> testing, and I'm asking for clarification on what you're saying here
> regarding
> studies of texting while driving. Thanks for anything you can add
> to clear up
> this point for me.

As I've stated before, eye tracking is a great research tool. If your
goal is to study human physiology and cognitive psychology, looking
for links between physiometric results and their cognitive
counterparts, I think the eye tracking systems of today are invaluable.

I also think that eye tracking is an interesting input and control
device. It's certainly as interesting as the technology embedded in
Microsoft's Big Ass Table, er, Surface device. There's a lot we could
be doing with this, especially in the area of assistive devices.

But I don't think eye tracking is useful in a production design
process, where the goal of user research is to inform decisions
about how to improve a specific product. We don't know enough about
how to translate the raw data emitted from the eye tracking device
into the information we need to make those decisions.

(As everyone else who has tried to defend these silly devices has
done, the going thinking is, "well, you can't use eye tracking alone."
I agree. However, my position is that it doesn't add value as an
additive input. We don't need to go down this road again, in my
opinion.)

So, to answer your question, if you're talking about research from a
scientific standpoint, I think the eye tracking equipment is a great
idea. Add it to a quality driving simulator and you can learn a ton.
(A shout out to my friends at George Mason who are doing some kickass
studies in driving simulation.)

If you're talking about designing some sort of product or tool to help
you communicate while driving, I'm not sure I see the value of adding
eye tracking into your research tools. Knowing the exact milliseconds
someone is distracted probably won't help you decide on design
requirements or solutions.

Jared

21 Aug 2009 - 8:40am
Jared M. Spool
2003

On Aug 20, 2009, at 9:43 AM, Kate Caldwell wrote:

> I have an SMI system in our facility in downtown Montreal.
> I'm very interested in the discussion. The pros and cons of using ET
> for usability testing seem pretty well described above.
>
> At the same time, I dislike what I understood as the suggestion that
> some practitioners are using ET to con clients. NO methodology or
> tool should be offered (honestly) without being clear about its
> deliverables, benefits and limitations.

Kate,

I agree. Given that I'm the one making the suggestion, (and I think it
was more of an accusation than a suggestion,) I'd like to say that I
also think that we need to be honest about what we do, especially to
ourselves.

I'd be interested in hearing the disclaimers you give your clients
before presenting inferences from eye tracking data.

Jared

21 Aug 2009 - 8:59am
Elizabeth Buie
2004

At 9:37 AM -0400 8/21/09, Jared Spool wrote:

Thanks for the clear statement, Jared.

>So, to answer your question, if you're talking about research from a scientific standpoint, I think the eye tracking equipment is a great idea. Add it to a quality driving simulator and you can learn a ton.

Yes, this is exactly what I'm talking about. I'm sorry if it wasn't clear in my first post on this topic.

>If you're talking about designing some sort of product or tool to help you communicate while driving,

Nope. In fact, I am not at all convinced that that is even possible: It's a problem of the attention required by the communication, not an issue of the means used.

Elizabeth
--
Elizabeth Buie
Luminanze Consulting, LLC
tel: +1.301.943.4168
www.luminanze.com
@ebuie

21 Aug 2009 - 9:04am
Jared M. Spool
2003

On Aug 20, 2009, at 5:04 PM, Andrei Herasimchuk wrote:

> To toss this little nugget into the mix as some valid if minor
> counterpoint to Jared's stated opposition to eye-tracking is really
> a disservice to how much Jared actually knows about this topic, and
> how much experience and expertise he has in research and technology.

Apparently, I've hit a nerve. :)

I knew my views on eye tracking would piss some people off. As I said
earlier, I'm good with that.

Jared

21 Aug 2009 - 9:03am
Jared M. Spool
2003

On Aug 21, 2009, at 9:59 AM, Elizabeth Buie wrote:

> Thanks for the clear statement, Jared.
>
>
>> So, to answer your question, if you're talking about research from
>> a scientific standpoint, I think the eye tracking equipment is a
>> great idea. Add it to a quality driving simulator and you can learn
>> a ton.
>
> Yes, this is exactly what I'm talking about. I'm sorry if it wasn't
> clear in my first post on this topic.
>
>
>> If you're talking about designing some sort of product or tool to
>> help you communicate while driving,
>
> Nope. In fact, I am not at all convinced that that is even
> possible: It's a problem of the attention required by the
> communication, not an issue of the means used.

To (possibly unnecessarily) clarify further:

I have nothing against the hardware or its use.

My issue has to do with the claims we make about what we can "learn"
by using it in specific contexts, particularly when identifying issues
about a screen's design.

Jared

21 Aug 2009 - 9:32am
Elizabeth Buie
2004

At 10:04 AM -0400 8/21/09, Jared Spool wrote:

>Apparently, I've hit a nerve. :)

You stole my line. :-)

Elizabeth
--
Elizabeth Buie
Luminanze Consulting, LLC
tel: +1.301.943.4168
www.luminanze.com
@ebuie

21 Aug 2009 - 11:14am
Kate Caldwell
2009

Hi Jared,

How are you? It didn't seem you were alone in the "accusing" (your word;-)) camp.

I ALWAYS explain to clients that:

- ET does not equal measuring "seeing" (seeing is a cognitive act); what we're measuring is the CORRELATION between "seeing" and point-of-regard fixations and saccades.

- ET measures foveal point of regard and NOT peripheral vision which is ALSO used by people to gather information about the stimulus whether it's a screen, a room or anything else, so yes, you can "see" things off fovea (whether they're actually there or not is another question;-))

- Calibration quality in ET is key if we are to reduce error margins to acceptable levels - error margins basically translate into a drifting of correspondences across the X and Y coords. We need to take this into account when defining Areas of Interest for analysis.

- ET sampling rate is another limitation - different machines have different rates, so yes there can be missed data.

- Look out for how methodology can change behaviour (the "think aloud" vs "silent task" issue)

- ET results should be addressed ONLY within the scope and context of the tasks that were given to the respondents, i.e. don't use results from one task to imply something else.

- Usual caveats about sample sizes (qual vs quant) and statistical projection
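The calibration-drift caveat above has a practical consequence for analysis: a fixation near an AOI boundary cannot be confidently assigned. A minimal sketch of that idea, with a hypothetical margin value and AOI coordinates (this is an illustration, not how any particular vendor's analysis software works):

```python
def classify_fixation(fx, fy, aoi, margin_px):
    """Classify a fixation against an Area of Interest, allowing for
    calibration drift. aoi = (left, top, right, bottom) in pixels."""
    left, top, right, bottom = aoi
    # Strict test: fixation is inside the AOI shrunk by the drift margin.
    inside_strict = (left + margin_px <= fx <= right - margin_px and
                     top + margin_px <= fy <= bottom - margin_px)
    # Loose test: fixation is inside the AOI grown by the drift margin.
    inside_loose = (left - margin_px <= fx <= right + margin_px and
                    top - margin_px <= fy <= bottom + margin_px)
    if inside_strict:
        return "hit"
    if inside_loose:
        return "ambiguous"  # within the drift margin of the AOI edge
    return "miss"

# Hypothetical numbers: ~1 degree of calibration error at a typical
# viewing distance is on the order of 40 px on a common desktop screen.
print(classify_fixation(510, 310, (500, 300, 700, 400), 40))
```

Counting the "ambiguous" band separately, rather than folding it into hits or misses, is one way to keep the error margin visible in the deliverable.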

Where I'm finding ET really interesting is with larger sample sizes.

We're looking right now at examples coming from the 100-person study about online surveys (a whole 'nother controversy;-)) we did earlier this year. What's interesting about that is we have ET data AND survey answers - which themselves imply that respondents "read" questions and "saw" labels, because they selected items and input text answers too. Seeing how these line up - or not - is really providing some interesting learnings.

Have a great day!

Kate
kcaldwell at ux-research.com
+1 514 502-5862


21 Aug 2009 - 3:50pm
Todd Warfel
2003

Oh, I'd love to know this.

On Aug 21, 2009, at 9:40 AM, Jared Spool wrote:

> I'd be interested in hearing the disclaimers you give your clients
> before presenting inferences from eye tracking data.

Cheers!

Todd Zaki Warfel
Principal Design Researcher
Messagefirst | Designing Information. Beautifully.
----------------------------------
Contact Info
Voice: (215) 825-7423
Email: todd at messagefirst.com
AIM: twarfel at mac.com
Blog: http://toddwarfel.com
Twitter: zakiwarfel
----------------------------------
In theory, theory and practice are the same.
In practice, they are not.

21 Aug 2009 - 3:55pm
Todd Warfel
2003

On Aug 21, 2009, at 12:14 PM, Kate Caldwell wrote:

> I ALWAYS explain to clients that[...]

Well, based on these disclaimers, I really don't see any value in ET.
Instead, it leaves me wondering why I should use it at all.

I won't claim to be an ET expert, but I have used it in the past. I've
never really been a big fan, as I think the leap from what is gathered
to inferences that are made leaves a pretty large gap. Okay, HUGE gap,
actually. It's one of those solutions looking for a problem in my
book. Yeah, it looks cool, but as a researcher, I just don't see good
quality research data coming out of it.

Cheers!

Todd Zaki Warfel
Principal Design Researcher
Messagefirst | Designing Information. Beautifully.
----------------------------------
Contact Info
Voice: (215) 825-7423
Email: todd at messagefirst.com
AIM: twarfel at mac.com
Blog: http://toddwarfel.com
Twitter: zakiwarfel
----------------------------------
In theory, theory and practice are the same.
In practice, they are not.

21 Aug 2009 - 5:20pm
Kate Caldwell
2009

Hi Todd!

Interesting answer;-) Was there a caveat in particular that spoke to you more? Or maybe it was the imagined magnitude of possible errors?

I don't know of a single methodology that doesn't have limitations so I feel quite at ease pointing them out when it comes to ET - as a researcher you also need to know what they are so you take them into account in your analysis.

The thing to determine is: given the limitations and the benefits of ET (also discussed above), to what research goals can it contribute? There have been a few suggestions above.

The other big issue that we keep running into with ET discussions is the cost of the equipment and the learning curve. It does preclude the "everyone can/should do it" approach that seems to be promoted by lots of folks.

I actually don't think that "everyone can" (although everyone can learn to) design research or facilitate a test or conduct a home visit or do observational fieldwork or write a screener or conduct analysis but that will be another great conversation to have on IXDA;-) Survey-monkey users and home-based recruiters unite!

I especially liked the idea Jared mentioned of assistive technologies and new ET-based forms of interaction (I read there's currently an ET system out that folks with reduced motor skills can use for gaming and Second Life, will try to find it and post back.)

Great discussion! Hopefully this is helping Kristen.

Have a great evening!

Kate


21 Aug 2009 - 8:22pm
Nickgould
2009

Issues aside, this is an amazing discussion! Great points being made
on both sides.

So, for the hopper, a statement and a question for @jmspool.

First, the suggestion that Jared's position on eyetracking is a
result of his anxiety about what the technology will mean for his
methodologies / reputation / business is not sensible. Jared is a
longtime recognized leader in the usability field and could just as
easily embrace eyetracking as reject it with no particular impact on
his "career." No, I believe that Jared's opinions are based on the
merits of the technology (or lack thereof) as he perceives them. I
won't say the same about the barnacles who cling to his position to
advance their own credibility - but that's another discussion.
However, I can't quite make out why eyetracking proponents are the
specific target of affirmative attacks by you, Jared. You seem
unwilling to admit the possibility that those who find value in the
technology are anything but thieves and charlatans (or children
playing with toys). Seems that, given your professional
impermeability relating to this issue, you could just leave well
enough alone; give your opinion when asked but otherwise respect the
right of others to run their businesses as they see fit. Anyway...

My question for you, Jared: Do you place NNG / Jakob Nielsen among
the phonies? I understand that they use eyetracking quite regularly
and are about to release a book about it.

con molto rispetto

Nick Gould
CEO
Catalyst Group

