Model-based eye-tracking simulation

6 Jun 2011 - 5:16am
jaern

Hi everyone,

I stumbled across this website here: eyequant.com

They claim to offer eye-tracking results without doing actual eye-tracking. Based on real eye-tracking studies, they have supposedly developed a model of human attention that can predict the attentional effect of a website and its parts. I haven't tried it, but apparently you upload a screenshot of your website and it gives you back a heatmap and some other visualizations of the results.

Now, I am wondering if anyone has heard of this or even used it and can comment on its validity. Personally, even putting concerns about eye tracking in general aside, this seems rather dubious to me. In an article (German, but just scroll down for the screenshots) I saw a heatmap that showed strong emphasis on banner ads in the right column, kind of contrary to what real eye tracking has shown about banner blindness.

Any thoughts?

/Jan

Comments

15 Jun 2011 - 2:47pm
agabojko

You posed a very good question, Jan. "Participant-free eye tracking" has been around for a while. EyeQuant, Feng GUI, and Attention Wizard all provide a similar service: you can upload an image (e.g., a picture of a web page) and obtain a visualization (e.g., a heatmap) showing a computer simulation of where people would look in the first five seconds of being exposed to the image. These companies claim a 75-90% correlation with real eye-tracking data, but I have found no research supporting their claims.

Because I was curious about the accuracy of the simulations, I submitted the homepage of an e-commerce website to EyeQuant, Feng GUI, and Attention Wizard and obtained three heatmaps. I then compared these heatmaps to the initial five-second gaze activity from a study with 21 participants tracked with a Tobii T60 eye tracker.

First, let me just say that I'm not a fan of comparing heatmaps just by looking at them because visual inspection is subjective and prone to error. Also, different settings can produce very different visualizations, and you cannot ensure equivalent settings between a real heatmap and a simulated one.
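A more objective alternative is to compare the underlying attention maps numerically; the Pearson correlation coefficient (often called CC in the saliency literature) is a standard metric for this. Here is a minimal Python sketch, assuming both heatmaps can be exported as same-sized grayscale intensity arrays; the vendors' own comparison methods are not public, so this is purely illustrative:

```python
# Minimal sketch: numerical comparison of a real and a simulated attention
# map, as an alternative to subjective visual inspection. Assumes both maps
# cover the same page and have been resized to the same dimensions.
import numpy as np

def heatmap_correlation(real, simulated):
    """Pearson correlation (CC) between two attention maps."""
    a = np.asarray(real, dtype=float).ravel()
    b = np.asarray(simulated, dtype=float).ravel()
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Illustration with random arrays standing in for two 768x1024 maps;
# unrelated maps should score near 0, identical maps exactly 1.
rng = np.random.default_rng(0)
real_map = rng.random((768, 1024))
simulated_map = rng.random((768, 1024))
print(f"CC = {heatmap_correlation(real_map, simulated_map):+.3f}")
```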

With that in mind, the three simulated heatmaps were very similar to each other but looked very different from the one obtained with real participants. For example, the simulations predicted a lot of attention on images (including advertising), whereas the study participants barely looked at those elements. Participants primarily focused on the navigation and search but those areas were almost completely blank on the simulations.

The simulated heatmaps also showed a fair amount of attention on areas below the page fold, but the study participants never even scrolled. In addition to a heatmap, Feng GUI produced a gaze plot indicating the sequence in which users would scan the areas of the page. The first element predicted to be looked at was a small image next to the page footer, well below the fold.

I believe the simulations have very limited applicability. They make predictions mostly based on knowledge of the bottom-up mechanisms that affect our attention, failing to take into account the top-down processes that play a huge role, even during the first few seconds. The simulations may work better in cases where there is no scrolling on the page, users are completely unfamiliar with the website, and they have no task or goal in mind. But how often does that happen?
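To make the bottom-up/top-down distinction concrete: a purely bottom-up model responds only to visual conspicuity in the pixels, such as local contrast, and knows nothing about the viewer's task or familiarity with the site. The sketch below computes a crude center-surround contrast map, loosely in the spirit of Itti and Koch's classic saliency model; the commercial tools' models are proprietary and certainly more elaborate, but anything in this family will happily highlight a bright banner ad:

```python
# Crude sketch of a purely bottom-up saliency map: center-surround intensity
# contrast, loosely in the spirit of Itti & Koch (1998). The commercial
# simulators use many more features; this only illustrates why such models
# favor visually conspicuous elements regardless of the viewer's goal.
import numpy as np
from scipy.ndimage import gaussian_filter

def bottom_up_saliency(gray):
    """Center-surround contrast of a grayscale image, normalized to [0, 1]."""
    img = np.asarray(gray, dtype=float)
    center = gaussian_filter(img, sigma=2)     # fine-scale local structure
    surround = gaussian_filter(img, sigma=16)  # coarse-scale neighborhood
    sal = np.abs(center - surround)            # conspicuity = local contrast
    return (sal - sal.min()) / (np.ptp(sal) + 1e-9)
```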

Aga Bojko

16 Jun 2011 - 4:05am
Alan James Salmoni

An excellent and detailed reply, thank you. I'm guessing that the model they use has two problems:

1) The model used to predict visual focus is not detailed enough (it doesn't account for all the features that affect visual focus), or it's too detailed.

2) It is entirely context-free. If someone comes to a page with a specific goal in mind (which describes most of the people we're interested in), that goal will strongly influence what they do.

I'm not a fan of eye tracking, and I can see the same problems occurring when MRI scanners become cheap 'n' cheerful. Because MRI is affordable on a university departmental basis now, there are slews of studies that 'revolutionise' science while ignoring prior work in the field. The classic example is 'the location of working memory', which gets written about in Nature every five years or so and totally ignores the existing psychological literature suggesting that working memory is more like an emergent aspect of perceptual processes. If MRIs become affordable to individuals, we will get the same problems we see now with eye tracking (i.e., way too much inference, incorrect assumptions about cognitive attention, etc.).

Best,

Alan


16 Jun 2011 - 12:05pm
Rami Tabbah

I think the main difference is the context. Users will look for the items they want, and that will drive their eyes. This information can be obtained from card sorting and other methods. Now, what if we used these tools without context, by filling the page with Latin placeholder text? I think the results would be closer. I also think we can use them to determine whether a design draws the eye to the area where we plan to put the most important items, or to discover where a design draws the eye and then put the important items there.

For example, if the wireframes define a left nav, and then, when the graphic design is tested, the tool does not show the left nav as prominent, we know that the images and other elements take over or that the contrast is not high enough. The tests you mentioned added human intelligence, which resulted in users not paying attention to the image in their way. However, if the image were smaller or not there at all, and the left nav had better contrast, we would know that it would be easier for users to find it.

Personally, I would use these tools to validate the balance of the graphic elements and make sure the important elements will not require the user's brain to intervene and move the eye to another area. So I would go with a mix of context-independent tools and proven context-dependent methods such as card sorting. When it comes to how users read text and so on, it is better to rely on proven statistical research than on a few users in an eye-tracking test, and to save the big bucks that eye-tracking tests burn. Bottom line: we can get better results without eye-tracking tests if the right methods are used.

Rami Tabbah


25 Jun 2011 - 9:09am
jaern

Thanks a lot for your detailed reply, Aga. I saw that you wrote a blog post about this topic, including screenshots: http://www.rosenfeldmedia.com/books/eye-tracking/blog/participant-free_eye_tracking/

For anyone else interested in this, I recommend reading the post and the discussion there; it includes comments from some of the makers of these tools.
