Usability Testing Vs. Solicited User Feedback for Web Sites/Apps

11 Aug 2010 - 12:49pm
bojcampbell

The topic of soliciting mass user feedback regarding the usability of web sites and applications continues to pop up for me. Primary stakeholders ask the question, "Why don't we just send a link to a survey in our newsletter to all our registered users?" or, "Let's add a feedback link or a thumbs up/down clicker to our new beta releases."

My question is whether this type of polling is valuable, or whether the information and the sample population are just too broad and unreliable. The cost of reviewing the answers and organizing the data doesn't seem especially high, particularly if you use a survey application. My general opinion is that most usability issues can be found by testing a very small number of users, and that the largest issues are already well known to most people who touch the product, especially client account managers.
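For what it's worth, here's a rough sketch of why a very small number of users goes a long way, using the often-quoted problem-discovery model (each participant is assumed to uncover a given problem independently with probability p, roughly 0.31 in the widely cited Nielsen and Landauer data). The numbers are illustrative, not from our product:

    # Expected share of usability problems found with n test participants,
    # assuming each participant independently finds a given problem with
    # probability p (~0.31 is the often-quoted average).
    def problems_found(n, p=0.31):
        return 1 - (1 - p) ** n

    for n in (1, 3, 5, 8, 15):
        print(f"{n:2d} users -> ~{problems_found(n):.0%} of problems found")

Five users already lands around 85%, which is where the usual "test with five users" advice comes from.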

We already have a program for soliciting product feedback at the support level. I'd welcome any feedback about the value of soliciting usability information from the general user population directly on new pages and functions of the site.

Comments

11 Aug 2010 - 2:03pm
dantemurphy

It doesn't hurt to increase the number of feedback channels, especially if you have a dedicated or fanatical user base that will endure all manner of hardships just to use your product.  Still, you won't get much feedback on abandonment or passive use through a self-directed feedback mechanism.  Sometimes you have to get into the same environment as your user (whether physically or virtually) and observe their struggles in real time to achieve real insight.

Dante

12 Aug 2010 - 12:05pm
Paul J. Sherman

Just a contrarian point of view to share...

Sometimes it can actually hurt to increase the number of feedback channels your organization is listening to. Why? Because all channels are not created equal. Some streams of information can make a tiny problem too salient and blow it out of proportion. This can lead the business to make wrong or inefficient resource allocation decisions.

Case in point: my wife does voice and speech user interface design. One day a large client whose IVR she had designed and usability tested came to her in a semi-panic: "Oh noes, we received EIGHT complaints about our new IVR menu!" She calmly inquired how many calls the IVR received that month. The answer? About one million. Then she asked how the zero-out ("dropout") rate had changed since the new system was deployed. Answer: it had decreased by double digits. I think call handle time also decreased because callers who did need agents were navigating the IVR more quickly.

In this case, the company overvalued the eight complaints, and undervalued the actual performance data. So more is not necessarily better. More is sometimes just more. As in more noise.
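For scale, a quick back-of-the-envelope check using only the figures above:

    # Eight complaints against roughly one million calls that month
    complaints, calls = 8, 1_000_000
    rate = complaints / calls
    print(f"{complaints} complaints out of {calls:,} calls = {rate:.4%}")
    # -> 8 complaints out of 1,000,000 calls = 0.0008%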

Regards, Paul

Paul Sherman | ShermanUX E: paul@shermanux.com Ph: +1.512.917.1942


11 Aug 2010 - 2:05pm
mcaskey

Yeah, it comes up a lot for me too.

There are so many ways to get feedback, and for so many different reasons.

I find continuous feedback loops useful. I use a simple (but proven) feedback form to get ongoing ratings and reviews for quant and qual information.

I will also conduct surveys to get opinions on existing things.

Not sure how this would be implemented outside the web, but for online experiences I use A/B and multivariate testing to get feedback on things I'm able to try in the wild. GWO and Omniture have great tools that let you pose questions to your users and have them answer with their actions.
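To make "answer with their actions" concrete, here's a minimal sketch of the statistics behind reading an A/B result: a plain two-proportion z-test on made-up conversion counts. This isn't output from GWO or Omniture, just the underlying arithmetic.

    import math

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Two-sided z-test for the difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return p_a, p_b, z, p_value

    # Hypothetical counts: 10,000 visitors per variant, 520 vs. 580 sign-ups
    p_a, p_b, z, p_value = two_proportion_z_test(520, 10_000, 580, 10_000)
    print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")

With these made-up counts the lift isn't quite significant at the usual 5% level, which is exactly the kind of call the tools surface for you.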

Mike Caskey Denver, Colorado



17 Aug 2010 - 10:58am
bojcampbell

Thank you. I'm concluding that one should embrace any feedback that can be solicited, but be very careful about how it is interpreted. Tall order.
