Asking questions to participants in a positive or negative way?

21 May 2008 - 10:27am
6 years ago
10 replies
315 reads
Caroline Jarrett
2007

From: "chiwah liu" <chiwah.liu at gmail.com>
:
: I don't know if I am right, but for me, the "neutral" option depends on the
: number of users :
: - If we don't have enough user to reach a statistical significance (let's
: say less than 100 users) for our survey, we should add a "neutral" option.
: The users who don't have any idea can bias the survey.
:
: - Now if we have enough user to reach a statistical significance (200-300+
: users), we can force them to choose because they should give a random
: answer. That mean if my scale is between 1 and 4, I should have the same
: number of users that answer 2 than those who answer 3. If this case happens,
: then I can suppose that users don't really have idea about the answer.
: Otherwise, they might have preferences and it shouldn't be biased because it
: is be statistically significant.
:
:
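For readers who want to see concretely what the check described above would involve, here is a minimal Python sketch; the counts are invented, and the scipy library is assumed to be available:

    # Hypothetical illustration of the check described in the quoted message:
    # on a forced 1-4 scale, test whether the "2" and "3" answers split evenly,
    # which the poster suggests would indicate random answering around the
    # missing midpoint. The counts below are made up.
    from scipy.stats import binomtest

    count_2 = 118   # invented number of respondents answering "2"
    count_3 = 142   # invented number of respondents answering "3"

    result = binomtest(count_2, n=count_2 + count_3, p=0.5)
    print(f"p-value = {result.pvalue:.3f}")

    # Caveat: an even split can come from genuinely divided opinions as well
    # as from random answering, so the test cannot tell "no opinion" apart
    # from "split opinions" - one reason the reply below argues against
    # forcing a choice.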
No. I think the phrase 'force them to choose' shows exactly why this is a bad idea.

You ought to allow users to have the opinions that they have - even if those opinions include 'don't know' or 'don't care' (or
both).

The answer options you offer should depend solely on the answers that your users want to give - not upon how many users there are.

If you don't know what answers your users want to give, then interview them to find out before running your survey. And by the way -
you should do that anyway (i.e., interview some users first) if you want anything like good results from your survey.

There's a longer version of my views at:
http://www.usabilitynews.com/news/article1269.asp

Best
Caroline Jarrett
caroline.jarrett at effortmark.co.uk

Comments

21 May 2008 - 8:24pm
Chauncey Wilson
2007

Caroline makes some very good points. Questionnaire design is complex
and there are hundreds of articles debating the use of mid-points, the
meaning of a mid-point, and other topics like how the order of
questions influences answers. For many surveys, "Don't know", "Don't
care", or "I don't want to answer" (say, for salary or other personal
information) are all options that should be considered. If you are
writing a questionnaire for a survey on a topic that you don't know
well, doing some research beforehand to create the response categories
is quite important, so that you don't end up with a lot of answers in
your "Other" response category.
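As a rough illustration of this point (the responses and labels below are invented, and the pandas library is assumed), one common approach is to record "Don't know" and similar answers as their own categories and keep them out of any scale averages:

    # Sketch: keep "Don't know" / "Prefer not to answer" out of the numeric
    # scale when summarising a Likert item; report them separately instead.
    import pandas as pd

    # Invented responses to a single 1-5 Likert item.
    responses = pd.Series([4, 5, "Don't know", 3, "Prefer not to answer",
                           2, 4, "Don't know", 5, 1])

    numeric = pd.to_numeric(responses, errors="coerce")  # non-scale answers -> NaN

    print("Mean of the scale answers only:", round(numeric.mean(), 2))
    print("Breakdown of the non-scale answers:")
    print(responses[numeric.isna()].value_counts())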

There are several excellent books that delve into the issues of bias
and the many design issues that you need to consider. I would
recommend:

Robson, C. (2002). Real-world research (Second edition). Malden, MA:
Blackwell Publishing. This book describes many methods for gathering
data including an excellent section on scale and questionnaire design.
The book has a short but excellent description of, for example, how to
develop Likert items.

Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about
answers: The application of cognitive processes to survey methodology.
San Francisco, CA: Jossey-Bass. Thinking About Answers explores
cognitive issues associated with survey methods. These issues include:
context effects in surveys, order effects, event dating, counting and
estimation, and autobiographical memory. The final chapter summarizes
implications of cognitive research for survey design, administration,
and interpretation.

Dillman, D. A. (2007). Mail and internet surveys: The tailored design
method 2007 update with new internet, visual, and mixed-mode guide.
New York, NY: Wiley. This is the third book by Dillman, who has
written the most general set of survey guidelines.

Aiken, L. R. (2002). Attitudes and related psychosocial constructs:
Theories, assessment, and research. Thousand Oaks, CA: Sage
Publications. There are many books in social psychology that get into
scale development. It is worth getting a book like Aiken's (or a
similar one) to understand the issues with Likert scaling, semantic
differential scales, odd versus even scales, and whether to label each
scale point or only the end points.

Chauncey


22 May 2008 - 6:10am
Chauncey Wilson
2007

I would consider Dillman to be the best overall set of guidelines for
survey and questionnaire design and implementation. Dillman covers the
process of writing cover letters, recruiting respondents, and other
practical issues.

Chauncey


22 May 2008 - 5:47am
Anonymous

2008/5/22 Chauncey Wilson <chauncey.wilson at gmail.com>:


Thank you for the books you recommended. Is there one that is
particularly valuable? (I am not sure I could buy all of them.)

I already have some knowledge of surveys and psychometrics, so I would
prefer a book that goes into detail.

Best,

Chiwah

23 May 2008 - 7:53am
Anonymous

2008/5/22 Chauncey Wilson <chauncey.wilson at gmail.com>:

> I would consider Dillman to be the best overall set of guidelines for
> survey and questionnaire design and implementation. Dillman includes
> the processing of writing cover letters, recruiting respondents, and
> other issues.
>
> Chauncey
>
>
Thank you, I think I am going to buy this book.

Chiwah

23 May 2008 - 12:21pm
Anonymous

> You ought to allow users to have the opinions that they have - even if
> those opinions include 'don't know' or 'don't care' (or
> both).
>
> The answer options you offer should depend solely on the answers that your
> users want to give - not upon how many users there are.
>
> If you don't know what answers your users want to give, then interview them
> to find out before running your survey. And by the way -
> you should do that anyway (i.e., interview some users first) if you want
> anything like good results from your survey.
>

Do you mean that when a user chooses "neutral" for a question, it has a
meaning? And that if most of my users choose "neutral", it means my question
is badly formulated? Should I then, in both cases, interview them to find out
why they chose the "neutral" option?

But in that case, does it mean that I should include, for each question, a
checkbox asking whether they don't care, don't know, or sometimes felt one
way or another?

Best,
Chiwah

25 May 2008 - 9:05am
Caroline Jarrett
2007

Chiwah asked:

: Do you mean that when a user chooses "neutral" for a question, it has a
: meaning? And if most of my users choose "neutral", it means that my question
: is wrongly formulated? Then in both case should I interview them to know why
: they choose the "neutral" option?
:
: But in this case, does that mean that I should include for each question a
: checkbox asking if they don't care, don't know and if they felt sometime one
: aspect or another?
:
Possibly. It is definitely the case that users choose 'neutral' for many reasons other than that they are neutral.

It is also definitely the case that you should interview users on the topics that you want to survey. Surveys aren't a way of
finding out users' opinions. They are a way of finding out how opinions are distributed in a population. If you choose the wrong
opinions to ask them about, you will get poor results.

For example, a classic way to get poor results is to ask users a series of questions on a topic that they don't know about or don't
care about.

But it's not necessarily a good idea to include specific checkboxes for 'don't know' and 'don't care' with _each_ question. It might
be that 'don't know' is a commonly held view for some of your questions, with 'don't care' being rare - but that 'don't care' is
common for other questions, and 'don't know' is rare. It might be that
'do have a view but don't want to give it to you' is a common opinion.
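To make the point about differing rates a little more concrete, here is a minimal sketch (with invented pilot data, assuming the pandas library) of checking, question by question, how often 'don't know', 'don't care', or 'won't say' actually turn up before deciding which options each question needs:

    # Invented pilot responses; in practice these would come from interviews
    # or a pilot questionnaire.
    import pandas as pd

    pilot = pd.DataFrame({
        "q1": ["agree", "don't know", "don't know", "disagree", "don't know"],
        "q2": ["agree", "agree", "don't care", "disagree", "agree"],
        "q3": ["won't say", "agree", "won't say", "agree", "disagree"],
    })

    special = ["don't know", "don't care", "won't say"]
    for question in pilot.columns:
        rates = pilot[question].value_counts(normalize=True)
        print(question, rates.reindex(special).fillna(0).round(2).to_dict())

    # If "don't know" dominates one question and "don't care" another, that
    # argues for tailoring the options per question rather than adding every
    # option everywhere.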

The only way to find out is to interview some users to get a feeling for the types and ranges of opinions that they do have. Then
you construct your questions. Then you test your questionnaire, and interview the test participants about it. By this point you have
a good chance of getting a decent questionnaire put together and that's half the battle of a survey.

(The other half of the battle of a survey is deciding what you want to find out about in the first place, getting a good sampling
strategy, analysing the pilot and actual data, and reporting, and doing something about what you find).

Aside: one classic mistake people make is to think: 'we don't have time to do any face-to-face user research such as usability
testing or field studies - we'll survey them instead'. But in fact, a good survey is at least 10, and sometimes nearer 100, times
harder and more time-consuming than a few field studies or a bit of usability testing.

A second aside: I use the term 'survey' to mean the end-to-end process of gathering user data or opinions using a predetermined set
of questions, including the process of deciding what the questions should be. I use 'questionnaire' to mean the predetermined set of
questions itself. Many survey methodologists use the term 'instrument' instead of 'questionnaire'.

Best,
Caroline Jarrett
caroline.jarrett at effortmark.co.uk
07990 570647

Effortmark Ltd
Usability - Forms - Content

We have moved. New address:
16 Heath Road
Leighton Buzzard
LU7 3AB

26 May 2008 - 5:22am
Anonymous

Caroline said:

> The only way to find out is to interview some users to get a feeling for the
> types and ranges of opinions that they do have. Then
> you construct your questions. Then you test your questionnaire, and
> interview the test participants about it. By this point you have
> a good chance of getting a decent questionnaire put together and that's
> half the battle of a survey.
>

Thank you for your answer. Our marketing department, with whom I am trying
to work, doesn't do any one-on-one user research before creating a
questionnaire. They just ask the client what they want to be measured,
reformulate it, and the questionnaire is done!

For the test, they just give it to us and we have to validate it… which is
not really a test.

Doing one-on-one user research first could be very time-consuming. What
argument could I make to convince both the client and the marketing team
that it is worth the effort?

Best,
Chiwah

26 May 2008 - 6:53am
Caroline Jarrett
2007

Caroline said:

> The only way to find out is to interview some users to get a feeling for the types and ranges of opinions that they do have. Then
> you construct your questions. Then you test your questionnaire, and interview the test participants about it. By this point you
> have
> a good chance of getting a decent questionnaire put together and that's half the battle of a survey.
>
Chiwah replied:

: Thank you for your answer. Our marketing department, with whom I am trying to work, doesn't do any one-on-one user research
before creating a questionnaire. They just ask the client what they want to be measured, reformulate it, and the questionnaire is done!

: For the test, they just give it to us and we have to validate it… which is not really a test.

: Doing one-on-one user research first could be very time-consuming. What argument could I make to convince both the client and the
marketing team that it is worth the effort?

You are where you are. I'd consider incorporating the marketing questionnaire as part of the test. I'd ask the participants to fill
in the questionnaire for me, but get them to explain to me, question by question, what the question meant to them, why they were
picking the answers, whether they felt the question was appropriate, and whether any different questions should have been asked.
Video it all.

Maybe your marketing department is correct, in which case you'll get plenty of good material to flatter them with in the future and
that will all help the working relationship. Maybe they aren't as correct as they hope, in which case you can go back to them and
say: "Your questionnaire was great but we did have these minor difficulties with it here, here and here. Maybe next time we could do
a couple of interviews with users first of all?"

Warning: even if it's true, avoid going back to the marketing department with a message like: "I told you your questionnaire
approach was all wrong and here's the evidence to prove it". That's a recipe for defensiveness, rejection, and all sorts of other
bad stuff.

As for time-consuming: it never ceases to amaze me that I meet such resistance to doing even a couple of informal interviews with
users (say, half a day max) whereas organisations think nothing of sending out 1000 questionnaires just like that. Or even sending
questionnaires to all their users!!! Strange, isn't it?

Caroline Jarrett
caroline.jarrett at effortmark.co.uk

26 May 2008 - 7:19am
Chauncey Wilson
2007

Caroline's suggestion about doing a think-aloud study of the
questionnaire is excellent. I routinely do this as part of the
questionnaire design process. I ask participants to read it aloud and
give me feedback about meaning, bias, instructions, wording,
terminology, and anything else that comes to mind. For items with
unordered response categories (for example, job title), you might find
that you are missing a key item. These think-aloud sessions can be
short, say 15-30 minutes. Though it is a bit harder, you can also do
this over the phone or through remote collaboration software like
GoToMeeting or LiveMeeting, or other systems where you can display a
question and hear the person.

If you are using an electronic survey, you might also get feedback about
navigation, going back, required fields, etc. For example, I recently
reviewed a survey where a single question listed about 10 fields for
address, phone, etc. It turned out that all the fields were required,
even Address 2, which many people would not need to fill out. When the
participant ignored that field and got a warning, it was confusing, so
he eventually typed some junk into Address 2 in order to proceed. The
software allowed you to make individual fields within a single question
required or not, which is good, but the feature is buried.
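As an aside, a per-field 'required' setting of the kind described here could look something like this minimal sketch (the data structure and field names are hypothetical, not the tool Chauncey reviewed):

    # Sketch: "required" is a property of each field within a multi-field
    # question, so optional fields such as Address 2 do not block submission.
    contact_question = {
        "label": "Your contact details",
        "fields": [
            {"name": "address_1", "required": True},
            {"name": "address_2", "required": False},  # optional second line
            {"name": "city", "required": True},
            {"name": "phone", "required": False},
        ],
    }

    def missing_required(answers, question):
        """Return the names of required fields the respondent left blank."""
        return [f["name"] for f in question["fields"]
                if f["required"] and not str(answers.get(f["name"], "")).strip()]

    # The respondent skipped Address 2 and City; only City is flagged.
    print(missing_required({"address_1": "123 Main Street", "phone": ""},
                           contact_question))
    # -> ['city']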

Your test of the questionnaire might reveal missing categories or the
wrong time reference or frequency responses. If you were asking about
a CRM or financial system and your last response option was "I use this
a few times a day", your interview might reveal something like "Wow, I
use this feature 50-100 times a day". If you find out that a number of
respondents make REALLY heavy use of a system, that is critical design
input and may affect features for expert, high-frequency users. A few
times a day is very different from 50-100 times a day.

When you test the questionnaire (with people who are as close as possible
to those you will be sampling), watch for pauses and facial expressions
and ask what people were thinking or what caused "that smile" or "frown".

Having people read through a survey line by line and give you feedback
is a variation on usability testing called the user edit or usability
edit. It is not well known, but it is a powerful way to get feedback
on procedural documentation (and questionnaires). Here are some
references to the user edit method:

Atlas, M. (1981). The user edit: Making manuals easier to use. IEEE
Transactions on Professional Communication, 24(1), 28-29.

Atlas, M. (1998). The user edit revisited, or "if we're so smart, why
ain't we rich?". Journal of Computer Documentation, 22(3), 21-24. New
York, NY: ACM Press.

Schriver, K. A. (1991). Plain language for expert or lay audiences:
Designing text using the user edit (Technical Report No. 46).
Pittsburgh, PA: Carnegie Mellon University, Communications Design
Center.

Soderston, C. (1985). The user edit: A new level. Technical
Communication, 1st Quarter, 16-18.

Chauncey


26 May 2008 - 3:52pm
Anonymous

Caroline said:

>
> As for time-consuming: it never ceases to amaze me that I meet such
> resistance to doing even a couple of informal interviews with
> users (say, half a day max) whereas organisations think nothing of sending
> out 1000 questionnaires just like that. Or even sending
> questionnaires to all their users!!! Strange, isn't it?
>

Hmmm... I work in a web agency, and we are always very short on both time
and money. We almost always choose a quick-and-dirty method for our user
research because our clients never have enough money. They also need to
get the website out very fast, because the advertising campaign is going
to launch within a few months…

For example, if our client wants to test our prototype, we might have
only about one month (or less) to do a whole round of user testing:
survey the users, find them, call them, create scenarios, run tests with
about 20 users and, of course, deliver the results and recommendations,
all in less than one month… because the advertising campaign is coming
soon and we can't fall behind schedule.

So by now I am so used to very quick-and-dirty user research that choosing
the fastest and dirtiest method seems normal, rather than the opposite.
