Validating personas

24 Nov 2009 - 11:25am
16 replies
1682 reads
Angela Colter
2009

I'm hoping to find out if anyone else on the list has done this.

We're currently in the middle of a persona development project. One
of the leaders of the project has expressed a desire to validate the
personas. In other words, conduct a survey of our user base to find
out whether the characteristics illustrated in the personas match our
actual users.

The personas were developed based on field research with about two
dozen customers. I think the goal is to survey a much larger
proportion of our users to make sure the team got it right.

Has anyone surveyed your customers to validate personas? Do you have
any advice on doing so that you'd be willing to share?

Thanks,

Angela Colter
Comcast Interactive Media

Comments

24 Nov 2009 - 9:58pm
zakiwarfel
2004

On Nov 24, 2009, at 8:25 AM, Angela Colter wrote:

> The personas were developed based on field research with about two dozen customers. I think the goal is to survey a much larger proportion of our users to make sure the team got it right.

When crafting personas, we use no fewer than three separate data points:
* Stakeholders—interviews to determine who they think the audience is and what their behaviors are
* Actual customers—ethnography-based field research
* Someone we know—this helps keep us grounded, allows for validation, and provides a direct line of access if any questions ever come up

In lieu of actual customers, we'll conduct contextual interviews with customer support reps and salespeople who have customer touch points. We have used surveys in the past to provide some additional input, but not to validate personas. Surveys don't work as well as interviews for extracting actual behavior data. With fully fleshed-out personas, you might be able to construct a survey to evaluate them, but I'd trust actual interviews of people I know more.

Cheers!

Todd Zaki Warfel
Principal Designer, Messagefirst
Author of Prototyping: a practitioner's guide http://bit.ly/protobk
----------------------------------
Contact Info
Voice: (215) 825-7423
Email: todd at zakiwarfel.com
Blog: zakiwarfel.com
Twitter: @zakiwarfel
----------------------------------
In theory, theory and practice are the same.
In practice, they are not.

24 Nov 2009 - 10:20pm
Stephen Holmes
2009

I had to laugh a little when I first read this; you want to validate
the "make-believe". Can you validate the tooth fairy? (TFIC)

I've always believed that a persona is NOT a metric. You can't and
shouldn't measure or baseline it - it is there to guide you in lieu
of more solid research, which is often missing because of budget or
time constraints; as Todd said, "but I'd trust actual interviews of
people I know more."

People can be measured - personas are a framework to guide you, and
overlaying them with solid metrics may hinder you in using your
personas effectively.

regards

Stephen Holmes


24 Nov 2009 - 11:46pm
livlab
2003

I suspect you might need to ask a different question. It is hard for me
to grasp what "validating personas" means - it could be a concern about
the appropriateness or usefulness of the research that produced the
personas, which I can understand. Other than that, I am not sure what
validation would mean.

If that is the concern, I would first ask what is really bothering the
team. Is it lack of detail? Do the personas seem shallow and
non-actionable? Are the stories describing the personas in context
far-fetched? Those could be symptoms of personas that are not
expressing the reality of the audience you are designing for, and you may
have good instincts about what purpose your personas could or should be
serving. It could indicate that the research focused on the wrong
(aspect of a broad) audience, or that the analysis surfaced less important
attributes than it should have.

If the issue is more abstract - such as "Is designing with these personas
in mind, and for the scenarios of use they are presented in, RELEVANT?" -
it might hint at a lack of definition about the audience you intend to
serve in the first place, something that should have been settled before
the research even started. That could have derailed the research effort
or given the research too narrow or too broad a focus.

For example, if your persona study started out trying to profile a
general/existing customer base instead of trying to represent possible
users of a specific service/product/outcome that fulfills a specific
need, maybe that is a hint that the personas you ended up with just
represent "some" audience, not necessarily the audience you need to be
designing for.

Having said that - and again, it sounds like you need to ask a different
question - if I were to try to "validate" the relevance or
appropriateness of my research effort via personas, the last people I
would ask would be my end users. Personas are a design tool - asking
end users for their opinion of personas is, to my mind, akin to asking
them whether I should run an agile or waterfall development shop. There
is no context or reason for the end user to know or care to have an opinion.

I am trying to read between the lines here, but if by surveying
customers you mean having them self-select which persona they match,
I think that is even more problematic. You have personas, not market
segments. Market segments can be used to group people according to
certain attributes; personas, in my experience, are not a relevant
method for categorizing groups of existing users - market segmentation
is good for that, and that is its reason for existing as an approach.

The best measure of the usefulness and appropriateness of personas, in my
mind, is how well they aid designers in doing their job. If your
designers can express how helpful the personas are and what flaws or gaps
they encounter as they use them, that would be, to me, the best
actionable feedback you could get.


25 Nov 2009 - 1:52am
Audrey Crane
2009

I think what you mean here is that you have already created personas
based on real research, and your "client" wants them validated.

I've had clients ask for that too. Often this is their first foray
into qualitative research; they don't feel comfortable with it and want
some quantitative reassurance. I know of one company that spent 7 *months*
and lots and lots of time and money to "validate" their personas.
More time than it took to create them in the first place. Yikes.

Tamara Adlin includes several methods for validating personas in her
book, including showing the final personas to future research
participants or even the interviewees themselves and seeing if they
resonate with them.

Another method is to distill the personas into a few key traits,
behaviors or characteristics and use them alongside future research.
For example, is this usability research participant very
Danielle-like with some Molly tendencies, or is she more like Bill?
The risk is that people may not understand that the personas aren't
invalid just because not everyone fits exactly and perfectly into one
of them. Used effectively, though, the idea that personas are always
being tested and validated is a powerful one.

And of course you could use those key traits to create a survey,
analyze the findings, etc. etc.
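
To make that concrete, here's a rough sketch (my own illustration, not a
prescribed method) of scoring a research participant against persona trait
profiles. The trait names, scales and numbers are invented; the persona
names are just borrowed from the example above.

    import numpy as np

    TRAITS = ["self_service", "price_sensitivity", "tech_comfort"]  # hypothetical 1-7 scales

    # Trait profiles distilled from each persona (hypothetical values).
    personas = {
        "Danielle": np.array([6, 3, 7]),
        "Molly":    np.array([4, 6, 4]),
        "Bill":     np.array([2, 5, 2]),
    }

    participant = np.array([5, 4, 6])  # ratings assigned during a session

    # Smaller distance = more "like" that persona; someone can easily sit between two.
    for name, profile in personas.items():
        distance = np.linalg.norm(participant - profile)
        print(f"{name}: distance {distance:.2f}")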

BUT... if fundamentally you just have some folks who are
uncomfortable with qualitative research, you might start there. I
usually try to explain that there are many right answers, just like
there are many right ways to sort the change in your pocket, and that
the personas aren't carved in stone: if you later discover a gap or
something unexplained (through traffic data, usability research,
whatever), you can create a new persona or clarify an existing one.
I also try to emphasize that quantitative data is great and very
useful and we should keep collecting it, and that this qualitative
work fills a different need. Just as we don't toss out quantitative
data for not going deeply into users' motivations, we shouldn't toss
qualitative data for having a small sample size (or whatever the beef
is with it). They're complementary.

If you can't address that, if it is the fundamental issue, I wonder
if any amount of validation will be enough.

Good luck! I'd be interested to hear how this effort goes for you.


25 Nov 2009 - 4:44am
Jared M. Spool
2003

On Nov 24, 2009, at 8:25 AM, Angela Colter wrote:

> Has anyone surveyed your customers to validate personas? Do you have
> any advice on doing so that you'd be willing to share?

Hi Angela,

To add to what Livia & Todd have said, which is all right on the money:

I'm wondering, by the way you described your project, if you've
localized your personas to specific functionality. One trap that I see
teams falling into is they try to create personas that describe all
their customers for every possible use of all their offerings. This is
virtually impossible to do well and the end result is that people
start to question things like validity.

If you're doing personas that are specific to a set of functionality
you're developing (which means the individuals you researched were all
likely users of that functionality and the scenarios you've developed
-- you are developing scenarios, right? -- are all tied to the variant
usage of the functions), then validating is actually quite easy:

You start by selecting a new panel of users who are likely users of
the functionality.

Then you take the attributes you used in clustering your original
personas and put them on a scale.

Using an interview (I wouldn't use a survey because it's much, much
harder to get right), you talk to each member of the panel, putting
them on the scale.

When you're done, you should see a very similar clustering pattern to
what you saw in the first set of personas. If you don't, then you now
have a bigger set of data to re-cluster and reconfigure the persona
descriptions.
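
To make the comparison step concrete, here is a rough sketch (my own
illustration, with invented attribute names and numbers) of clustering the
new panel's ratings on the same attribute scales and comparing the result
to the original clusters:

    import numpy as np
    from sklearn.cluster import KMeans

    # Each row is one interviewee scored on the same attribute scales (e.g. 1-7).
    original_ratings = np.array([
        [6, 5, 6], [7, 6, 5], [2, 3, 1], [1, 2, 2], [4, 6, 3],
    ])
    new_panel_ratings = np.array([
        [6, 6, 5], [2, 2, 1], [5, 5, 4], [1, 3, 2], [7, 5, 6],
    ])

    k = 2  # however many personas the original clustering produced
    original = KMeans(n_clusters=k, n_init=10, random_state=0).fit(original_ratings)
    panel = KMeans(n_clusters=k, n_init=10, random_state=0).fit(new_panel_ratings)

    print("Original cluster centers:\n", original.cluster_centers_.round(1))
    print("New panel cluster centers:\n", panel.cluster_centers_.round(1))
    # Similar centers suggest the personas hold up; large divergence means you now
    # have a bigger data set to re-cluster and reconfigure the descriptions with.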

Hope that helps,

Jared

Jared M. Spool
User Interface Engineering
510 Turnpike St., Suite 102, North Andover, MA 01845
e: jspool at uie.com p: +1 978 327 5561
http://uie.com Blog: http://uie.com/brainsparks Twitter: @jmspool

25 Nov 2009 - 4:53am
Jared M. Spool
2003

I'm going to add that Steve Mulder's book, The User Is Always Right,
has some great discussion of using quantitative data to help flesh
out your persona descriptions.

http://www.amazon.com/exec/obidos/ASIN/0321434536/?tag=userinterface-20


25 Nov 2009 - 9:16am
Paul Sherman
2006

Adding to Jared's point about validating your personas along functional grouping lines...all of which I agree with:

IMO Jared black-boxed this step and left a bit too much implicit: "Then you take the attributes you used in clustering your original personas and put them on a scale."

There are many ways to create a scale poorly, regardless of whether you're interviewing or surveying.

I'm trying to keep this short, so I'll boil down the best practice advice (and I'm sure some of the other recovering social scientists will weigh in here as well):

1. Measure the attribute in more than one way. Ask several questions for each concept.

Ex: If you think your personas vary along a continuum of, say, interest in consuming entertainment content using multiple devices, you should ask several questions to elicit where people fall on this continuum. This helps reduce social desirability bias. ("Of course I use all the fancy features of my DVR! I'm no dummy, I'm on the cutting edge!")

2. Have more than one person make ratings.

The thing about quasi-quantitative measures like attitude, attribute or behavioral ratings is that there's lots of noise in the data. BTW I say "quasi-quantitative" because it's not true quant data; it's qualitative data that's transformed into quant-like data. (Basically, it's physics envy.)

Using multiple raters helps reduce rating error. If you have high agreement across raters, you can be more confident that the ratings are accurate.

Ex: When you're interviewing people and assigning your ratings, have another person on your team complete the same ratings. (A third rater would be grand.) Then average the ratings. If you're using binary, presence-absence ratings (there's that physics envy again), check to see if your raters agree. If you're not seeing agreement across raters, this indicates a problem with the item, or a problem with the underlying concept you're trying to measure.
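
A rough sketch of both checks, with invented ratings (Cohen's kappa is one
common agreement statistic for the binary case; the scale ratings are simply
averaged and correlated):

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    # Two raters scoring the same six interviewees on a 1-7 attribute scale.
    rater_a = np.array([6, 5, 2, 7, 3, 4])
    rater_b = np.array([5, 5, 3, 6, 3, 5])
    print("Averaged ratings:", (rater_a + rater_b) / 2)
    print("Rater correlation:", np.corrcoef(rater_a, rater_b)[0, 1].round(2))

    # Binary presence/absence ratings (e.g. "uses advanced DVR features": 1 = yes).
    binary_a = [1, 0, 1, 1, 0, 1]
    binary_b = [1, 0, 0, 1, 0, 1]
    print("Cohen's kappa:", round(cohen_kappa_score(binary_a, binary_b), 2))
    # Low kappa points to a problem with the item or with the concept being measured.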

HTH.
-Paul
- - - - - - -
Paul Sherman, Principal, ShermanUX
User Experience Research | Design | Strategy
paul at ShermanUX.com
www.ShermanUX.com
+1.512.917.1942
- - - - - - -


25 Nov 2009 - 10:01am
Jared M. Spool
2003

On Nov 25, 2009, at 3:16 PM, Paul Sherman wrote:

> IMO Jared black-boxed this step and left a bit too much implicit:
> "Then you take the attributes you used in clustering your original
> personas and put them on a scale."
>
> There are many ways to create a scale poorly, regardless of whether
> you're interviewing or surveying.

Actually, it was the next step where I recommended interviewing.

And I was glossing over this, in that I recommend that it be less of a
straight Q&A interview and more of a discussion, where you then
extract the answers. This takes a skilled interviewer and a good
transcript analyzer, but those are learned, practicable skills.

If you use a discussion method for your interviews, you won't run into
the bias issues that Paul talked about as often.

Jared

25 Nov 2009 - 4:12pm
Elizabeth Kell
2008

I'm working with Angela - the original poster - on this project.
Please note, I am not the lead for the project that Angela referred
to. With that said:

Jared hit the nail on the head when he posted the link to Steve's book.

We are specifically looking for advice from folks who have experience
with the approach described in Steve's book, his "qualitative-quantitative
validation." We've completed the qualitative portion.

Perhaps validate isn't the right word, but the quantitative approach he
describes in the book is the area where we are interested in hearing
from others in the community. Does anyone have lessons learned they are
willing to share from their own attempts at Mulder-style quant work,
in particular in crafting and deploying surveys? :)

Thanks,
Liz


26 Nov 2009 - 9:26am
zakiwarfel
2004

On Nov 25, 2009, at 1:12 PM, Elizabeth Kell wrote:

> Does anyone have lessons learned they are willing to share from their own attempts at Mulder-style quant work, in particular in crafting and deploying surveys? :)

Good surveys are pretty difficult to design. In a case like this, you'll want to focus the questions on behaviors, not demographic information. You can use something like Survs.com to create a survey with logic that splits people into different paths based on some initial qualifying questions.

Without seeing the actual personas, it's a bit difficult to give you specific information and guidance. But in general, if you're going to take a quant approach to validation, then focus your questions on the behaviors you think apply to each given persona.
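
As a purely illustrative sketch (the qualifier, paths and questions are all
made up, and a tool like Survs.com would handle the branching for you), the
routing logic amounts to something like this:

    QUALIFIER = "How often did you watch video online in the last month?"

    # Behavior-focused follow-ups keyed by the qualifying answer.
    FOLLOW_UPS = {
        "never": [
            "What do you usually do instead when you want to watch something?",
        ],
        "a few times": [
            "Think of the last time you watched video online: what did you watch, and on what device?",
        ],
        "most days": [
            "Which sites or apps did you use in the last week?",
            "When did you last share or recommend a video to someone?",
        ],
    }

    def survey_path(qualifier_answer: str) -> list[str]:
        """Return the question path for a respondent's qualifying answer."""
        return [QUALIFIER] + FOLLOW_UPS.get(qualifier_answer, [])

    print(survey_path("a few times"))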

Cheers!

Todd Zaki Warfel
Principal Designer, Messagefirst
Author of Prototyping: a practitioner's guide http://bit.ly/protobk
----------------------------------
Contact Info
Voice: (215) 825-7423
Email: todd at zakiwarfel.com
Blog: zakiwarfel.com
Twitter: @zakiwarfel
----------------------------------
In theory, theory and practice are the same.
In practice, they are not.

26 Nov 2009 - 5:04pm
Jared M. Spool
2003

On Nov 25, 2009, at 1:12 PM, Elizabeth Kell wrote:

> Does anyone have lessons learned they are
> willing to share from their own attempts at Mulder-style quant work,
> in particular in crafting and deploying surveys? :)

I would *highly recommend* you not try to do this with surveys. It's
very easy to create a survey where the participant actually answers a
different question than the one you think you're asking. When that
happens, you can't trust any of the data you've collected, because
you're not doing an apples-to-apples comparison.

Instead, I recommend discussion-based interviews. This is time
consuming, but produces far more reliable data.

Jared

Jared M. Spool
User Interface Engineering
510 Turnpike St., Suite 102, North Andover, MA 01845
e: jspool at uie.com p: +1 978 327 5561
http://uie.com Blog: http://uie.com/brainsparks Twitter: @jmspool

26 Nov 2009 - 5:15pm
Sharon Greenfield5
2008

This can be mitigated, however, by asking three questions for each
data point - basically the same question three times, phrased in
different ways. Then you can cross-check the validity of the responses
from there.

Sure, not as great as field work but...will get the job done safely.
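
One way to do that cross-check, sketched with invented numbers (Cronbach's
alpha is a standard internal-consistency measure for this sort of thing):

    import numpy as np

    # Rows = respondents; columns = three phrasings of the same question (1-5 scale).
    responses = np.array([
        [4, 5, 4],
        [2, 2, 3],
        [5, 5, 5],
        [3, 2, 2],
        [4, 4, 5],
    ])

    def cronbach_alpha(items: np.ndarray) -> float:
        """Internal-consistency estimate across the item columns."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    print(f"alpha = {cronbach_alpha(responses):.2f}")  # low values flag inconsistent answers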


27 Nov 2009 - 2:57am
Jared M. Spool
2003

On Nov 26, 2009, at 11:15 PM, live wrote:

> This can be mitigated, however, by asking three questions for each
> data point - basically the same question three times, phrased in
> different ways. Then you can cross-check the validity of the responses
> from there.

Three stupid questions produce three stupid data points. You don't
know if any of the questions were understood by the person answering
them. And you can't tell if you're asking questions that people
understand without a lot of up-front validation of the survey itself.

So, no, that technique won't work.

Jared

27 Nov 2009 - 3:25am
Sharon Greenfield5
2008

Regardless of how many times you use the word 'stupid', Jared, this is
still standard procedure for all university social science programs. :)


27 Nov 2009 - 7:54am
zakiwarfel
2004

That sounds really, really annoying. As someone who's taken probably a few hundred surveys, I can't stand it when they do that, and I often bail. It's a real waste of my time.

On Nov 26, 2009, at 5:15 PM, live wrote:

> This can be mitigated, however, by asking three questions for each data point - basically the same question three times, phrased in different ways.

Cheers!

Todd Zaki Warfel
Principal Designer, Messagefirst
Author of Prototyping: a practitioner's guide http://bit.ly/protobk
----------------------------------
Contact Info
Voice: (215) 825-7423
Email: todd at zakiwarfel.com
Blog: zakiwarfel.com
Twitter: @zakiwarfel
----------------------------------
In theory, theory and practice are the same.
In practice, they are not.

27 Nov 2009 - 7:55am
zakiwarfel
2004

On Nov 27, 2009, at 3:25 AM, live wrote:

> Regardless of how many times you use the word 'stupid' Jared, this is still standard procedure for all university social science programs. :)

Oh, well then that must mean that it works ;).

Cheers!

Todd Zaki Warfel
Principal Designer, Messagefirst
Author of Prototyping: a practitioner's guide http://bit.ly/protobk
----------------------------------
Contact Info
Voice: (215) 825-7423
Email: todd at zakiwarfel.com
Blog: zakiwarfel.com
Twitter: @zakiwarfel
----------------------------------
In theory, theory and practice are the same.
In practice, they are not.
