Changing role of user research in the UX design process?

16 Feb 2013 - 4:06am
16 replies
12255 reads
Mark Vanderbeeken
2006

During the Interaction 13 conference in Toronto, Paul Adams of Facebook advocated a radically different development cycle from the "ideal" user-centered one (qualitative user research first, then modelling and concept development, followed by prototyping, testing, and iteration). Instead he proposed a much faster cycle, which discards the initial user research entirely: quick hypothesis development first (based on data mining, heuristic knowledge, and culling from existing behavioral research), followed by immediate prototyping, development, and implementation, and then a series of fast iterations through A/B testing.
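(To make the A/B-testing step concrete, here is a minimal, hypothetical Python sketch of the kind of check such a cycle turns on: compare a conversion metric between the control and the new variant and decide whether the change "won". The function name and all the numbers are our own illustrative assumptions, not anything Paul presented.)

# Hypothetical sketch of the A/B evaluation step in the fast cycle described
# above; names and numbers are illustrative, not any company's actual tooling.
from math import sqrt
from statistics import NormalDist

def ab_test(conv_a, users_a, conv_b, users_b):
    """Two-proportion z-test: is variant B's conversion rate different from A's?"""
    p_a, p_b = conv_a / users_a, conv_b / users_b
    p_pool = (conv_a + conv_b) / (users_a + users_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_b - p_a, p_value

# Example run: variant B lifts conversion from 4.8% to 5.45% over 10k users each.
lift, p = ab_test(conv_a=480, users_a=10_000, conv_b=545, users_b=10_000)
print(f"lift {lift:+.2%}, p={p:.3f} ->",
      "ship variant B" if p < 0.05 and lift > 0 else "keep control")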

We heard similar comments from other companies and consultants, particularly in Silicon Valley, where people tell us that conventional user research is too slow and too expensive now that they have all this data to unleash algorithms on.

We want to open this observation to the community and discuss with you whether you have heard or experienced similar assessments of the role of UX research, and in which fields in particular. Is a new trend emerging (or an old one resurging)? Are some of us changing our UX research offerings in response? Or is it a false problem, based on unrealistic expectations of what data mining and algorithms can achieve? In any case, how do you react when your clients (or management) assert this new way of working?

Mark Vanderbeeken

Comments

16 Feb 2013 - 8:20am
fj
2010

They'll get statistics this way, but no insight behind them.

Which explains why Facebook gets the small flow things right, but is unable to make a big change without looking like they have no idea what their users actually feel.

16 Feb 2013 - 9:30am
Dave Malouf
2005

Yes, Silicon Valley, and in general the startup culture arising from Silicon Valley in the last 5 yrs, is devaluing the long term in favor of the short term. The short-term results they are getting are crucial to short-term successes, but a deeper view is also necessary for organizations.

But why does this need to be an either/or? In the physical world (or hybrid world) there are always two stages of production, which many organizations call horizon 1 and horizon 2, where 1 is the here and now and 2 is the maybe. Why can't these go on in parallel in mid- to large-size organizations? FB has no excuse, given their resources. Of course, startups don't have that kind of scale to work with, and thus deciding on the short term over the long term makes sense, except for one thing: they don't have data. Nothing worthwhile, until they have a customer base in at least the hundreds of thousands. And this is the problem with deriving hypotheses from data. Way too often it is a false lens. And it is definitely near-sighted in its gaze and lacks predictive power.

But yes, the trend is there, and as many proponents of Agile and Lean methods in these locales and contexts would say, "It's here, so we are just trying to deal with it." I offer an alternative perspective. You have a choice of where you work, and YOU have influence and power as well, especially in this marketplace. If you really believe that you can gain good insights and long-term thinking/strategy from the methods described above, kudos. Go for it. I for one don't want to work in an environment that is myopically fixed on the here and now and devalues the infinite treasure that is contextually based human contact and observation.

I worry about the software industry taking this severe pendulum swing away from empathy (I don't care what you tell me: you can't have empathy derived from faceless data) and back towards an engineering-driven, scientific, analytical approach to making software design decisions.

16 Feb 2013 - 11:36am
Jack L. Moffett
2005

There are different tools for different tasks. As Dave points out, data-driven design requires that you have a lot of people using your software. I'm not designing websites for use by millions or even thousands of people. I'm designing specialized applications installed on secure networks for domains that, prior to starting the projects, I've had no knowledge of. There's a big difference between tweaking an existing product to optimize for a particular business goal and building new tools for specific job roles. A/B testing won't help you invent something, and using a quick hypothesis as the basis for a new tool in a field you don't yet understand is a great way to fail. I'm not talking about the "fail fast to succeed sooner" kind of fail, but the "I looked like an idiot and my customer fired me" kind of fail.

16 Feb 2013 - 11:48am
ambroselittle
2008

Designing software without understanding who your users are (or will be) is like stumbling around in the dark. You may get through it just fine without incident. More likely you'll stub your toes and bang up your knees, and there's even the chance you'll fall off the cliff.

16 Feb 2013 - 12:58pm
Deidre Kolarick
2013

We’ve taken a strong interest in the growing profile of the lean research methods described above. For companies like Facebook with heaps of user data, that type of cycle might be feasible (though I agree with fj’s comment about Facebook getting small things right but often missing the bigger picture of the user experience). Most of our clients don’t have access to that type of data – or if they do, it’s unwieldy, spread across many divisions, and thus impractical to use for any kind of upfront product development. It’s often a struggle for us even to get our hands on that data to inform our own research efforts. So we haven’t seen any major shift towards eliminating initial user research in favor of data mining.

The bigger challenge we’re working to address is how to present a range of approaches to research that allows clients to get some user insight, no matter their budget or timeline. Clients that believe in the value of research may not be able to afford as much of it as they might want or may not have the time to spare in the product development cycle. Consequently we now offer a wider range of user research methods, such as lean user testing (less formal, quicker turnaround time, scaled down deliverables) and quantitative online research (fast timeline, diverse sampling).

While data mining might be a nice complement to user research, it’s no substitute - especially because so much of that black box-type data is a challenge to interpret. What we're trying to communicate more now is that research doesn’t have to be complicated, time-consuming, or expensive to be effective.

16 Feb 2013 - 4:28pm
gareth.morgan
2012

Evolutionary approaches (which is effectively what lean and agile approaches are in the absence of Design) do allow fast iterations in products and services that can tolerate rapid release cycles. But evolution also produces a lot of dead ends. And each dead end has a cost. The whole idea of Designing something is to avoid the evolutionary dead ends as cost effectively as possible. And good luck applying a lean iterative approach with, for example, revisions to firmware on a product for secure or government applications - certifying and deploying each update can take six months or more.

18 Feb 2013 - 5:03am
Adrian Howard
2005

 

Quoting gareth.morgan: "Evolutionary approaches (which is effectively what lean and agile approaches are in the absence of Design) do allow fast iterations in products and services that can tolerate rapid release cycles. But evolution also produces a lot of dead ends. And each dead end has a cost. The whole idea of Designing something is to avoid the evolutionary dead ends as cost effectively as possible."

On the flip side of that: if you involve design with the evolutionary development approaches, you turn a risky only-have-one-chance-to-get-the-design-right process into a less risky lots-of-opportunities-to-fix-things-if-we-discover-a-mistake process. Design and evolutionary approaches are not opposite ends of a continuum. You can do both.

And quoting further: "And good luck applying a lean iterative approach with, for example, revisions to firmware on a product for secure or government applications - certifying and deploying each update can take six months or more."

Yes. You would certainly never see agile approaches used for firmware design by Intel (http://agile2004.agilealliance.org/files/XR2-3.pdf) or HP (http://www.informit.com/articles/article.aspx?p=1994803) for example ;-)
Cheers,
Adrian

 

22 Feb 2013 - 12:32pm
mustefa
2011

Always good to see true agile design and development evangelists. 

17 Feb 2013 - 12:56am
Phillip Hunter
2006

No UX research or design methodology has ever been one-size-fits-all, especially methods from the rarefied atmospheres of large or specialized companies like Facebook. I didn't hear Paul's talk, but the way you're describing it sounds a little like the typical mistake of "It works here, so it should work elsewhere, too."

This logical error overlooks so many other factors that might be the reasons for any apparent success, so, as Dave, Jack, and Ambrose all imply, a research and design team examining the adoption of this or any other method needs to turn a critical eye to what will be in play within their own organization. 

To address what's behind this question, though: yes, UR as a whole is evolving and diversifying to satisfy the rapid growth in the use of the information and insights it provides. Multiple teams here inside Microsoft have been exploring new ways of multitracking UR along the lines of what Dave talks about. In fact, one of our researchers has developed a very interesting and so-far robust method for marrying long- and short-term research with agile development, involving nearly the entire product team along the way. She'll be presenting a paper about it at CHI in Paris. It looks very promising and impressive, although it may very well be something that works best in fast-moving environments with a large team working on a mature product with a specific release cadence. Among other factors. :)

17 Feb 2013 - 9:09am
Sean Pook
2008

Folks are often vocal about the lack of user research uptake or utilization, but not so many are talking about, let alone advocating, what is actually happening (less and less research), so it was interesting to see this being so openly discussed at the conference.

Aside from the fact that there are so many terms used for UX work, people are uncomfortable admitting that the user has less of a place in UCD than the designer or the data. Most organizations adopt a DCD approach (insert data or designer as appropriate), and fewer than one in fifteen of the roles I recruit for are research-focussed. This is either an accurate reflection of the ideal ratio of researchers to designers in the marketplace, or it's a sign that firms aren't investing in research.

As many have mentioned before, the reasons given for research not being an integral part of the cycle are that it's too expensive, the client doesn't want it, or we don't think we need it. Different opinions are rife and there's little industry consensus, which is worrying for an industry that isn't that immature anymore, when you think about it.

As a last comment... a prominent and well-respected UX designer once said that UX design decisions are based on gut feelings, not data or research.

17 Feb 2013 - 4:58pm
Josh Seiden
2003

I did hear Paul's talk, and was interested to see that the cycle he described has found a home at Facebook. It's very much how I see teams in his context working these days, and I think it's one (of many) good ways to work. Some observations:

1. It's never been easier to build and ship things. It's never been easier to roll them back. For the "cost" of a prototype, you can roll a feature into production for a small percentage of users and observe the results (a minimal sketch of what that bucketing can look like follows after point 4).

2. People are responding as if this is a quant vs. qual conversation, or as if this is some kind of threat to empathy. That's a misreading, IMO. With a feature built and deployed, teams collect data in many forms, both quant and qual. When you have running features, you actually have data, and you'd be stupid to ignore the numbers, but no one says you can't do small-sample qualitative research on the feature. Many companies I know are doing just this: they set up standing research days each week and bring users into the office to show them new features and talk to them about whatever.

3. Traditional "big research up front" doesn't create broad-based empathy on a product team. Instead it creates a locally optimized knowledge base--the designers and researchers who do the research end up being a bottleneck for knowledge about the user. It also doesn't work well in an agile rhythm. Continuous, lightweight, cross-functional collaborative research methods address this need.

4. There is nothing about this process that is contradictory with long-term research and understanding. In fact, continuous small-scale research probably results in more overall research being done, and probably builds a deeper, richer empathy than the way we used to work. It's great to be able to do a month of immersive research, but when that phase is done, you have to turn to something else: how do you keep that knowledge alive and up-to-date? You need to find a way to make research sustainable, and not treat it like a special event.
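(On point 1 above, a minimal, hypothetical sketch of what that kind of partial rollout often looks like: hash the user into a stable bucket so the same person always sees the same thing, and "rolling back" is just dialing the percentage to zero. The names and the 5% figure are illustrative assumptions, not any particular company's system.)

# Hypothetical sketch of rolling a feature out to a small percentage of users.
# Deterministic bucketing: a given user always gets the same answer for a
# given feature. Names and the 5% figure are illustrative assumptions.
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF * 100  # stable value in [0, 100]
    return bucket < percent

# Expose "new_composer" to roughly 5% of users; observe both the numbers and
# small-sample qualitative feedback before widening (or zeroing) the rollout.
for uid in ("u1001", "u1002", "u1003"):
    print(uid, in_rollout(uid, "new_composer", percent=5.0))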

JS

18 Feb 2013 - 10:19am
Dave Malouf
2005

First, I want to say: what you have said above is awesome. I do think that in many contexts what you describe is really great. I don't think it works in every context, e.g. if I were designing for iOS (or other platforms), that assumption of "roll-back ease" ain't so true any more. But in general I would caution against using "roll-back" at all, as there is a brand hit every time you do that. But there are a gazillion contexts where what you propose is great.

I do think you bring a lot of yourself to the words you are using above, though (as we all do). So in many ways, unless you know the person, it is hard to understand the words. When I read the beginning of this thread, it is actually clear that this is a qual/quant issue: what Mark was describing was clearly devoid of direct observational techniques. Now if that is not the case, great. But there is something I would challenge in your piece. I think that many short bursts of observation can never make up for sustained ethnography, let alone (as you propose) be even better. By its definition ethnography is contextual, and as any anthro/soc person will tell you, understanding context takes depth, because what in essence we are trying to do is learn language-scapes. No one can learn anyone else's language in short bursts. It is not possible. I think there is a huge danger in the shallowness of the approach you are suggesting, in that it makes us feel like we know a lot more than we really do.

On the flip side, what I do like about your approach is the team-building part of it. Getting the team to be a part of the process is definitely an imperative, or more correctly, finding ways to bring the empathy beyond the design researcher to the entire team is the imperative. There are possibly other ways to do this outside of direct contact, namely through narrative modelling of observed, analyzed data. By placing personas as characters in complex narratives (beyond the simple scenarios), you can go a long way towards bringing team engagement and empathy to the fore.

My question in all this talk is: what problem are we trying to solve? Is depth too slow? If it is too slow, why? What is the race? Alan Cooper and Bill Buxton have again and again derided the "first to market" speed race that so many of us are up against, arguing that being best is better than being first.

But some clarifications:

a. I'm all for balanced-team approaches that are highly collaborative, so long as the goal of collaboration is to make designs better, not just to get designs implemented faster.
b. Ethnography is treated as a key tool for design research, with more than just cursory (second-fiddle), tokenized lip service.
c. Speed is a consequence of better processes, but not a driving factor outside of the REAL constraints of a project (i.e. the project has a real event tied to it; not a financial reporting call).
d. Data is never ignored, nor is it used as the be-all and end-all of design decisions. At a certain point it stops being design, because all serendipity is lost in the process.

I'll stop there. (GREAT discussion, Mark. Thank you for adding this post-conference conversation here.)

-- dave

22 Feb 2013 - 10:44am
padday
2013

Hey folks, seeing as I started this, I'd better contribute :) First a story, then some clarifications about what I said (or meant to say), then some clarifications about the role of research at Facebook, and then some of my own thoughts (not necessarily those of Facebook or the researchers who work there).

Some people may disagree with large parts of this, others with the details. I'm simply sharing what I've learned thus far, in the hope that others will try new things and further our profession. I'm not fully right, but I know some things don't work, because I made those mistakes myself.

A story that changed how I view user research

I worked as a UX researcher for about 6 years (4 at Google). When I joined Facebook I started by doing some qualitative research (1:1 interviews), and something transformational happened to me. After doing the research, I ended up transitioning to be the Product Manager for the exact area that I had done research for, and was suddenly in the position of acting on the recommendations I had made. The surprise to me was that I decided not to act on them, and to do something different. The change in job function had completely changed how I viewed prioritisation of work with limited resources. More than that, it changed how I viewed research in the product development process. After years of advocating for lots of research early and often in product development, I suddenly started to see the cons of doing research, and started to realise that I had not been thinking holistically enough about the problem I was working on. I feel this is broadly true of our profession: very ironically, we do not have enough empathy for the people we work with and for how they must prioritise the world.

What I tried to say at Interaction 13

- I was speaking specifically about Social Design, or executing research for products/services that are built around identity and social interaction. As the web evolves towards people rather than content as the central organising principle, 'social design' really covers almost all consumer-facing products/services.

- There were two main points:

1. There is a huge body of existing research out there, and people shouldn't be jumping in to do primary research without a systematic study of what others have covered. In my experience, it is highly likely that you just need to do a great literature review to come away with more actionable conclusions than any one independent new study could hope to achieve. The fields within social science publish more than any one individual could keep up with. Synthesise it, and build a hypothesis. You don't need to do new primary research unless you have a niche product or a specific use case that is beyond common social interaction.

2. In my experience, no matter how hard you try, how deep you go or how rigorous you are, it is impossible to accurately predict how people will use social systems. Emergent behaviour will occur. So it is critical to get something launched to the public as fast as possible to start learning and, critically, start adapting. You're not dealing with human-computer interaction; you're dealing with human-human interaction, and that is incredibly complex and nuanced. No isolated qualitative research project could draw meaningful conclusions about social behaviour. Usability maybe, but not social behaviour. To get any real insight into users' perceptions and behaviour, you need real data in any prototype you build: real friends, real content, real relationships and interactions. Faking it or glossing over this is not a good use of time or resources.

- I'm not saying primary research doesn't have a role; it does. But usually it is not the first thing that needs to be done for products/services that are built around identity and social interaction.

- Times have also changed. With access to lots of APIs, the ability to build on other companies' platforms and leverage their technology stacks, and much faster coding environments, you can often build a working prototype faster than it would take to detail out all the flows in wireframes. It is often faster to build on a hypothesis, ship publicly, and adapt.

Research at Facebook

- Facebook has a large research team, in fact many different research teams, and takes both qualitative and quantitative research very seriously. There is a UX research team which uses many of the same methods as others in the industry.

- There isn't 'one way' to do research at Facebook, nor should there be 'one way' to do research anywhere.
Some of my opinions

- This isn't a popular thing to say, but most UX research I have seen (and indeed conducted myself) is a process of risk mitigation: looking for problems and making recommendations to remove them. For most projects, this is a slightly outdated model of software development, because it is now so easy to build, ship and learn. People are too afraid to have something that few people use when they launch it. But very rarely does any product/service launch and see mass adoption very quickly. There is a fallacy around how successful products are built.

- If I were to run a research team (I don't), I would set up two complementary tracks of work. One goes longer and deeper, systematically trying to understand behaviour over time and informing medium- to long-term strategy. The other would be very tactical and incredibly fast, partnering deeply with analytics teams to answer the 'why' behind all the 'what' and 'how' they are seeing.

1. Formative research that runs as a separate track sitting alongside product development. Certainly they are aligned but the research track is designed to answer ongoing questions about how technology is affecting communication patterns. It's not designed to validate product directions or answer questions around whether people want or need a new product/service/feature. You then need the people running these research tracks to be excellent communicators in order to ensure that the observations are being translated into the product direction and execution.

2. Usability studies that test comprehension. When designing critical aspects of a social service, for example privacy controls, detailed usability studies are incredibly valuable to test people's comprehension of what is going on. But for many parts of a product, they are not on the critical path and are not a launch blocker.

So I should also stop there...I could go on for days... :)

22 Feb 2013 - 11:27am
Christine Boese
2006

Posting here, cuz email posting is disabled (when did that happen?):

Can I just say, this is an AMAZING thread? Thank you all so much for your thoughtful contributions. For the first time in a long time, I am reading each post, avidly. 

They are all wonderful, because these are all things we need to be thinking about.
Bless you!
Chris

 

22 Feb 2013 - 12:30pm
mustefa
2011

I'll keep my 2 cents simple:

Cent #1: I think some forms of early research are just a reflection of the need to satisfy/greenlight projects. Ryan Singer from 37signals recently gave an interview speaking to this: http://insideintercom.io/an-interview-with-ryan-singer/

How much of the initial research is just validation that we're doing our job?

Cent #2: I take the approach of looking at the data we currently have relating to our problem/feature ("based on data mining, heuristic knowledge and culling from existing behavioral research", as quoted above) and drawing conclusions from that research to drive new designs. We create a design, quickly test it with people around us (paper prototyping, stakeholder input, customer surveys), and then push it out to development. While it's in development things can change; more feedback might trickle in and impact our designs.

Ship it out, and iterate.

Conclusion: will this type of research and iteration fly everywhere? It depends on your environment, clients and process.

22 Feb 2013 - 12:54pm
ndhanthro
2013

This *is* a great thread! And such an important topic; glad to see so many people chiming in. Mark posted a similar query to a list that I manage called anthrodesign (about the use of ethnographic methods by multidisciplinary teams). Rather than paraphrasing, here is my reply again:

I have been meaning to respond to your post since it landed in my inbox. At the risk of sounding overly dramatic, I think what you're touching on may be one of the most important trends of our time. Facebook (and so many other companies today) are using a blend of Lean and Agile principles and practices to inform how they work. Those have significant implications for UX in general, but especially for user research, and maybe most of all for ethnographic methods.

At the risk of oversimplifying: Agile comes from a place where engineers felt that they couldn't deliver what the market needed using the manufacturing-type ('waterfall') build cycle of the past. Agile was a way for engineers to take charge, get more engaged in understanding the requirements and what was expected of them, and ensure that they had the information and control they needed to deliver the right software to the market. I could go on and on about this, but my blog posts should be live shortly, so I'll leave it at that for now! The point is, in the late 1990s and early 2000s, very few organizations had the clarity we have now about the composition of balanced, multi-disciplinary project teams. Developers had to do their own research, and that had to fit into the short, iterative cycles of how Agile operates.

As UX professionals have become integral members of those teams, two schools of thought have really emerged:

(1) Figure out how to deliver value in a pure Agile model. There are very specific implications for both research and design, which I'll try to elaborate on in my upcoming blog posts.

(2) Ensure that the organization, the hiring manager, and the team understand enough about UX that they'll let you work 1-2 releases or sprints ahead. Just remember that this is considered Big Up Front Design (BUFD), which is not well accepted by those who want to take a pure Agile approach. Larger companies or 'enterprises' often recognize that this kind of larger orchestration is required (for product management, system architecture, etc.), so they may have more of an appetite for this approach.

Each organization you encounter will be at a different level of maturity with Lean/Agile, with UX, and with bringing the two together. The challenge is to find the right blend of #1 and #2 based on what they feel works for them. And hopefully, as you build more trust through quick wins, you'll get the sponsorship and support to do more of the up-front research (which is really the most strategic and effective place for ethnographic methods, I think).
