The Improved New User Experience That Wasn’t

Here’s the humbling thing that you’ve got to keep in mind as a user researcher and user experience designer.

Qualitative data is helpful, your years of experience are helpful, and your understanding of design principles is helpful, but they will always be trumped by quantitative data on how people are actually behaving.

I have seen many cases where qualitative feedback from users, or our strong intuition as excellent product managers and designers, has led us down the wrong path.

Here’s a recent example from Yammer’s signup process: we recently tested a simplified two-step version against the original four-step process.  I was eager to see this change launched, because I hated our signup process.

And by every usability heuristic measure, the two-step version was superior. It was simpler, more approachable, and got people into the app faster.  The design was more polished, and we’d had an actual copywriter craft the words.

It performed extremely well in user testing and customers gave us highly positive feedback.

But in actual real-world usage, it was a bust.

When we A/B tested it, we found that we achieved only a small increase in successful signup completions — and that was overwhelmed by a larger decrease in retention.

People who went through the shorter signup were *less likely* to actually return and use our service.

And this is where you need not only data but organizational support for data- and behavior-driven development.  Without it, some exec would’ve swooped in and demanded that we release the change anyway.

They probably would’ve suggested that “your data must be flawed” or “maybe these were just unusually dumb users” or “well, this contradicts my (unarticulated) vision, so forget the test results”.  (I’ve heard all of these before.)  Luckily, we have the discipline to listen to what our customers are telling us through their actions: the experiment was rolled back, to be iterated upon another day.
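To make the kind of tradeoff we saw concrete, here’s a minimal sketch of how you might check both metrics in an A/B test with a two-proportion z-test. The numbers are made up for illustration only — they are not Yammer’s actual data:

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for whether variant B's rate differs from variant A's."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers, for illustration only.
# Variant B (two-step) completes signup slightly more often...
z_signup = two_proportion_z(4000, 10000, 4200, 10000)

# ...but B's signed-up users come back noticeably less often.
z_retention = two_proportion_z(2000, 4000, 1850, 4200)

print(z_signup, z_retention)
```

In this hypothetical, the signup lift is modest while the retention drop is much larger in magnitude — which is exactly why you test both metrics rather than shipping on the signup number alone.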

10 thoughts on “The Improved New User Experience That Wasn’t”

  1. This is true. But what happens far more often is that the qualitative research is utterly neglected throughout the process. And then there are the myriad cases where it turns out you’re asking the wrong question. I once tested what by all signs and signals was a brilliant improvement to the interface for a purchasing app. Until one of the testers said to me: “Of course, it doesn’t make any difference. This isn’t how you really do purchasing” and proceeded to outline a highly personalized method for vendor choice and the fact that as far as purchasers were concerned this kind of app was something they were required to deal with later. Sunk. We were designing a superior interface for something people didn’t want to use at all. Were you?

  2. Pretty much all of the elements of the redesign came from direct customer feedback as well as watching people use our prior signup flow, which is why this was particularly amazing.

    The thing is, people are very bad at weighing how much individual factors influence their decisions.  A site can have hideous design but be so useful that you end up using it anyway.  Or a well-designed app can have some subtle element that undermines your trust, and so you abandon it even though on paper it met all your needs.

    The elements of the old signup flow that people found frustrating are still frustrating.  But, as it turns out, those elements weren’t frustrating *enough* to prevent usage, and in fact possibly even made people feel more invested in our app. (someone tweeted me this link today in response –

  3. Excellent, humbling article. Interesting that getting users “invested” in the product with a longer sign-up process leads to loyal users.  (BTW I looked at your sign-up form, and while it does require a commitment — I liked how it showed where I was going next.)

    I think this article highlights the importance of measuring user retention, not just the sign-up rate (which technically was increased by the shorter form).  Many startups these days push to go “viral,” and do not consider long-term trust.

    I agree with Katie that qualitative research leads to discovery about what is or is not a problem for users.  I think metrics are good for discovering problem areas too, while usability testing can provide insight into *why* they are a problem.  Every design is a hypothesis.

    It’s true that users often do not make the best design suggestions, and do not always recognize what actually influences their decisions.

    There is a great article I read last year about a gaming company — the users were frustrated by trolls intermittently appearing in the safe ground (not sure the exact terms here..) and wanted them gone.  Instead of eliminating the trolls they doubled down on the trolls — and made it a “warzone.”  Need to find it!!

    But user studies reveal perception (confusion, trust, frustration) and how people go about the job (which can lead to rethinking features).  So I agree that both qualitative studies and A/B testing are important!

  4. “we achieved only a small increase in successful signup completions — and that was overwhelmed by a larger decrease in retention” … Please help me understand this a little better … perhaps with data about how small is the small increase and how large is the larger decrease? Because my first reaction was that that is intuitive. If it is easier/faster to sign up then I would expect more people to sign up since it would also include half-motivated (“just curious”) people who are more likely to not return later on. Obviously that would bring down retention rate due to larger denominator. No?
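    The commenter’s “larger denominator” point can be checked with a quick back-of-the-envelope calculation. These numbers are hypothetical — the post doesn’t disclose the real figures — but they show the scenario is arithmetically possible:

    ```python
    # Hypothetical cohort illustrating the "larger denominator" effect:
    # suppose the easier signup adds casual, "just curious" users who
    # rarely return, while the core audience is unchanged.
    old_signups, old_returned = 1000, 400    # 40% retention
    extra_casual, casual_returned = 200, 20  # 10% retention among the extras

    new_signups = old_signups + extra_casual
    new_returned = old_returned + casual_returned

    old_rate = old_returned / old_signups    # 400 / 1000 = 0.40
    new_rate = new_returned / new_signups    # 420 / 1200 = 0.35

    print(old_rate, new_rate)
    ```

    Here the retention *rate* falls even though the absolute number of returning users went up — so whether the drop matters depends on which of those two quantities actually declined, which is exactly what the commenter is asking about.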

  5. I don’t understand why you say it was a bust?

    Was your goal not to improve the user experience of the sign-up process, make it quicker and easier to understand, and improve the number of sign-ups to the service?

    It sounds to me that you achieved that?

    Sign Up and Retention are two completely different issues. You can’t expect Retention to also be high just because you improved the sign up process.

    I do agree that those that invest more time in signing up are more likely to hang around, but that doesn’t mean the process was a good experience for them either. It also doesn’t mean that the service behind the sign up process was good enough to keep those engaged.

    With a simple sign up process comes more interest. If it takes me 1 minute to do something rather than 5, I’ll be more inclined to give it a try. After that the service needs to engage me so that I’ll stick around and keep using the service. 

    It sounds to me that you have just created more work for yourself, as you now need to find out why those “new users” are not engaging with your service and run a new split test on ways to fix that….

  6. This is exactly WHY people should do A/B testing – to ensure that improving one metric doesn’t come at the cost of other important metrics.

    You are asserting that “with a simple signup process comes more interest”; but you are basing that on a sample size of one: yourself.  When we look at big samples of data, we see that things we “know” to be true … often aren’t.  

    I completely agree that a service needs to be engaging in order to encourage people to stick around.  And that’s a separate problem that we need to tackle.  But “why people do X” (or don’t do X) is not a problem for a new split test — you can’t split test your way to hypotheses.  That will require more qualitative research to get to a good hypothesis FIRST, then we build it out and validate it.

    Which is exactly why I don’t understand why you think it “was a bust”. You’re using the wrong metrics to come to this conclusion.

    I think we actually agree with each other, but the article and its title make it out that, because you had a larger decrease in retention, the change didn’t improve the user experience in the first place, which this doesn’t actually tell us.

    Oh and just to note – I’m not basing my opinion on “a sample size of one” – but on many years of experience running split tests on numerous web properties and applications….
