Should I send a survey? Sure, if–

You may have made it past my first and second attempts to convince you a survey is the wrong tool for you. And fair enough: it can be a valuable tool. I rail against it because it’s too often used to avoid talking to real humans, or as an inadequate substitute for quantitative data.

On my team, we frequently use surveys to fill a specific kind of knowledge gap. Quick surveys are great when:

  • You know the who. You’ve targeted the type of person you need to research – preferably by behavior, job title, or social role rather than demographics.*
  • You know the what. You’ve already zeroed in on specific questions. You aren’t in exploratory, digging-around mode – you know what you need to know.
  • You know the why. Your questions have straightforward answers that don’t require explanations or write-in responses.

For example, suppose you’re seeing dwindling usage from people who’ve downloaded your iPhone app.  Many of your App Store reviews complain about your photo-taking feature (which you didn’t realize anyone felt very strongly about).

You could start scheduling interviews – but it’d be faster to start with a survey to see if this photo issue is widespread or just the voice of a vocal minority.

You know the who: people who’ve downloaded your iPhone app.

You know the what: Did people see your app as a tool for taking photos? Had they tried the feature? How often do they take photos in other apps?

You know the why (or in this case, the ‘why’ might not matter): if people need to take photos and you’ve made the workflow worse, that explains attrition.

You could shoot out a 4-5 question survey to a hundred people and you’d likely have twenty responses within a day. That’s not a huge sample size — but it’s enough to tell whether 2 of 20 respondents care about the photo feature, or 19 of 20 do.
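One way to sanity-check that claim: here’s a minimal Python sketch using a standard Wilson score interval (the method is my choice for illustration; the 2-of-20 and 19-of-20 counts are the hypothetical figures from above):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# The two scenarios from the text: 2 of 20 respondents care vs. 19 of 20.
for caring in (2, 19):
    lo, hi = wilson_interval(caring, 20)
    print(f"{caring}/20 care -> plausible true share: {lo:.0%} to {hi:.0%}")
```

Run it and the plausible true shares come out to roughly 3-30% in the first case and 70-97% in the second. The ranges don’t even overlap, which is all a quick gut-check survey needs to tell you.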

If no one seems to care about the photo issue, you’d likely want to do some interviews or some in-person usability testing to figure out what exactly the issue is.

If everyone cares about the photo issue, you could probably start with an internal teardown (most likely, your design team could easily identify issues and start mocking up fixes immediately, vs. waiting for “proof” of what was wrong).

* What’s wrong with demographics? They’re almost always a bad proxy for something you can measure directly. “Over-65 woman” means what, exactly? Is that a lazy shorthand for “fears technology, living on a fixed income”? Because I know plenty of iPhone-toting grannies with cash to spare. If you want to study “people who behave X way”, then look for evidence of people behaving X way.

Should I send a survey? Still no.

Suppose you read my previous post on why you shouldn’t send a survey, and those examples didn’t apply to you.

You’re not off the hook yet — there are more ways that people screw this up.

Are you abusing rating scales?

On a scale of 1-5, the likelihood that you will get meaningful data out of rating scales is somewhere between a 1 and a 2:

  • Everyone’s definition of what a ‘2’ or a ‘4’ means varies wildly*
  • Rating scales skew towards the extremes because the people who adore you AND the people who are pissed off at you will self-select into the population that answers your survey. People who think you’re “just okay” aren’t motivated to complete your survey anyway.
  • Rating scales skew based on the respondent’s current mood (which is probably driven by plenty of things other than your product)

I like the concept of Net Promoter Score (and certainly, the companies with the highest NPS overlap heavily with companies I am loyal to), but to be honest, a 0-10 rating scale hides a multitude of sins.

What you should care about is binary. Are we good — or not? Do we solve your problem — or not?  Does our product make you feel smarter — or not?

If you’re tempted to include questions with a numeric rating scale, or an Excellent-Good-OK-Needs Work-Awful scale, try Yes/No. Good/Not Good.

Because your customer is going to make a binary decision about your product: to buy — or not.
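And if you do inherit rating-scale data anyway, you can still report it in binary terms. Here’s a minimal sketch of “top-box” scoring (the responses list is made-up illustrative data), which collapses a 1-5 scale the same way Uber effectively does (see the footnote at the end):

```python
# Hypothetical raw responses on a 1-5 scale (illustrative data, not real).
responses = [5, 4, 3, 5, 2, 5, 5, 1, 4, 5]

# "Top-box" scoring: only the top rating counts as "good";
# everything else is "not good".
good = sum(r == 5 for r in responses)
print(f"Good: {good}/{len(responses)} ({good / len(responses):.0%})")
```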

Worse yet, do you plan to use those rating scales to plot changes over time?

The only thing worse than using overly granular rating scales (“On a scale of 1-10, how is our service?”) is when someone thinks they can repeat the same question and use the delta to accurately track changes (“We were a 5.4 last quarter and this quarter we’re a 6.0, so our service has improved!”)

This is not accurate. This is the worst kind of pseudo-science crap research.

Your service did not get 0.6 better over the past 3 months.  Most likely, your respondents didn’t remember what they rated you last time.  They are guessing when they fill out your second survey 3 months later.

Sure, there’s a possibility that your service jumped from a 3 to an 8. If that happened, you really did get better. But that’s not usually what you’ll see.
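To see how easily a 0.6 swing appears out of pure noise, here’s a minimal simulation sketch – the opinion mix and the sample size of 20 are assumptions I made up for illustration, not numbers from the example above:

```python
import random

random.seed(42)  # reproducible runs

def survey_mean(weights, n=20):
    """Mean rating from n respondents drawn from one fixed opinion mix."""
    return sum(random.choices(range(1, 11), weights=weights, k=n)) / n

# A made-up, UNCHANGING opinion mix across the 1-10 scale (nothing improved).
opinion_mix = [1, 1, 2, 3, 5, 6, 5, 3, 2, 1]

# Simulate many pairs of "quarterly surveys" and count large apparent swings.
deltas = [survey_mean(opinion_mix) - survey_mean(opinion_mix) for _ in range(10_000)]
big_swings = sum(abs(d) >= 0.6 for d in deltas) / len(deltas)
print(f"Deltas of 0.6+ with zero real change: {big_swings:.0%}")
```

With those assumptions, roughly a third of the simulated quarter-over-quarter deltas hit 0.6 or more, even though the underlying opinions never changed.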

There’s another problem here, which is that your goal should generally be to do something with these results.

If your service rating went up (or down), you need to know why. You need to know which factors were most important to your respondents. You need to know the dealbreakers and the nice-to-haves. And those are bits of information best gathered from a conversation, not a survey.

* If you’ve ridden Uber or Lyft, you know that they prompt you to rate drivers on a 5-point scale.  This is a UX lie: they are actually giving you a binary rating where only 5 equals “good” and 1 through 4 equal “not good”.  (Drivers are required to maintain a 4.5 rating, which effectively means that rating someone anything lower than a 5 is putting their job at risk.)

Anyhow, the funny point here is that one of my friends, who sees the world in very rational economic terms, had been rating all of his drivers a ‘3’. “3 means average! All my drivers were average; there was nothing good or bad about them,” he explained, “and then I got an email from Uber customer support basically asking why I was so unhappy with everyone. So now I rate everyone a 5, which is meaningless.”