Should I send a survey? Still no.

Suppose you read my previous post on why you shouldn’t send a survey, and those examples didn’t apply to you.

You’re not off the hook yet — there are more ways that people screw this up.

Are you abusing rating scales?

On a scale of 1-5, the likelihood that you will get meaningful data out of rating scales is somewhere between a 1 and a 2:

  • Everyone’s definition of what a ‘2’ or a ‘4’ means varies wildly*
  • Rating scales skew towards the extremes because the people who adore you AND the people who are pissed off at you will self-select into the population that answers your survey. People who think you’re “just okay” aren’t motivated to complete your survey anyway.
  • Rating scales skew based on the respondent’s current mood (which is probably driven by plenty of things other than your product)

I like the concept of Net Promoter Score (and certainly, the companies with the highest NPS overlap heavily with companies I am loyal to) but, to be honest, a 0-to-10 rating scale hides a multitude of sins.

What you should care about is binary. Are we good — or not? Do we solve your problem — or not? Does our product make you feel smarter — or not?

If you’re tempted to include questions with a numeric rating scale, or an Excellent-Good-OK-Needs Work-Awful scale, try Yes/No. Good/Not Good.

Because your customer is going to make a binary decision about your product: to buy — or not.

Worse yet, do you plan to use those rating scales to plot changes over time?

The only thing worse than using overly granular rating scales (“On a scale of 1-10, how is our service?”) is when someone thinks they can repeat the same question and use the delta to accurately track changes (“We were a 5.4 last quarter and this quarter we’re a 6.0, so our service has improved!”)

This is not accurate. This is the worst kind of pseudo-science crap research.

Your service did not get 0.6 better over the past 3 months. Most likely, your respondents didn’t remember what they rated you last time. They are guessing when they fill out your second survey 3 months later.

Sure, there’s a possibility that your service jumped from a 3 to an 8. If that happened — sure, you got better. But that’s not usually what you’ll see.

There’s another problem here: your goal should be to do something with these results.

If your service rating went up (or down), you need to know why. You need to know which factors were most important to your respondents. You need to know the dealbreakers and the nice-to-haves. And those are bits of information best learned from a conversation, not a survey.

* If you’ve ridden Uber or Lyft, you know that they prompt you to rate drivers on a 5-point scale. This is a UX lie: they are actually giving you a binary rating where only 5 equals “good” and 1 through 4 equal “not good”. (Drivers are required to maintain a 4.5 rating, which effectively means that rating someone anything lower than a 5 is putting their job at risk.)

Anyhow, the funny part is that one of my friends, who sees the world in very rational economic terms, had been rating all of his drivers a ‘3’. “3 means average! All my drivers were average, there was nothing good or bad about them,” he explained, “and then I got an email from Uber customer support basically asking why I was so unhappy with everyone. So now I rate everyone a 5, which is meaningless.”