Should I send a survey? Still no.

Suppose you read my previous post on why you shouldn’t send a survey, and those examples didn’t apply to you.

You’re not off the hook yet — there are more ways that people screw this up.

Are you abusing rating scales?

On a scale of 1-5, the likelihood that you will get meaningful data out of rating scales is somewhere between a 1 and a 2:

  • Everyone’s definition of what a ‘2’ or a ‘4’ means varies wildly*
  • Rating scales skew towards the extremes because the people who adore you AND the people who are pissed off at you will self-select into the population that answers your survey.  People who think you’re “just okay” aren’t motivated to complete your survey anyway.
  • Rating scales skew based on the respondent’s current mood (which is probably driven by plenty of things other than your product)

I like the concept of Net Promoter Score (and certainly, the companies with the highest NPS overlap heavily with companies I am loyal to) but — to be honest, a 10-point rating scale is hiding a multitude of sins.

What you should care about is binary. Are we good — or not? Do we solve your problem — or not?  Does our product make you feel smarter — or not?

If you’re tempted to include questions with a numeric rating scale, or an Excellent-Good-OK-Needs Work-Awful scale, try Yes/No.  Good/Not Good.

Because your customer is going to make a binary decision about your product: to buy — or not.
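To make that concrete, here’s a rough sketch (Python, with a made-up ratings list) of how the same handful of responses looks under NPS versus a simple binary cut. Treating 9-10 as “good” is just borrowed from the standard NPS promoter threshold for illustration:

    # Illustrative only: made-up 0-10 ratings, summarized two ways.
    ratings = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]

    # Net Promoter Score: % promoters (9-10) minus % detractors (0-6)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    nps = 100 * (promoters - detractors) / len(ratings)

    # Binary version: did this person think we're good (9-10) or not?
    good_share = 100 * promoters / len(ratings)

    print(f"NPS: {nps:.0f}")                   # NPS: 10
    print(f"'Good' share: {good_share:.0f}%")  # 'Good' share: 40%

An NPS of 10 sounds vaguely positive; “only 40% of respondents think we’re good” is a much harder number to rationalize away.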

Worse yet, do you plan to use those rating scales to plot changes over time?

The only thing worse than using overly granular rating scales (“On a scale of 1-10, how is our service?”) is when someone thinks they can repeat the same question and use the delta to accurately track changes (“We were a 5.4 last quarter and this quarter we’re a 6.0, so our service has improved!”).

This is not accurate. This is the worst kind of pseudo-science crap research.

Your service did not get 0.6 better over the past 3 months.  Most likely, your respondents didn’t remember what they rated you last time.  They are guessing when they fill out your second survey 3 months later.

Sure, there’s a possibility that your service jumped from a 3 to an 8.  If that happened — sure, you got better.  But that’s not usually what you’ll see.
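Even setting memory aside and taking the averages at face value, plain sampling noise can swallow a 0.6 change. Here’s a back-of-the-envelope sketch; the 50 responses per quarter and the roughly 2-point spread are assumptions for illustration, not numbers from any real survey:

    import math

    # Assumed for illustration: ~50 responses per quarter, ratings spread
    # (standard deviation) of about 2 points on a 10-point scale.
    n = 50
    std_dev = 2.0

    # Standard error of the difference between two quarterly averages
    se_diff = math.sqrt(2 * std_dev**2 / n)   # ~0.40
    margin_95 = 1.96 * se_diff                # ~0.78

    print(f"95% margin of error on the change: +/-{margin_95:.2f}")
    # A 0.6 'improvement' fits comfortably inside +/-0.78 of pure noise.

In other words, with samples that size, a quarter-over-quarter swing of 0.6 is indistinguishable from chance, before you even get to the memory problem.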

There’s another problem here, which is that your goal should generally be to do something with these results.

If your service rating went up (or down), you need to know why.  You need to know which factors were most important to your respondents.  You need to know the dealbreakers and the nice-to-haves.  And those are bits of information best learned from a conversation, not a survey.

* If you’ve ridden Uber or Lyft, you know that they prompt you to rate drivers on a 5-point scale.  This is a UX lie: they are actually giving you a binary rating where only 5 equals “good” and 1 through 4 equal “not good”.  (Drivers are required to maintain a 4.5 rating, which effectively means that rating someone anything lower than a 5 is putting their job at risk.)

Anyhow, the funny point here is that one of my friends, who sees the world in very rational economic terms, had been rating all of his drivers a ‘3’.  “3 means average! All my drivers were average, there was nothing good or bad about them,” he explained, “and then I got an email from Uber customer support basically asking why I was so unhappy with everyone. So now I rate everyone a 5, which is meaningless.”

Should I send a survey? No.

My default answer to this question is “no.”

One might be surprised, then, to find that my team at Yammer uses surveys all the time for quick research. They can be incredibly lightweight, fast, and useful, if used appropriately.

That’s a big IF. Most people use surveys to avoid talking to humans, to attempt to prematurely ‘scale’ research, or as a (very poor) substitute for data analytics.

If you’re thinking “can I send a survey to learn this…?”, here are some questions to ask yourself.

Do you know the questions you need to ask?

This may sound silly, but much of customer research and customer development involves learning more about current behaviors and procedures.  It is usually not clear what you need to ask until after you’ve started listening to someone answer your first few questions.

For example, you could ask me “What is your purchasing limit at work?” and I could answer you, correctly, that I can purchase something up to X thousand dollars.

But it’s also true that my team has a larger budget for research tools/services/incentives.  And that travel rolls up to an org-wide budget, and because I frequently travel up to Redmond to give internal workshops, I have ‘spent’ more on travel than that purchasing limit.

Not to mention that some people have different limits for annual purchases vs. one-off purchases, or have different limits for what they can put on a corporate credit card vs. what they can get a PO approved for. There are literally dozens of potential questions rolled up in that one.

None of which you’d have learned by asking me a survey question.  You’d have been better off scheduling a 10-20 minute interview.

On the other hand, suppose you’re doing some introductory customer development around kids’ toys.  You might learn enough from a straightforward question like “In the past month, how much have you spent on toys (for your kids or others)?” to determine whether or not to continue (assuming you didn’t ask that question in December, which would likely have given you skewed data).

Do you know the probable answers to the questions you need to ask?

Again, this may sound silly — why would you need to ask anything if you already knew the answer?!

Let’s say you want to know what tools people use for project management.  So you list options for all the project management tools you know of. Maybe you even apply some rigor and choose them based on number of downloads or some analyst report.

Except some percentage of respondents use a tool only very lightly because 90% of the time they track things on a whiteboard or via email or even pencil-and-paper. So they’ll choose “Pivotal Tracker” because they do one task in it, even though they wouldn’t really miss it if it were gone.

OR some respondents will not actually know what tool they use. (It is amazing how many people do not know the name of a piece of software and only recognize it by the icon they double-click on.) But they will think, “well, I see all these options, and one of them must be right”, and they will GUESS, and odds are they will guess INCORRECTLY.

You may think that a freeform text field will solve these problems (after all, people can just leave it blank if they don’t know! People can specify that they use X tool in this situation and Y tool in that one!), but it really doesn’t.  People skip freeform text boxes when they can; if you make them required fields, you’ll get a ton of gibberish data.

For all of these cases, you’d more likely learn what you needed through a back-and-forth conversation.  You’d be able to learn which tools they use in which situations, why they switched away from that tool last year, who makes the decisions on tool usage, or that the respondent gets assigned tasks but never actually opens the program himself.

Use these 2 criteria — do you know the questions you need to ask, and do you know the probable universe of answers — and you’ll probably kill more than half of your surveys before you can pop open a SurveyMonkey tab.