Quick Method for Testing Copy Variations

When you’re coming up with the copy for a critically important element in your product (e.g. the “Sign Up” or “Buy Now” button), it’s obviously worth the effort to run a split test. After all, if “Complete Your Purchase” outperforms “Buy Now” for some reason, you need to know that, or you risk leaving dollars on the table.

It’s less clear-cut for actions, buttons, links, and navigational elements that aren’t mission-critical. You still want some confidence that you’ve chosen clear language, but you probably don’t have the time or resources to run a split test (or a task-based usability test) for every one of them.

Here’s the quick and dirty method we’ve been using with a tool called Usaura:

Come up with 2-3 alternative phrases, each of which seems breathtakingly obvious/logical to at least one product manager, engineer, designer, or writer. (You’re probably just as good at that part as we are.)

e.g. “Untrack in Inbox” vs. “Stop Following in Inbox”

Find a longer way to describe the action that will happen when someone clicks on that button or chooses that navigation option. This can be as long as two sentences.

You’ll use that same text as the first screen of each test variation. For the Inbox example above, it might read something like: “You’re getting too many notifications about this email thread. Stop its new messages from appearing in your inbox.”

For each alternative, create a quick mockup of the actual interface where the only difference is the button label (the same sort of assets you’d create if you were going to run an A/B or A/B/C test).
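If you want to script that step, here’s a minimal sketch using Pillow; the base screenshot name, label position, and variant filenames are all assumptions for illustration:

```python
# Stamp each candidate label onto the same base mockup so the label is
# the only thing that varies between assets. All filenames and
# coordinates are hypothetical.
from PIL import Image, ImageDraw, ImageFont

LABELS = ["Untrack in Inbox", "Stop Following in Inbox"]
LABEL_POS = (390, 300)  # top-left of the label area on the blank button

font = ImageFont.load_default()
for i, label in enumerate(LABELS, start=1):
    img = Image.open("base.png").convert("RGB")
    draw = ImageDraw.Draw(img)
    draw.text(LABEL_POS, label, fill="white", font=font)
    img.save(f"variant_{i}.png")  # one asset per test variation
```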

Get each test variation in front of at least 10 (and preferably 30-40) people. Usaura measures where people click and how quickly, and shows you a heatmap of the results.

In a bad test result, there’s no clear clustering of clicks, which suggests that people didn’t see ANY button label that looked like it would do what they needed.
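If you’d rather not eyeball the heatmap, one crude way to put a number on the scatter is the mean distance of clicks from their centroid. A quick sketch, with made-up click coordinates:

```python
# Quantify click scatter as the mean distance from the centroid.
# The (x, y) click coordinates below are hypothetical placeholders.
from statistics import mean

clicks = [(412, 305), (118, 90), (640, 402), (205, 512), (430, 298)]

cx = mean(x for x, _ in clicks)
cy = mean(y for _, y in clicks)
spread = mean(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in clicks)
print(f"mean distance from centroid: {spread:.0f}px")
# A spread that's large relative to the button size means the clicks
# never clustered, i.e. no label looked like the right target.
```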

In a better test, we’d see more than 50% of clicks settle on our defined “success” target. The completion time (22 seconds in one of our better runs) is often faster as well, suggesting that people were able to quickly skim the screen and aim their mouse at the right target.
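To make that “more than 50%” call less hand-wavy, you can count how many clicks landed inside the target’s pixel bounds and compare median times across variations. Another sketch; the data format and every number in it are invented:

```python
# Success rate and median time for one variation. Each tuple is
# (x, y, seconds-to-click); all values are hypothetical.
from statistics import median

results = [(415, 312, 14.2), (409, 305, 9.8), (122, 88, 31.0),
           (421, 318, 11.5), (418, 309, 8.9)]
TARGET = (380, 290, 460, 330)  # left, top, right, bottom of the "success" button

hits = sum(1 for x, y, _ in results
           if TARGET[0] <= x <= TARGET[2] and TARGET[1] <= y <= TARGET[3])
print(f"success rate: {hits / len(results):.0%}")
print(f"median time: {median(t for _, _, t in results):.1f}s")
# We'd call a variation the winner when its rate clears 50% and its
# median time beats the alternatives.
```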

Precise and scientific? Nope. But these tests are fast (we can easily get enough people to do the 30-second task and have results within the same business day), and they’re a great lightweight way to settle opinion debates over copy. They can also reveal when ALL the alternatives are bad (i.e. none of the variations performs well), which helps convince people to throw out all the bad options and start over.