Better Product Managers, and Product Management

“What is the most important thing you could be working on right now?”

and

“What’s keeping you from working on that?”

When you ask these questions of people on your team, there’s basically a 2×2 matrix of possible answers:

  • Green: working on the most important thing, and nothing is blocking them
  • Orange: working on the most important thing, but blocked
  • Blue: working on something less important, but getting it done
  • Red: working on something less important, and blocked

The green square is amazing.  It’s where you assume the people on your team are working, all the time, because you’re such an awesome manager that you perfectly communicate all of your priorities and help them clear the obstacles out of their way so they can breeze through their jobs.

But probably, if you actually ask, you’ll find that the people on your team are somewhere in the orange square.  This is manageable, though not permanently, because at least they’re confident that they’re working on the right things.  When you ask people, “What’s the most important thing you could be working on?” and you agree with their assessment, that’s a good sign.

But a lot of people won’t necessarily volunteer the next part – they won’t admit that they’re blocked.  Or they may feel like it’s their responsibility to get UN-blocked, that asking their manager for help is a sign of weakness or lack of capability.  So you have to ask this explicitly.  Sometimes the blocker is money (e.g. for software, tools, contractors) — which is usually trivial compared to the cost of an only-partially-utilized employee.  Sometimes it’s something where I can help them force a decision.  Sometimes it’s something they’ll need to work through on their own, but at least I can give them the acknowledgement that they’re headed in the right direction.

The red square is basically an F grade for you as a manager.  You’ve got someone doing pointless things, poorly.

But the trickiest of all is the blue square, because people slide into it easily and often accidentally.  And they may not even realize that they’re working on pointless tasks.  Getting stuff done is usually rewarded.  It carries the intrinsic reward of making us feel smart.  And as managers, if the people on your team aren’t complaining, it’s easy to think “if it ain’t broke, don’t fix it.”

I can’t claim that everyone is in the green square right now, but asking these questions, listening, and making changes is helping us move up and to the right.


Stop me if you’ve heard this one.

Eight people walk into a room, over the course of three or four hours, and ask a series of usually predictable, often repetitive, and possibly useless questions while the applicant on the other side of the table tries to decide whether they should answer honestly or lie like crazy to make themselves sound good.  And somehow at the end of this, we expect to know whether the applicant is someone we should hire or not!

I used to hate interviewing people.  I never felt prepared, I always felt like I was scrambling for ‘good questions’, and worse yet — I usually didn’t come out of an interview feeling confident about a hiring decision.  Don’t get me wrong, I’ve gotten some amazing direct reports and coworkers from my past interviews.  But I always used to feel like that was mostly dumb luck.

At some point in a past job I did an “interview training” workshop.  Aside from covering the obvious “don’t ask questions that are illegal”, it was useless in helping me become a better interviewer.  It was valuable in one way, though — it made me realize that interviewing is not a skill that makes sense to learn in a vacuum.

We’re still iterating on our interview process, but it’s been working pretty effectively lately.  And that’s because we’ve left almost nothing to chance.  My team, as a team, did a ton of brainstorming and prioritizing and explicitly looking at what worked and didn’t.

The steps that have led up to good, effective interview slates look something like this:

  • If we’d already hired the perfect person for this job role, what would they be like? (technical skills, soft skills, perspectives on the world, ability to work well within our lightweight/fast processes, ability to round out the team)
  • Prioritize a list of “things we need to learn about this specific candidate” and write it out explicitly in a shared note
  • Work backwards to figure out how we’re going to learn that stuff
  • Explicitly divide up questions, exercises, and interview styles between interviewers
  • Towards the end, compare “what we know” against “what we hoped to learn about this candidate” so the final interviewer can tailor their session to getting those answers
  • Afterwards, discuss how effective the questions and exercises were – and adapt to make sure we’re giving candidates the best chance at shining

Each interview is different by necessity.  We put together a new shared note for each candidate so we have a clear list of what we’re out to learn and a single place to consolidate feedback throughout the day.

Even with explicitly dividing up questions, we still end up repeating the same question sometimes.  But it feels much, much closer to assessing a candidate on the right criteria vs. googling interview questions and hoping for dumb luck.


I go a little bit crazy when I hear the words “why can’t they just…”

Why?  Because they’re usually the start of a conversation that goes something like this:

…”Why can’t they just [use the mobile website instead of an app]?”

…”Why can’t they just [change their font size manually]?”

…”Why can’t they just [open up a second browser window]?”

…”Why can’t they just [try that button and FIND OUT what it does]?”

In other words, it’s expecting people to behave rationally.  It’s expecting people to actively seek to understand how a technology works, versus simply hiring it to get a job done.  And that’s just not the way people function.

We take shortcuts.  We’re influenced by our environments.  We have irrational, emotional reactions.

(Earlier this week, in bed with a bad cold but trying to work nonetheless, I almost cried when I completed the first task I put into Trello and there was no satisfying user interaction – nothing glowed, nothing x’d itself out, no confirmation message.  I wanted affirmation, dammit.)

You see, there are two philosophies when it comes to building software.

“If we build in more instructions and offer training and shame people for doing the ‘wrong’ things, they’ll behave the way we want”

or

“Let’s accept that people will not take the extra step, no matter how simple or rational it seems.  Accept that as a first principle, and try to build an amazing experience in spite of it.”

There are still a lot of people stuck in the first philosophy, but it’s not working out so well. Choice by choice, download by download, people are flocking to the tools built by people who live the second philosophy.

OK, OK, I have to share the absolute worst one of these I’ve ever seen:

“I can’t believe all these people complaining that our app doesn’t have turn-by-turn directions!  Seriously?  Why can’t they just look up all the directions in advance and memorize them?  How often do people really drive anywhere that has more than 10 direction steps?   Or how often are they driving without a passenger in the car who could help them read off directions?”

I wish I were making that up.


When you’re coming up with the copy for a critically important element in your product – e.g. the “Sign Up” or “Buy Now” buttons – it’s obviously worth the effort to run a split test.  After all, if for some reason “Complete Your Purchase” outperforms “Buy Now”, you need to know that or risk leaving dollars on the table.
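If you do run that split test, deciding whether the difference is real is quick arithmetic.  Here’s a minimal sketch – a standard two-proportion z-test in plain Python, with completely made-up conversion counts standing in for real data:

    import math

    # Hypothetical split-test counts: each label shown to 5,000 visitors.
    a_conv, a_n = 210, 5000    # "Buy Now"
    b_conv, b_n = 255, 5000    # "Complete Your Purchase"

    p_a, p_b = a_conv / a_n, b_conv / b_n
    pooled = (a_conv + b_conv) / (a_n + b_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / a_n + 1 / b_n))
    z = (p_b - p_a) / se

    print(f"lift: {p_b - p_a:+.2%}, z = {z:.2f}")  # |z| > 1.96 ~ significant at 95%

With these numbers, z comes out around 2.1, so the “Complete Your Purchase” variant is probably a real improvement and not noise.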

It’s less clear-cut for actions, buttons, links, and navigational elements that are not mission-critical.  You still want some confidence that you’ve chosen clear language, but probably don’t have the time and resources to run a split test (or a task-based usability test) for each of these instances.

Here’s the quick and dirty method we’ve been using with a tool called Usaura:

Come up with 2-3 alternative phrases, each of which seems breathtakingly obvious/logical to at least one product manager, engineer, designer, or writer.  (You’re probably just as good at that part as we are.)

e.g. “Untrack in Inbox” or “Stop Following in Inbox”

Find a longer way to describe the action that will happen when someone clicks on that button or chooses that navigation option.   This can be as long as two sentences.

You’ll use that same text for the first screen of each test variation:

For each alternative, create a quick mockup of the actual interface where the only difference is the button label (the same sort of assets you’d create if you were going to run an A/B or A/B/C test).

Get each test variation in front of at least 10, preferably 30-40 people.   Usaura measures the speed of people clicking and shows you a heatmap.  Here’s what a bad test result looks like:

You’ll notice there is no clear clustering of results – suggesting that people didn’t see ANY button label that looked like it would do what they needed.

In a better test, we’d see more than 50% of clicks settle on our defined “success” target.  The time (here showing 22s) is often shorter as well, suggesting that people were able to quickly skim and aim their mouse at the right target.

Precise and scientific?  Nope.  But these tests are fast – we can easily get enough people to do the 30-second task to get results within the same business day – and they’re a great lightweight way to settle opinion debates over copy.  They can also reveal when ALL the alternatives are bad – i.e. no test performs very well at all – which helps convince people to throw out all the bad options and start over.
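One caveat worth quantifying: with 10-40 testers, the success rate you observe is fuzzy.  A quick way to see how fuzzy is to put a confidence interval around it – here’s a minimal sketch in plain Python (a standard Wilson score interval; the 18-clicks-out-of-30 result is hypothetical):

    import math

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score interval for an observed proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    low, high = wilson_interval(18, 30)  # e.g. 18 of 30 testers hit the target
    print(f"observed {18/30:.0%}, plausibly anywhere from {low:.0%} to {high:.0%}")

Eighteen out of 30 testers looks like 60%, but it’s statistically consistent with anything from roughly 42% to 75% – which is exactly why we treat these as quick directional checks, not as science.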
