The Interview That Explains Everything
"What features would make this better?"
This question has launched a thousand roadmap items that nobody used or cared about. Yes, users know what they need. They know the struggle. They know what progress looks like.
But designing the solution isn't their job. It's yours. And when you ask a customer to do it for you, they'll try to help — and the answers will be shaped by what they've already seen, what's top of mind, and what they can imagine, which is a tiny fraction of what's possible.
These days, the question often comes dressed up as "What AI features would you want?" or "What should we automate for you?" The AI gold rush may prompt new questions, but the problem remains the same.
JTBD interviews work differently. Instead of asking someone to design a solution, you ask them to reconstruct something they already did — a real decision, a real switch, a real moment when the old way stopped working and they went looking for something new. They surface the need. You figure out the solution.
That distinction matters even more with AI because users may ask for a chatbot, a summary, an agent, or an automation when what they really need is confidence, control, speed, relief, a second set of eyes, or a safer way to act. The request is a clue. It is not the answer.
This article shows you how to get started with JTBD interviews. Everything else in this category builds on the same foundation: how to ask the right questions, how to listen for the forces that drive real decisions, and how to turn what you hear into insight you can actually use to build, position, and design.
The better you get at this, the less you'll rely on feature requests, satisfaction surveys, and guesswork — and the more your product decisions will be grounded in what people actually do instead of what they say they'd do.
Bad Data In, Bad Data Out
"Would you use this?" gets a polite yes from almost everyone. "What features would you add?" gets a wish list disconnected from real behavior. "How important is X to you?" gets a rating that has no relationship to whether they'd actually switch products to get it.
The Segway is the classic example. Market research said people wanted a revolutionary personal transportation device. The research was based on what people said they'd do.
What they actually did was look at the price, feel embarrassed to ride one in public, and keep driving. Every signal from hypothetical research pointed to massive demand. Every signal from actual behavior pointed to niche adoption.
The problem isn't that people lie. The problem is that the interview format invites speculation instead of reconstruction. And speculation is a terrible basis for product decisions.
What Makes a JTBD Interview Different
A JTBD interview doesn't ask people to imagine. It asks them to remember.
The subject isn't the product. The subject is a decision the person already made — a moment when they switched from one way of doing things to another. Bought a product. Adopted a tool. Hired a service. Changed a behavior.
The interview reconstructs the timeline of that decision: what was happening before, what triggered the search, what they tried, what forces were pulling them toward the new thing, what forces were holding them back, and what finally made them commit.
This works because past behavior is the best predictor of future behavior. When you understand why someone switched — the specific situation, the specific trigger, the specific forces — you understand the pattern that will produce the next switch. You're not guessing at what people might want. You're studying what people actually did, and building from there.
A standard interview starts with the product. "What do you think of our dashboard?" "How do you use the reporting feature?" "What would you change about the onboarding?" Every question anchors the conversation to your product's structure.
A JTBD interview starts with the situation. "What were you doing before you found us?" "Walk me through the week you decided to look for something new." "What was the struggling moment?" The product might not come up for twenty minutes — because the interview is about the user's life, not your interface.
A standard interview asks for preferences. "Do you prefer lists or boards?" "Would you want a mobile app?" "How important is real-time collaboration?" Preferences describe what someone thinks they like. They don't describe what someone will do under pressure.
A JTBD interview asks about the forces of change. What pushed you away from the old way? What pulled you toward the new thing? What habit almost kept you from switching? What anxiety almost stopped you? These four forces — push, pull, habit, anxiety — explain the mechanics of every adoption decision. And they're invisible in a standard interview because nobody asks about them.
A standard interview produces feature requests. The output is a list of things people said they want, prioritized by how many people mentioned them.
A JTBD interview produces a causal story. The output is a narrative of why someone acted — what situation they were in, what made the status quo intolerable, what progress they were trying to make, and what the product needed to deliver to earn the switch. That narrative tells you what to build, how to message it, how to onboard, and what anxieties to address — because it's grounded in real behavior, not stated preferences.
What You're Actually Listening For
In a standard interview, you listen for feedback. What do they like? What don't they like? What do they want?
In a JTBD interview, you listen for the forces that produced the decision.
Push: what was getting worse about the old way? Not "it was frustrating" — that's too vague and it's been true for years without producing action. What specifically happened that made the status quo unbearable this week? A missed deadline. A visible mistake in front of a client. A new boss who expected more. A moment where the old way failed publicly.
Pull: what made the new solution feel like progress? Not "it had better features" — that's a post-hoc rationalization. What did they see, hear, or experience that made a better future feel reachable? A colleague's recommendation. A demo that made them think "wait, it can just do that?" A screenshot that matched what they'd been trying to build manually.
Habit: what almost kept them from switching? Not "resistance to change" — that's a persona platitude. What specifically was comfortable enough about the old way that they almost didn't bother? Their templates. Their muscle memory. The fact that everyone on the team already knew how to use it. The data that lived there and felt risky to move.
Anxiety: what almost stopped them? Not "concerns about ROI." What specifically scared them? That the migration would break something. That they'd look foolish for championing a switch that didn't work. That the new tool would turn out to be harder than what they had. That they'd invest the effort and end up back where they started.
These four forces are the mechanics of every switching decision. They're present in every adoption, every churn event, every upgrade, every expansion. And they're invisible unless you specifically ask about them — not by name, but through the timeline of what actually happened.
This Changes What You Learn
A standard interview with ten users produces a list of feature requests, a collection of opinions about the UI, and a general sense of satisfaction or dissatisfaction. It tells you what people think about your product.
A JTBD interview with ten users produces something different: a pattern of situations, triggers, forces, and definitions of progress. It tells you why people act.
And "why people act" is what determines your messaging (what should we speak to?), your onboarding (what first win do we need to deliver before the motivation fades?), your roadmap (what are we enabling and where does the experience fail to deliver it?), and your competitive strategy (what are we really replacing — and what force is keeping the incumbent hired?).
The heuristics lens sharpens this further. Once you understand the job, the forces, and the switching story, you can evaluate whether the experience you've designed actually serves what you've learned.
Does the product speak the user's language — or does it speak the product's own internal language? Does it show progress toward the outcome — or leave the user guessing? Does it reduce anxiety at the moments anxiety peaks — or create new uncertainty? Does it match how the job naturally unfolds — or impose a structure the user has to fight through?
The interview tells you what the user needs. The heuristics tell you whether the experience delivers it. Together, they close the loop between research and design.
The Tradeoff
JTBD interviews tend to take longer than standard interviews because reconstructing a real decision takes time. You can't rush someone through the timeline of a switch in fifteen minutes. Thirty to sixty minutes is typical. And you need to resist the urge to steer toward your product.
They also produce findings that are harder to act on than feature requests. "Users want bulk export" is easy to put on a roadmap. "Users are switching because the experience doesn't produce confidence at the moment they need to present the output to leadership" requires more thinking, more design judgment, and more willingness to reconsider what the product is actually for.
But the tradeoff is worth it. Ten good interviews will tell you more about what to build and why than a hundred satisfaction surveys. They tell you the thing surveys can't: why people actually do what they do.
And that's the only research that can predict what they'll do next.