JTBD Adoption Fails When It Only Lives in Research

You know how this goes.

The team runs a round of switch interviews. They synthesize the findings. They produce a beautiful deck — job statements, forces diagrams, struggling moments captured with real user quotes. It gets presented to the team. People nod. It gets shared in Slack. It gets referenced in a couple of conversations over the next two weeks.

Then it disappears behind sprint tickets and release dates.

The roadmap doesn't change. The design reviews don't change. The sprint planning conversations don't change. The prioritization criteria don't change. The product ships the same features it was going to ship before the research happened — just with new vocabulary attached to the same decisions.

The research was probably good. This is an adoption failure: the team learned something true about why users show up and what progress they're trying to make, and then kept making feature-first decisions anyway.

This is the single biggest organizational failure mode for JTBD. Not bad research. Not wrong job statements. Insight decay — where understanding stays trapped in the research artifact and never reaches the rituals where decisions actually get made.

Why good research decays

A deck can't argue with a stakeholder. A document can't push back on scope. A job statement can't compete with "we need to ship."

That's the core problem. Research produces insight. Insight lives in a document. The document has no power in the rooms where decisions happen — sprint planning, roadmap reviews, design critiques, prioritization discussions. Those rooms have their own language, their own criteria, their own momentum. And unless the research changes what happens in those rooms, it's content, not work.

The decay usually follows a predictable pattern. The research team produces findings. The findings get translated into high-level themes: "users need confidence," "we should reduce friction," "clarity is important."

Those themes are true but not actionable — they're too abstract to tell anyone what to change, what to build, or what to cut. So the themes become wallpaper. They show up in decks. They get nodded at. They don't change what ships.

The team has "done JTBD." The product hasn't changed.

The places research needs to reach

If job insight is going to survive contact with real product work, it has to show up in specific places — not as a reference document people can consult, but as a set of inputs that change how those places operate.

Roadmap conversations. A roadmap item that can't explain which job step it serves and what progress it enables hasn't earned its place. Job insight should be the filter that determines what enters the roadmap, not the label that gets attached after the roadmap is set.

Sprint planning. The sprint goal should describe what job the team is targeting, not what feature they're shipping. "Reduce abandonment at first use by removing the setup tax" is a job-informed sprint goal. "Complete onboarding redesign" is an output goal that job insight didn't touch.

Design reviews. The first question in a design review should be "what is the user trying to accomplish in this moment, and does this screen help?" — not "does this look right?" If the job never enters the room, the review defaults to taste.

Prioritization discussions. When the team is choosing between competing ideas, the tiebreaker should be "which one unblocks progress on the job fastest?" or "which one reduces the biggest anxiety?" — not "which one is easiest to ship?" or "which stakeholder is pushing hardest?"

Onboarding decisions. Job insight tells you what the user's first moment of progress should feel like — and whether your onboarding delivers it or just runs people through configuration. If onboarding is designed without reference to the job, it's optimizing setup completion instead of progress toward the outcome the user hired the product for.

Messaging and positioning. If the homepage describes the product instead of the struggle the buyer is living, insight never reached marketing. Positioning that works starts with the job. Product description starts with the feature list.

Customer support and feedback loops. Support teams hear the job every day — in how customers describe their problems, in what they're trying to accomplish, in what workarounds they've built. If that signal isn't flowing back into product decisions, you have a feedback loop that's broken at the most valuable point.

When job insight reaches all of those places, the team makes different decisions. When it stays in research, the team makes the same decisions with better vocabulary.

It's not your fault

It's tempting to blame the product team for ignoring research, or the research team for not being persuasive enough. Usually neither is the problem.

The problem is structural. Research produces a deliverable — a deck, a document, a readout. That deliverable gets consumed in one moment (the presentation) and then competes for attention against everything else the team is dealing with. Sprint commitments. Stakeholder requests. Bug reports. Competitive pressure. Deadlines.

The deck doesn't have a seat at the table. It was invited once, presented its case, and left. Every decision after that is made without it — not because anyone disagrees with the findings, but because the findings aren't embedded in the decision-making rituals.

That's the gap. The team knows the job. The rituals don't reflect it.

Enter JTBDUX

This is arguably the strongest case for JTBDUX — which brings JTBD and UX together into a shared decision language that works across roles.

Plain JTBD is good at explaining why someone shows up. It captures the struggling moment, the forces, the desired progress. But that insight is fragile. It lives in research language and research formats. It doesn't naturally translate into the language of sprint planning, design critique, or engineering tradeoffs.

JTBDUX is the operational layer that makes job insight reusable. It provides a set of questions that work in everyday decision-making — not just in research synthesis. Does the experience speak the language of the job? Does it match the user's mental model of how the work should flow? Does it reduce anxiety at the moment anxiety peaks? Does it show progress in terms the user would recognize?

Those questions aren't research questions. They're sprint planning questions. Design review questions. Acceptance criteria questions. They're what prevents the deck from becoming wallpaper — because they give every role on the team a way to use the insight in the room where their decisions happen.

When a designer asks "what anxiety is peaking at this moment in the flow?" — that's job insight surviving contact with real design work. When a PM asks "does this roadmap item help users make progress on the job, or does it just add a capability?" — that's job insight surviving contact with prioritization.

When an engineer asks "is this a high-stakes moment where we need graceful error handling, or a low-stakes one where a generic message is fine?" — that's job insight surviving contact with engineering tradeoffs.

JTBDUX doesn't prevent insight decay by producing better documents. It prevents decay by giving the insight somewhere to live after the deck is closed — in the questions the team asks every day.

Is it actually working?

If you want to know whether your research has survived contact with your organization, ask one question: did the research change any decisions that would have gone differently without it? Not "did the team reference it?" Not "did it show up in a deck?" Not "did people say 'JTBD' in a meeting?"

Did it change what got built, what got cut, how a design was evaluated, or how a sprint was planned?

If the answer is yes, the lens is alive. If the answer is no — if the roadmap looks the same as it did before the research happened, just with job labels attached — you have insight decay. And the fix isn't more research. It's changing the rituals where the insight needs to show up.
