Closing the Gap Between JTBD Theory and Action

You think you’ve adopted Jobs-to-Be-Done because you ran a workshop, created some job stories, or started using the language. Someone on the team read "Competing Against Luck." The slide deck now has a four forces diagram. People say "job" in meetings instead of "use case."

I hate to break it to you, but that's awareness, not adoption.

Awareness is knowing the vocabulary. Adoption is when the vocabulary changes what happens. And the gulf between those two things is where most JTBD efforts go to die: the framework never survives contact with the rituals where meaningful decisions actually get made.

And when you're trying to stay ahead in the AI gold rush, this can have dire consequences for your product's retention. You can sound current, move fast, and still ship generic AI slop if the new language never changes how you evaluate user intent, trust, risk, or progress.

This is the question that matters: has JTBD changed how your team makes decisions? Not how they talk. Not what they put on slides. How they decide what to build, what to cut, how to evaluate design work, how to use research, how to frame tradeoffs, how to decide what should be automated, and how to kill weaker ideas faster.

If the answer is no, you don't have an adoption problem. You have an operationalization problem. And that's a different kind of problem — one that workshops can't solve.

That's what this category is about. Not teaching — you can learn the concepts in an afternoon. The hard part is making JTBD change what happens on Monday morning: how sprints get planned, how designs get reviewed, how research gets used, how engineering frames tradeoffs, how weaker ideas get killed faster.

Every article in this category helps you turn JTBD from a theoretical framework into a lens you can use to build and ship products.

The gap between learning JTBD and using it

JTBD is unusually easy to learn and unusually hard to operationalize.

The concepts are intuitive. People hire products to make progress. They switch when the current way becomes intolerable and the alternative becomes credible. There are four forces — frustration with the old way, attraction to the new way, comfort of habit, anxiety about change. Understanding a job means understanding the situation, the stakes, and the progress the user is trying to make.

Most smart people can grasp this in an afternoon. That's the appeal. It's also the trap — because understanding JTBD creates a feeling of adoption that isn't real yet.

Real adoption isn't a knowledge state. It's a behavioral change. And behavioral change requires the lens to show up in the places where the team actually works: sprint planning, design reviews, research handoffs, engineering discussions, roadmap debates. If it only shows up in workshops and slide decks, it's a theory the team learned, not a lens the team uses.

What real adoption actually looks like

You can tell whether a team has actually adopted JTBD by listening to what happens in their regular meetings — not the special workshops, but the Tuesday standup, the Thursday design review, the sprint planning session, the roadmap discussion.

Planning conversations shift from outputs to jobs. Instead of "what are we shipping this sprint?" the team asks "what job step are we helping with? What friction are we reducing? What trust gap are we closing?" Features still get built, but they enter the sprint because they serve a specific moment in a specific job — not because they were next on the backlog.

For AI products, this changes the question from "what can we automate?" to "what progress are we helping the user make, and what role should AI play in that moment?" Sometimes the answer is automation. Sometimes it's augmentation. Sometimes it's explanation, reversibility, a better default, or no AI at all.

Design reviews become more job-centric. Instead of "does this look right?" the first question is "what is the user trying to accomplish in this moment, and does this screen help?" The feedback changes. "It feels cluttered" becomes "there's information here that doesn't matter for this job at this moment, and it's competing with the information that does."

Research escapes research. The team stops treating research as a deliverable — a deck that gets presented and filed. Job insights show up in sprint goals, design briefs, and acceptance criteria. The PM references the job map when prioritizing. The designer frames flows around job steps. The researcher's work lives in the product, not in a Notion page nobody reopens.

Engineering can use the lens. This one surprises teams. Engineers who understand the job make better tradeoff decisions — they know which edge cases matter because they know which moments carry stakes. "Should we handle this error gracefully or just show a generic message?" depends on whether the user is in a high-anxiety moment or a low-stakes one. An engineer who knows the job can answer that without a meeting.

Better questions get asked without prompting. This is the real sign. Nobody is running a "JTBD exercise." Nobody is facilitating a "jobs workshop." People are just asking questions that are shaped by the lens: "What's the struggling moment this addresses?" "Who's hiring this and why?" "What would make them go back to the old way?" The thinking is everywhere.

Why most teams stall at awareness

If the signs above seem obvious, consider why so few teams get there.

The most common reason is that the team learns JTBD in a workshop but then goes back to the same planning rituals, the same review formats, the same backlog structure, the same success metrics. The rituals didn't change. And rituals are where beliefs become behavior.

A team that plans sprints around feature output will keep shipping features regardless of what they learned about jobs. A team that reviews designs by asking "is this usable?" will keep polishing interfaces that might not serve the job. A team that measures success by activation will keep optimizing the first hire without knowing whether the product earns the second one.

The knowledge is there. The rituals override it.

That's why operationalization isn't about more training. It's about changing what happens in the meetings that already exist — the questions that get asked, the criteria that determine what enters a sprint, the feedback that shapes a design, the definition of "done" that determines when work is actually finished.

Where JTBDUX comes in

JTBD on its own tells you what to pay attention to: the job, the forces, the struggling moment, the desired progress. That's powerful. But it can stay trapped in research and strategy — a set of insights that lives upstream of the actual product work.

JTBDUX — which brings JTBD and UX together into a shared decision language — is what helps job thinking survive contact with real product work. It evaluates user experiences through a set of questions tied to the job: does the experience speak the user's language? Does it match their mental model of how the work should flow? Does it reduce anxiety at the moments anxiety peaks? Does it show progress in terms the user would recognize?

Those questions aren't research questions. They're design review questions. Sprint planning questions. Acceptance criteria questions. Engineering tradeoff questions. They're the bridge between "we understand the job" and "we plan work differently because of it."

Without that bridge, JTBD stays in the research phase. Teams produce beautiful job maps and forces diagrams that sit in a deck while the product team builds whatever was already in flight. The language changes. The decisions don't.

With the bridge, job thinking becomes part of how the team actually works — not as an extra step or an additional ceremony, but as a set of questions that get woven into the rituals that already exist.

What this means for you

If you're reading this and recognizing your team — you've learned the concepts but the decisions haven't changed — the fix isn't more workshops. It's changing what happens in the rooms where work gets shaped.

That means changing how sprints get planned: what questions determine whether work enters the sprint, and what "done" means when it's over. It means changing how designs get reviewed: what the first question is, and what kind of feedback the team gives.

It means changing how research gets used: whether job insights end up in the product or in a filing cabinet. It means changing how engineering frames tradeoffs: whether the team knows the job well enough to make judgment calls without escalating every decision.

Each of those changes is small on its own. None of them require a giant organizational transformation. They require changing one or two questions in a meeting that's already on the calendar.

Most teams don't fail to learn JTBD. They fail to operationalize it. The difference between the two is whether it changed what you know or changed what you do.
