Don't Add JTBD to Sprint Planning. Rewrite Sprint Planning Around It.

Teams tend to adopt JTBD by adding a token question to an existing ritual.

Somewhere between estimating story points and assigning tickets, someone asks: "So... what's the job here?" The room pauses politely. Someone offers a sentence that sounds vaguely user-centered. It gets written in a Confluence doc. The sprint continues exactly as it would have without the question.

Two quarters later, leadership asks why the investment hasn't changed anything. The answer is obvious from the outside: you didn't change the planning conversation. You added a garnish to the old one.

JTBD doesn't work as a box to tick. It works as a lens that changes what the team argues about, what gets prioritized, and what "done" means. If the sprint planning conversation still centers on "what are we shipping?" and the job question is a sidebar, nothing will change — because the planning structure itself is designed around output, and JTBD is about progress.

You don't add JTBD to sprint planning. You rewrite sprint planning around it.

The problem with output-centered planning

Sprint planning, as most teams practice it, is a conversation about work. What are we building? How long will it take? What are the dependencies? Who owns it? Can we fit it in the sprint?

Those are real questions. They matter. But they're all about the team's output — not about what changes in the user's world when the work ships.

That's how you end up with sprint goals like "Launch dashboard redesign" or "Ship integration with Salesforce" or "Complete onboarding flow v2." Each one describes what the team will produce. None of them describe what the user will make.

And when the goal is output, the definition of success becomes output. The sprint "succeeded" because the thing shipped. Whether it helped anyone make progress on the job — whether it reduced friction, closed a trust gap, or made the next step obvious — is a separate conversation that usually doesn't happen until retention numbers look bad weeks later.

What changes when you plan around the job

The shift isn't adding a question to the existing agenda. It's changing the questions the team asks before work enters the sprint.

Instead of starting with "what are we shipping this sprint?" the team starts with a different set of questions:

  • What job step are we helping with?
  • What friction are we reducing?
  • What trust gap are we closing?
  • What progress are we helping users make?

Those questions do something a feature list can't: they force the team to name why the work matters in terms the user would recognize. Not "we're building a notification system" but "we're helping users know something changed without having to check manually." Not "we're redesigning the settings page" but "we're reducing the setup tax that's blocking first-time users from making real progress."

A JTBD lens — which evaluates user experiences through a set of questions tied to the job the user is trying to do, like whether the experience speaks the user's language, shows progress, and reduces anxiety at the moments it peaks — is what makes this practical instead of philosophical.

It turns job insight into a usable planning filter. Without it, "understand the job" stays in a research deck. With it, the job shapes what gets built, how it gets built, and how you know if it worked.

Sprint goals that sound different because they mean different things

When sprint planning is organized around the job, the sprint goal itself changes.

An output-centered sprint goal sounds like: "Complete onboarding redesign." It tells you what the team is building. It doesn't tell you what the user gets.

A job-centered sprint goal sounds like: "Reduce abandonment in first use by removing the setup tax." Or: "Increase confidence at the moment of commitment." Or: "Make progress visible so users know it worked."

Those goals are harder to write because they require the team to know what the user is struggling with. They're also harder to declare "done" — because you can't just point to a shipped feature and call it success. You have to ask whether the friction actually decreased, whether the trust gap actually closed, whether the user actually made more progress than they did before.

That accountability is the whole point. It's also why teams resist it — because output goals are easier to hit and easier to celebrate. But output goals that don't connect to job progress are how teams ship a lot and move nothing.

How it changes what gets into the sprint (and what gets kept out)

The hardest part of sprint planning is usually saying no. Every ticket in the backlog has a reason behind it. Someone requested it. A competitor has it. An engineer is excited about it. A stakeholder is pushing for it.

When the planning conversation is organized around output, every one of those tickets can claim a spot. They're all "work." They're all "progress" in the sense that they produce something.

When the planning conversation is organized around the job, the filter gets sharper. The question isn't "is this good work?" — it's "does this help users make progress on the job they hired us for?"

That question kills nice ideas early. The feature that sounds reasonable in a meeting but doesn't map to a real job step. The competitive checkbox that matches a screenshot but doesn't serve the user's actual job. The power-user request that's really a workaround for a problem the product should solve differently.

It also surfaces work that output-centered planning would miss. The copy change that reduces anxiety at a commitment point. The empty-state redesign that shows progress instead of blankness. The error message rewrite that helps users recover instead of stalling.

That kind of work rarely makes it into output-centered sprints because it doesn't look like a "feature." But it's often the work that most directly reduces friction on the job.

What this looks like in practice

What does this actually change on Monday morning? Sprint planning starts the same way — the team gathers, looks at the backlog, discusses what's next. But the first question changes.

Before any ticket is discussed, someone asks: "What's the job step we're focused on this sprint?" That frames the whole conversation. Every ticket that comes up gets evaluated against it. Does this help with that job step? Does it reduce friction at that moment? Does it close a trust gap the user is hitting? Does it make progress more visible or more repeatable?

Some tickets pass easily. Some get deferred because they serve a different job step and this sprint has a focus. Some get rewritten because the original ticket was framed as a feature ("add export to PDF") and the team realizes the actual job step is "help the user produce something they can defend in a meeting."

The debrief changes too. Instead of "did we ship the thing?" the team asks "did the thing we shipped help users make progress?" That might take a few days to answer. It might require looking at behavior or talking to users. But it closes the loop between what the team built and whether it mattered — which is the loop that output-centered planning leaves open.

This is an adoption problem, not a training problem

Most teams that "know JTBD" still plan sprints the old way. They've read the books. They've done the interviews. They have job statements. They might even have a four forces diagram on a whiteboard somewhere.

But the planning ritual didn't change. And rituals are where beliefs become behavior. If the planning conversation still centers on "what are we shipping," the knowledge stays theoretical — a research artifact that lives in a deck instead of shaping what gets built.

The question isn't "does the team understand JTBD?" It's the harder one: "How do I get my team to actually use this and not just nod along?"

The answer is to change the ritual by rewriting what the team argues about, what earns a spot in the sprint, and what "done" means when the sprint is over.
