Forget Product-Market Fit. Look For Job-Market Fit

Everybody talks about product-market fit. But not a lot of people can explain what it actually means.

Marc Andreessen offered up the canonical definition in 2007: "Being in a good market with a product that can satisfy that market." Sean Ellis operationalized it a few years later with his 40% test — if at least 40% of your surveyed users say they'd be "very disappointed" without your product, you probably have PMF.
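
If you want to see the mechanics, the test reduces to a single ratio. A minimal sketch, assuming survey answers arrive as plain strings (the function name and response labels are illustrative, not from any standard tool):

```python
from collections import Counter

def sean_ellis_score(responses):
    """Share of respondents who would be 'very disappointed' if the
    product disappeared. >= 40% is the usual PMF heuristic."""
    counts = Counter(r.strip().lower() for r in responses)
    total = sum(counts.values())
    return counts["very disappointed"] / total if total else 0.0

survey = [
    "Very disappointed", "Somewhat disappointed", "Very disappointed",
    "Not disappointed", "Very disappointed", "Somewhat disappointed",
]
print(f"{sean_ellis_score(survey):.0%}")  # 50% -> above the 40% bar
```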

Both describe what fit looks like when you have it. Neither one explains what's structurally producing it, why it sometimes erodes, or how to tell the difference between real fit and a flattering set of metrics.

In the AI gold rush, when buyers and users alike are inundated with "AI-powered" everything, telling real fit from fake fit is even more critical. A product can get attention, signups, demos, and usage because it feels futuristic or impressive. But early pull around AI does not automatically mean the product will keep being hired in the long term, and it definitely doesn't mean it's helping anyone make progress in a job.

When you look at products that actually sustain fit over time — not just a good launch quarter, but years of people rehiring the product — they tend to be hyper-specific about the job they serve.

Linear didn't set out to "fix project management." It went after one sharp job: help small, fast-moving dev teams track work without hating their tool.

From a distance, that looks like classic PMF. Strong word-of-mouth, high engagement, loyal users. Through a jobs lens, what it really has is job-market fit: a specific job, in a specific segment, paired with an experience that deserves to be hired over the alternatives.

That reframe changes what you can see. And what you can see determines what you can fix.

This article introduces that reframe: what job-market fit is, what makes it hold or erode, and how to diagnose it instead of just hoping for it.

Each article in this category builds on this foundation and helps you understand whether you have job-market fit, diagnose why it's eroding when the metrics start slipping, and know exactly what to do to strengthen it.

"Market" is doing too much work

Andreessen deliberately kept "market" fuzzy. That was the point — founders obsess too much over product and too little over whether a market exists in the first place.

But for product teams actually building and iterating, "market" is a blunt instrument.

Inside any given market label — collaboration, CRM, dev tools, AI assistants — there are multiple, overlapping jobs:

  • "Keep my team on the same page day to day."
  • "Produce a client-ready report without embarrassing myself."
  • "Clear my inbox before my first meeting."
  • "Summarize what changed since I last checked this project."

Some of those jobs are intense and frequent. Some are rare and low-stakes. Some matter more to buyers; some matter more to end-users. Two products can both claim "we're in the same market" and still be solving completely different jobs.

When you say "we have PMF in the SMB market," you're really saying "some set of jobs for some set of people in SMB seems to be working." That's fine at the portfolio level. It's not very helpful when you're what to build next or predicting whether your current fit will hold.

Job-market fit makes the unit smaller and more honest. It forces you to ask:

  • For which job, exactly, do we seem to have fit?
  • For which segment and context?
  • How intense is that job, and how well does our experience really help people do it?

What job-market fit actually is

Here's a more precise way to say it:

Job-market fit is the state where a product's experience so effectively helps a specific group of people execute a specific job, in a specific context, that they prefer it over all meaningful alternatives — and keep hiring it for that job.

A few things are hiding in that sentence.

The job has to be specific. Not "communication" or "productivity." More like "get a meeting on the calendar without back-and-forth" or "pull together a narrative for my board deck." A job has a before, a struggle, and an after.

The group and context have to be specific. "Developers at early-stage startups who live in their issue tracker" is a different universe from "project managers at a 5,000-person enterprise." Same job label, completely different experience requirements.

It's experience, not features. A product can technically do the job and still be a bad hire if the path through it — defaults, copy, affordances, error states, edge cases — doesn't line up with how people actually try to do it.

The alternatives include everything. The official competitor, the unofficial workaround, the old tool they tolerate, the manual spreadsheet they cling to. Job-market fit only exists to the extent you've given people a reason to fire all of those.

Product-market fit, in this view, is what it looks like from the outside when you have job-market fit on the inside at enough scale.

Three structural levers

Job-market fit makes PMF inspectable by breaking it into three things you can actually reason about: job intensity, experience fit, and switching forces.

Job intensity

Not all jobs are created equal. Some are annoying but ignorable ("clean up my bookmarks"). Some are existential ("close the books correctly so I don't get fired").

When you examine a job, you can ask:

  • How painful is the current way of doing this?
  • How frequent is it?
  • How visible are the stakes if it goes wrong?

High-intensity jobs have a particular feel. People are already hacking workarounds together. They complain unprompted. They'll put up with rough edges if you give them real relief.

If you land on a high-intensity job, PMF tends to look like strong retention (they keep rehiring you), high "very disappointed" scores if you take it away, and willingness to pay that tracks with how much pain disappears — not how many features you ship.

If you land on a low-intensity job, your graphs can still look good for a while — especially in hype cycles — but the fit is fragile.

This is where intensity becomes the real diagnostic. A high-intensity job sustains fit through rough patches, less-than-perfect onboarding, and missing features. A low-intensity job demands a flawless experience, and even then the user might drift away because the job was never pressing enough to build a habit around.

Job intensity is the underlying pressure that makes your PMF metrics move. You want to know what it actually is, not just infer it from usage curves.
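
One way to keep yourself honest about intensity is to score it explicitly. A deliberately crude sketch: the 1–5 scales and the multiplicative weighting are assumptions for illustration, not a validated model.

```python
def job_intensity(pain: int, frequency: int, stakes: int) -> int:
    """Score each question above on a rough 1-5 scale and multiply,
    so a weak answer on any dimension drags the whole job down."""
    return pain * frequency * stakes

# "Close the books correctly so I don't get fired": existential
print(job_intensity(pain=5, frequency=4, stakes=5))  # 100

# "Clean up my bookmarks": annoying but ignorable
print(job_intensity(pain=2, frequency=2, stakes=1))  # 4
```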

Experience fit

Once you know the job, the next question is: how well does your experience actually help someone execute it?

That's where experience fit lives. You can walk the job's path:

  • Trigger: what's happening in someone's world when this job starts?
  • Steps: what do they do, in what order, with what tools?
  • Success: how do they know they've succeeded, functionally and emotionally?

Then you lay your UX on top:

  • Are we meeting them where the job starts, or asking them to do extra translation work?
  • Do our defaults, flows, and surfaces match the way the job naturally unfolds?
  • Are we reducing friction and anxiety, or adding to them?

A product can check every box on a feature comparison chart and still have poor experience fit. It technically does the job, but in a way that feels brittle, confusing, or exhausting. That's where you get half-hearted PMF: people stick around because there's nothing better, not because you're a great hire.

Experience fit is where you have the most leverage as a product and UX team. You usually can't change the intensity of the job. You can radically change how it feels to get that job done.

Switching forces

The third lever is the surrounding physics: what makes someone switch into your product for a job, and what makes them stay or leave?

The four forces from jobs-to-be-done thinking can help you figure this out:

  • Push — the frustrations with the current solution that make someone look for options.
  • Pull — the promise of your product's outcome and experience.
  • Habit — the inertia of existing routines and tools, even if everyone complains about them.
  • Anxiety — fears about switching: loss, reliability, social risk, "what if this doesn't work?"

You can have a high-intensity job and a decent experience and still fail to achieve durable fit, because habit is too strong or anxiety is unaddressed. A deeply embedded incumbent that the whole org is trained on. An AI tool that feels like a black box in a critical workflow.

Conversely, you can sometimes win with a merely "good enough" experience if push and pull are overwhelming and habit/anxiety are low. That's when you see products grow faster than their polish would suggest.

When you map PMF back to these forces, you get a better explanation for your numbers. Slow growth but high retention? Push and pull are probably strong, but habit and anxiety are still barriers — an onboarding and trust problem, not a "no PMF" verdict. Fast trial signups but poor retention? Pull is strong, but the job's intensity is lower than you thought or the experience fit is shallow.
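
That mapping from signals to diagnoses is mechanical enough to write down. A rough sketch, with hypothetical signal labels and shorthand verdicts; real diagnosis needs cohort-level data, not two strings:

```python
def diagnose(signups: str, retention: str) -> str:
    """Translate two coarse signals into a likely force imbalance.
    Purely illustrative thresholds and wording."""
    if signups == "slow" and retention == "high":
        return "Push/pull strong; habit or anxiety blocks switching -> onboarding and trust work"
    if signups == "fast" and retention == "poor":
        return "Pull strong; job intensity overestimated or experience fit shallow"
    if signups == "fast" and retention == "high":
        return "Forces aligned; protect experience fit while scaling"
    return "Weak push and pull; revisit the job and segment before polishing UX"

print(diagnose(signups="slow", retention="high"))
```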

PMF metrics are necessary but incomplete

None of this means you ignore the usual markers. Retention, engagement, and revenue growth are still clean external signals.

But those metrics are lagging — they tell you what happened, not what will happen if the job, the segment, or the alternatives change. And they're aggregated — they blur together multiple jobs and segments into one verdict.

Job-market fit gives you a way to interpret those numbers. "We're seeing great retention in this cohort because they're all doing job X in context Y, and our experience fits that extremely well." Or: "Growth is flattening because our fit is saturated in our current segment, and there are other jobs in the same 'market' we haven't earned the right to serve."

PMF is the scoreboard. Job-market fit is the game film you review to understand why the score looks the way it does.

How to apply this

You don't need a whole new canvas. You can start with a few questions in your next product review:

  • Name the job you think you have PMF for. Not "design collaboration" or "AI for sales." Something specific like: "help a small team plan a week of work without surprises," or "turn a messy customer conversation into a clean follow-up email."
  • Ask how intense that job really is. What were people doing before? How often? What happened when they got it wrong?
  • Walk the job with your product open. Step through it and notice where the experience helps, where it forces workarounds, and where it introduces friction or anxiety that shouldn't be there.
  • List the alternatives people are actually firing. Not just official competitors. The spreadsheet, the shared doc, the internal tool, the "just ping me on chat" habit. Are you really giving them a better hire?
  • Write down the forces. What's pushing them away from alternatives? What's pulling them to you? What habits and anxieties do they have to overcome to switch and stay?

If you can answer those with clarity for at least one job and one segment, you're not chasing PMF in the abstract anymore. You're doing real work.
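
If you want a place to capture those answers, one record per job-and-segment pair works. A minimal sketch; every field name here is hypothetical, mirroring the five questions above:

```python
from dataclasses import dataclass, field

@dataclass
class JobMarketFitWorksheet:
    """One record per job + segment; hypothetical structure."""
    job: str                     # specific: a before, a struggle, an after
    segment: str                 # who, in what context
    intensity_notes: str         # pain, frequency, stakes of the old way
    experience_gaps: list[str] = field(default_factory=list)
    alternatives_fired: list[str] = field(default_factory=list)
    push: str = ""               # frustrations with current solutions
    pull: str = ""               # promise of your outcome and experience
    habit: str = ""              # routines they must break
    anxiety: str = ""            # fears about switching

sheet = JobMarketFitWorksheet(
    job="Turn a messy customer conversation into a clean follow-up email",
    segment="Account execs at small B2B startups",
    intensity_notes="Weekly; a sloppy follow-up visibly stalls deals",
)
```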
