Which Part of Your Fit Is About to Break?

Peloton had every PMF signal you could ask for. Retention was strong. Revenue was surging. Users were evangelical. And then, in less than a year, demand evaporated so fast the company paused production.

The fit was real. It just wasn't what it appeared to be. One dimension was carrying the others — and when that dimension changed, the whole thing collapsed.

This is the problem with treating fit as a single verdict. "We have PMF" or "we don't" is useful at the extremes, but most products live in the middle — retention that's decent but not great, growth that's real but slower than expected, users who value the product but don't talk about it unprompted.

A binary assessment has nothing to say about which part of the fit is strong, which part is weak, and which part is doing all the work.

A Jobs-Fit Alignment Model (JFAM) can help you assess fit across four independent dimensions: job intensity, solution completeness, experience quality, and context match. Each can be strong or weak on its own. And the fix for each is completely different.

A product with weak job intensity needs a different intervention than a product with weak match. Treating both as "pre-PMF" produces the same prescription for fundamentally different problems.
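To make the "four independent dimensions" idea concrete, here is a minimal sketch of a JFAM assessment as a data structure. The dimension names come from the model above; the 0–10 scoring scale and the example scores are illustrative assumptions, not anything the framework prescribes.

```python
from dataclasses import dataclass

@dataclass
class JfamAssessment:
    """Illustrative sketch: each JFAM dimension scored independently, 0-10.
    The scale and scores are assumptions for demonstration only."""
    job_intensity: float
    solution_completeness: float
    experience_quality: float
    context_match: float

    def weakest_dimension(self) -> str:
        # Fit is only as durable as its weakest dimension,
        # so the diagnostic starts by naming it.
        scores = {
            "job_intensity": self.job_intensity,
            "solution_completeness": self.solution_completeness,
            "experience_quality": self.experience_quality,
            "context_match": self.context_match,
        }
        return min(scores, key=scores.get)

# Pandemic-era Peloton: every dimension looks strong in isolation.
peloton_2020 = JfamAssessment(9, 8, 9, 10)
# After gyms reopened: context match collapses and drags intensity with it.
peloton_2022 = JfamAssessment(4, 8, 9, 2)
print(peloton_2022.weakest_dimension())  # → context_match
```

The point of modeling it this way is that a single aggregate "PMF score" would hide exactly what this structure exposes: which dimension is carrying the load.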

Job Intensity: Is the Struggle Real Enough?

Job intensity measures how acutely people feel the job the product serves. How much friction, pain, or cost are they absorbing in their current situation? Where does this job sit on the intensity spectrum?

High-intensity jobs produce users who tolerate imperfection. They'll put up with bugs, clunky onboarding, missing features — because the relief is worth it. Low-intensity jobs produce users who might try a product but won't fight for it. Won't pay for it. Won't recommend it. Won't come back.

Quibi is the clearest recent example of what weak job intensity looks like at scale.

Jeffrey Katzenberg and Meg Whitman raised $1.75 billion to build a short-form mobile streaming platform. Premium content, Hollywood talent, ten-minute episodes designed for phones. The product launched in April 2020. It shut down six months later with roughly 500,000 subscribers against a projection of 7.4 million.

The job — "watch premium content in ten-minute increments on my phone" — existed. But it was being served well enough by YouTube, TikTok, podcasts, and social video that the struggle with existing alternatives wasn't acute. Users weren't suffering without Quibi. They were mildly entertained without it.

That's the low end of the intensity spectrum. And at that end, even $1.75 billion can't buy you fit.

High job intensity doesn't guarantee success. But its absence almost always explains failure. When users describe their situation before finding your product as "fine, I guess" rather than "genuinely painful," you have an intensity problem — and no amount of polish or marketing will compensate.

During the pandemic, Peloton had unmistakable job intensity. Gyms were closed. People wanted to maintain fitness routines. The pain of having nowhere to work out was acute and daily. This dimension was strong.

Solution Completeness: Does It Actually Finish the Job?

Solution completeness measures how fully the product resolves the job — whether it gets the user all the way to the outcome they were trying to reach, or drops them partway through.

A product can serve a high-intensity job and still have low solution completeness.

Early GPS navigation apps served the high-intensity job of getting somewhere without getting lost. But they didn't account for real-time traffic. They didn't know about police activity, road closures, or the accident that just happened ten minutes ago. Drivers still listened to the radio. They still called ahead. The job was high-intensity and the solution was partially complete.

Which is why Waze, when it arrived with crowdsourced real-time intelligence from millions of drivers reporting speed, police, accidents, closures, and hazards, felt not like a marginal improvement but like the thing navigation had always been trying to become.

Incomplete solutions produce a specific, observable behavior: workarounds.

If users are supplementing your product with something else — a spreadsheet alongside the tool, a message to clarify what the software generated, a manual check on something the product should have handled — the solution is incomplete.

The workaround is the gap made visible. It's also one of the most reliable diagnostic signals in product work: find the workaround, find the incompleteness.

Peloton's solution completeness was genuinely strong. The bike plus the live and on-demand classes delivered a real workout — not a compromise version of going to the gym. Users didn't need to supplement it with something else.

Experience Quality: Can People Actually Use It?

Experience quality measures whether the product delivers its solution in a way that's navigable, trustworthy, and emotionally tolerable — whether the journey to the job's completion is as good as the destination.

Healthcare.gov is the textbook case of what happens when experience quality fails everything else.

It launched on October 1, 2013, with the genuine and high-intensity job of helping millions of uninsured Americans enroll in health coverage. The solution was technically complete — the coverage was real, the plans were available. Job intensity couldn't have been higher. People needed this.

But the experience was so broken that enrollment was functionally impossible for most users in the first weeks. Pages wouldn't load. Sessions timed out. Error messages were incomprehensible. The system crashed under demand it should have been built to handle.

Maximum job intensity. Genuine solution completeness. Total experience quality failure.

This is the dimension that's hardest to evaluate from the inside, because teams tend to assess their own experience more generously than users do. It's also the dimension that erodes most silently.

BlackBerry had strong experience quality for mobile email in 2007. Then the iPhone appeared and reset what "experience quality" meant for a mobile device. The job didn't change. The solution didn't disappear. But the experience standard shifted, and what had been good enough became visibly insufficient.

The bar for experience quality isn't static. It's set by whoever in the category is raising it.

Peloton's experience quality was excellent. The instructors, the community, the leaderboard, the production values — it didn't feel like a substitute for the gym. It felt like its own thing. This dimension was strong too.

Context Match: Does the Product Fit Where the Job Happens?

Context match measures how well the product fits the specific circumstances — physical, social, temporal, professional — in which the job actually arises.

This is the most underweighted dimension in standard PMF thinking. It's also the one that produced one of the most documented product failures of the past decade.

Google Glass, launched to the public in 2014, served real jobs. Quick information access, hands-free photography, navigation without looking down at a phone. The job intensity was real for specific use cases. The solution completeness was reasonable for a first-generation product.

The context was wrong.

Wearing Glass in public made other people uncomfortable. Bars banned it. People who wore it got called "Glassholes." Wired journalist Mat Honan, who wore Glass for most of 2013, wrote about how uneasy it made the people around him.

The product required wearing a camera on your face in social settings that had no norms for it, in an era when being recorded without consent was generating real public anxiety.

The context in which the job arose — daily public life — was the exact context the product couldn't survive.

Enterprise Glass, deployed later in manufacturing and healthcare settings where the context was controlled and jobs were specific, worked considerably better. Same product. Different context. Different result.

Context match is easy to miss because it's invisible when it's working. You only notice it when it fails — and by then, the failure looks like a product problem when it's actually a where-and-when problem.

Peloton's context match during the pandemic was nearly perfect. Everyone was home. The living room became the gym. The context in which the fitness job arose — stuck at home, no alternative — was the exact context the product was built for. This dimension looked as strong as the others. But it was the only one that was temporary.

When the Load-Bearing Dimension Breaks

The real diagnostic value of JFAM is that it reveals which specific dimension is fragile — and fragile fit is the most dangerous kind, because it looks exactly like real fit until the condition sustaining it changes.

You've seen Peloton score strong on all four dimensions. Job intensity, solution completeness, experience quality, context match — all genuinely good. Every PMF metric confirmed it. The stock hit an all-time high near $171 in January 2021, market cap close to $50 billion.

But context match was the load-bearing dimension. And it was temporary.

When gyms reopened, the context that made the fitness job so urgent — stuck at home, no alternative — dissolved. And when it dissolved, job intensity dropped with it. The "I have nowhere else to go" desperation was gone. The product hadn't changed. The context had. And context was doing more work than anyone realized.

By January 2022, the stock had crashed through its $29 IPO price. CNBC reported Peloton had temporarily paused production of bikes and treadmills because demand had evaporated. Revenue peaked at $4 billion in fiscal 2021 and has declined every year since.

None of the standard PMF metrics — retention, engagement, revenue growth — could have told you which dimension was carrying the others. They all looked strong. Until one of them disappeared.

Using JFAM as a Diagnostic

Each dimension, when weak, points toward a different kind of problem and a different kind of fix.

Weak job intensity is a positioning or market selection problem. The product may be good, but it's aimed at a job people can tolerate not solving. The question to ask: does a higher-intensity version of this job exist? Is there a segment where this problem is existential rather than merely annoying?

Weak solution completeness is a product scope problem. Users are getting partway to the outcome and falling back on workarounds. The question to ask: where does the product drop the user? What are they doing to compensate, and what would it take to make the workaround unnecessary?

Weak experience quality is an execution problem. The job is real, the solution is complete, but the path to getting there costs too much cognitive effort, emotional energy, or time. The question to ask: is the experience failing at entry — onboarding, first use — or across the full lifecycle?

Weak context match is a distribution or positioning problem. The product may work well in controlled conditions but fail in the actual circumstances where the job arises. The question to ask: is there a context where the product fits naturally, and is that large enough to build a business on?
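The four weak-dimension cases above amount to a lookup: each weak dimension maps to a problem class and a diagnostic question. A minimal sketch, with the mappings taken from this section and the function name my own:

```python
# Maps each weak JFAM dimension to the problem class and the
# diagnostic question this section associates with it.
DIAGNOSTIC = {
    "job_intensity": (
        "positioning / market selection",
        "Does a higher-intensity version of this job exist?",
    ),
    "solution_completeness": (
        "product scope",
        "Where does the product drop the user, and what workaround fills the gap?",
    ),
    "experience_quality": (
        "execution",
        "Is the experience failing at entry, or across the full lifecycle?",
    ),
    "context_match": (
        "distribution / positioning",
        "Is there a context where the product fits naturally, and is it big enough?",
    ),
}

def diagnose(weak_dimension: str) -> str:
    """Turn a weak dimension into the problem class plus the question to ask."""
    problem, question = DIAGNOSTIC[weak_dimension]
    return f"{problem} problem — ask: {question}"

print(diagnose("context_match"))
```

The value of the table isn't the code; it's the discipline of refusing to answer "pre-PMF" until you've named which row you're in.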

The point of evaluating these aspects of fit separately isn't to add four more boxes to your product review. It's to stop treating PMF as a single verdict and start asking which dimension is strong, which is weak, and which one is carrying the others.

Fit isn't actually a feeling you arrive at. It's a set of specific alignments that you can measure, monitor, and lose — one dimension at a time.
