Stop Asking "Are You Satisfied?" Start Asking "Are You Making Progress?"
Every quarter, the customer success team runs the surveys. The Customer Satisfaction Score (CSAT) comes back strong at 4.2 out of 5. The Net Promoter Score (NPS) is healthy. The promoter count is climbing. The slides get made. The executive team nods. "Customers love us."
Then renewal season arrives and three accounts that scored high on NPS don't renew. The reason given on the exit survey: "Going in a different direction." Which tells you precisely nothing.
So what happened? The surveys weren't wrong. They measured what they asked. CSAT captured how individual interactions felt, and those interactions were fine. NPS captured whether they'd recommend the product, and they would have, at the time they were asked.
But neither survey asked the question that would have predicted the non-renewal: is this product making your job easier than it was before you started using it?
That's a different question. CSAT measures the touchpoint. NPS gets closer to the outcome, since someone who'd recommend a product is probably getting real value from it.
But NPS gives you a score without telling you why. A 9 could mean "this product saves me two hours a week" or "the interface is pretty." You know they'd recommend it. You don't know what the product is delivering, which dimension it's helping with, or where it's falling short.
A progress survey helps fill that gap.
The Scope Gap
Calm is a good example. After any given meditation session, a user might rate the experience highly — the interface is lovely, the narration was soothing, the ten minutes felt worthwhile. That's real. The session was good. If you asked a CSAT question right then, you'd get a high score.
But the question that predicts whether they renew is framed differently: "Am I meditating more consistently than I was before I had this app?" That's a progress question — and it's the one CSAT wasn't designed to ask.
A progress survey is built on a before/after comparison. It doesn't ask the customer to rate you. It asks them to judge the change in their own ability to get the job done.
The core question: Compared to before you started using this product, are you getting the job done faster, more easily, or more confidently?
That shift does something neither CSAT nor NPS can do on their own — it connects the measurement to the customer's world. Their work. Their pressure. Their definition of "done." Not your features, not your interface, not your support team. The outcome they hired the product to help with. NPS tells you they'd recommend it. A progress survey tells you why, and where the product is falling short.
And progress isn't just one thing. For some jobs, progress is speed — getting it done faster. For others, it's confidence — feeling more certain the output is right. For others, it's quality — fewer errors, less rework, fewer "wait, are we sure?" moments.
A progress survey evaluates experiences through a set of questions that connect to these dimensions directly: does the experience match how the job naturally unfolds (speed)? Does it reduce anxiety at the moments anxiety peaks (confidence)? Does it show the user clearly enough what's happening that they can trust the output (quality)?
Pick the one or two dimensions that represent your product's promise — the reason someone would switch to you — and measure those. Not a general satisfaction score. The specific kind of progress your product was hired to deliver.
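If it helps to see that choice made explicit, here is a minimal sketch in Python. The dimension names, question wording, and the two-dimension pick are all illustrative assumptions, not a standard survey instrument:

```python
# Hypothetical dimension-to-question mapping; wording and names
# are illustrative, not a standardized instrument.
DIMENSION_QUESTIONS = {
    "speed": "Compared to before, are you getting the job done faster?",
    "confidence": "Compared to before, are you more certain the output is right?",
    "quality": "Compared to before, does your work need less rework?",
}

# Pick the one or two dimensions behind your product's promise
# (the reason someone would switch to you) and measure only those.
PRODUCT_PROMISE = ("speed", "confidence")

survey_questions = [DIMENSION_QUESTIONS[d] for d in PRODUCT_PROMISE]
```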
The Scariest Answer Is "No Change"
Progress surveys produce a result that looks different from CSAT's and requires different interpretation.
A high score means: we're helping them get the job done better than they could before. A low score means: the product isn't making the job easier, faster, or more reliable.
And "no change" is not neutral.
"No change" means the product hasn't made a measurable difference to the outcome. The user might not be dissatisfied — nothing is broken, the sessions are fine. But "fine" isn't what earns renewal. Making the job easier is. And "no change" means the product hasn't done that yet. That's a subscription one budget review away from being questioned.
This is where progress surveys become an early-warning system. Touchpoint-level CSAT might not move — each individual session still works, nothing visibly breaks. But the progress score shows the trend: the product is making less of a difference than it used to.
Why? There could be many reasons. The job's shifted. The team's needs evolved. Competitors caught up. The product still works session by session. It just isn't moving the needle on the outcome anymore.
Touchpoint-level CSAT isn't designed to catch that — it's measuring a different thing at a different scope. Progress surveys catch it because they're asking the outcome question directly: is the product making a difference?
And the open-ended follow-up — "what's the main reason for your answer?" — is where the real signal lives. Customers will tell you what progress means to them in their job. They'll tell you what's blocking them. They'll tell you what they're comparing you against. That's job friction — and job friction tells you what to fix next.
When to Ask and What to Ask
The timing matters more than the question count. Ask after the moment the user attempts to get the job done — after completion, handoff, publish, resolve, send, close. Not after login. After the moment where an outcome of value should have been produced, because that's when the user has fresh evidence about whether the product helped.
The survey itself should be short enough that it doesn't feel like work:
One rating: "Compared to before, are you able to [do the job] faster / about the same / slower?" Or on a five-point scale from "much less progress" to "much more progress."
One diagnostic question: "What's the main reason for your answer?"
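As a sketch of how that trigger plus those two questions might fit together in product code (the event names, scale labels, and wording below are assumptions for illustration, not a prescribed implementation):

```python
# Fire the progress survey only after a job-completion event,
# never after generic activity like login. Event names and
# survey wording are illustrative assumptions.
JOB_COMPLETION_EVENTS = {"report_published", "ticket_resolved", "handoff_sent"}

PROGRESS_SCALE = [
    "much less progress", "less progress", "about the same",
    "more progress", "much more progress",
]

def maybe_prompt_survey(event_name: str) -> list[str] | None:
    """Return the two survey questions if this event closes out a job."""
    if event_name not in JOB_COMPLETION_EVENTS:
        return None  # logins and page views carry no fresh evidence
    return [
        "Compared to before, how much progress are you making on this job? "
        + " / ".join(PROGRESS_SCALE),
        "What's the main reason for your answer?",
    ]
```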
And segment who you sample — because progress means different things at different stages. New users tell you whether the product helps them start getting the job done. Established users tell you whether it's still making a difference. Power users tell you where progress breaks down at scale.
Mix those together and your averages will look calm while your customers split into "this is essential" and "this didn't change anything."
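To see why in miniature, here is a toy calculation with made-up scores on a -2 to +2 progress scale; the pooled mean looks calm while the segments tell opposite stories:

```python
# Made-up scores on a -2..+2 scale ("much less" to "much more"
# progress) showing how pooling segments hides a split.
from statistics import mean

scores = {
    "new":         [2, 2, 1, 2, 1],     # "this is essential"
    "established": [0, -1, 0, -2, -1],  # "this didn't change anything"
}

pooled = mean(s for seg in scores.values() for s in seg)
print(f"pooled mean: {pooled:+.2f}")              # +0.40, looks fine
for segment, values in scores.items():
    print(f"{segment:>12}: {mean(values):+.2f}")  # +1.60 vs -0.80
```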
CSAT is useful for what it measures — sentiment about the interaction. NPS gets closer to the outcome — someone who'd recommend a product is probably getting value. But neither answers the questions that matter: is the product making the job easier, in which dimension, and where is it falling short?
A progress survey does. It's the instrument that connects measurement to the job — not "how do you feel about us?" but "are you getting this done better than before?" That's the question that tells you what to fix, what to double down on, and whether the product is earning its place in their workflow.