Activity Is Not Necessarily Progress

Two users open your product on Tuesday morning.

User A spends fourteen minutes. She clicks through three dashboards, opens and closes a filter panel twice, visits the help center, returns to the main view, and exports a CSV. Your analytics record 47 interactions. High engagement.

User B spends six minutes. She opens one view, checks three items, updates a status, and closes the tab. Your analytics record 11 interactions. Low engagement.

Your dashboard says User A is more engaged. Your dashboard is wrong.

User A is lost. She's navigating because she can't find what she needs. She's filtering because the default view doesn't show her what matters. She's visiting help because the interface isn't communicating. She exported the CSV because she doesn't trust the product enough to use it as the source of truth. Fourteen minutes of motion. Zero progress.

User B knows exactly what she came for. She checked what needed checking, updated what needed updating, and left. Six minutes. Job done.

Activity metrics — DAUs, time-in-app, pageviews, feature adoption, session duration — can't tell these two users apart. They measure motion. They don't measure whether the product helped someone make progress on the thing they hired it to do.

And if you can't measure progress, you'll optimize for motion with total confidence.

The Dangerous Versions of "Engagement"

Activity can rise for two completely opposite reasons, and the dashboard celebrates both.

The product is helping users move through the job smoothly — usage increases because the product is genuinely central to the work. That's real.

Or the product is making users work harder to get the same outcome — usage increases because the product is generating extra labor. That's a warning sign dressed up as a success metric.

Time on page can mean deep value or deep confusion. Feature adoption can mean the feature is useful or that the user is desperately trying everything. High session counts can mean the product is essential or that the user keeps coming back because it didn't work the first time.

Salesforce is a good example. Sales reps who spend the most time in the CRM aren't necessarily the most productive. Many of them are navigating a complex interface to find information that should be obvious, entering data they resent entering, and maintaining a spreadsheet alongside the system because they don't trust what they're seeing.

That's high engagement. It's also high friction, and the activity metrics can't distinguish between the two.

What Progress Actually Looks Like in Analytics

If activity metrics measure motion, what measures progress?

Progress means the user got closer to the outcome they hired the product for. Not "used a feature" but "made progress on the job." The report got produced. The decision got made. The handoff happened without a follow-up clarification. The user left the session with more confidence than they entered with.

The challenge is that most analytics are instrumented around the product's structure — which screens were visited, which buttons were clicked, which features were used. None of that tells you whether the user accomplished what they came to accomplish.

A job-completion event is different from a feature-interaction event. It's a behavioral signal that the user reached a meaningful outcome, not just touched the interface.
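
To make the distinction concrete, here is a minimal sketch of the two event shapes. The type names and fields are illustrative assumptions, not any particular analytics library's schema:

```typescript
// Illustrative event shapes -- names and fields are assumptions,
// not any particular analytics library's schema.

// A feature-interaction event records that the interface was touched.
interface FeatureInteractionEvent {
  kind: "feature_interaction";
  userId: string;
  feature: string; // e.g. "dashboard", "filter_panel"
  action: string; // e.g. "viewed", "clicked", "exported"
  timestamp: number;
}

// A job-completion event records that the user reached a meaningful
// outcome -- the thing they hired the product to do.
interface JobCompletionEvent {
  kind: "job_completion";
  userId: string;
  job: string; // e.g. "show_stakeholder_whats_happening"
  outcome: string; // e.g. "report_shared", "cycle_completed"
  timestamp: number;
}

type AnalyticsEvent = FeatureInteractionEvent | JobCompletionEvent;
```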

"Visited dashboard" is a feature interaction. "Shared a report with a stakeholder" is closer to job completion — because wasn't "look at the dashboard," it was "show someone else what's happening so they can make a decision."

"Created a project" is a feature interaction. "Moved work through a complete cycle from assigned to done with the required artifact attached" is closer to job completion, because wasn't "set up a project," it was "get this work across the finish line."

The distinction sounds obvious stated plainly. But look at your own analytics dashboard and ask how many of your tracked events are feature interactions vs. job-completion signals. For most products, it's almost entirely the first category.

This Is a JTBDUX Problem

Through the JTBDUX lens, activity metrics are measuring the experience without measuring whether the experience is serving the job.

The heuristics ask: does the experience speak the job's language? Does it show progress toward the outcome? Does it reduce anxiety at the moment anxiety peaks? Does it match the user's mental model?

Activity metrics can't answer any of those questions. A user who clicks through five screens might be experiencing an interface that doesn't speak the job's language — they're translating between what the product shows them and what they actually need.

A user with high session frequency might be returning repeatedly because the product doesn't show progress — they can't tell if things are on track without checking manually.

A user who visits the help center mid-flow might be experiencing anxiety the product isn't reducing. All of that registers as engagement. None of it is progress.

Job-completion metrics flip the question. Instead of "are they active?" the question becomes "did they get the job done — or a meaningful piece of it — in a way that makes them likely to come back?"

That's a harder question to instrument. But it's the only one that tells you whether the product is actually working.
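
It doesn't have to be exotic, though. Assuming sessions have already been stitched together upstream, the flipped metric can start as simply as this sketch:

```typescript
// A sketch of the flipped metric: of the sessions that happened,
// how many contained at least one job-completion event?
interface Session {
  userId: string;
  events: AnalyticsEvent[];
}

function jobCompletionRate(sessions: Session[]): number {
  if (sessions.length === 0) return 0;
  const completed = sessions.filter((s) =>
    s.events.some((e) => e.kind === "job_completion")
  ).length;
  return completed / sessions.length; // "did they get the job done?"
}
```

Compare that number over time to your DAU curve; when activity rises and completion doesn't, you're measuring motion.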

The Shift

When you measure progress instead of activity, three things shift.

You stop celebrating the wrong numbers. A spike in time-in-app might be a problem, not a win. Low session duration might mean the product is efficient — users come in, get the job done, and leave. Teams that "fix" that efficiency by adding engagement hooks make the experience worse for the users who are actually succeeding.
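
You can even triage sessions with a rough heuristic in the spirit of User A and User B, reusing the Session shape from the sketch above. The thresholds here are invented for illustration and would need calibrating against your own data:

```typescript
// Rough session triage echoing User A and User B. The thresholds
// are made up for illustration -- calibrate against your own data.
type SessionRead = "efficient" | "possibly_lost" | "unknown";

function readSession(s: Session, durationMinutes: number): SessionRead {
  const completedJob = s.events.some((e) => e.kind === "job_completion");
  if (completedJob) return "efficient"; // User B: in, done, out
  if (durationMinutes > 10 && s.events.length > 30) {
    return "possibly_lost"; // User A: lots of motion, no progress
  }
  return "unknown";
}
```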

You can diagnose where progress breaks. When you instrument for job completion, you can see where users get stuck. Not stuck in the interface (that's usability), but stuck on the job. They reach a point where the product stops helping them make progress and they either give up, build a workaround, or churn. Activity metrics show a drop-off. Job-completion metrics show where and why.
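
One way to get to the where and why, sketched with placeholder stage names: define the ordered outcomes that mark progress through the job, then count how many users reach each one before stalling:

```typescript
// Placeholder stage names -- substitute whatever outcomes mark
// progress through the job in your product.
const JOB_STAGES = [
  "work_assigned",
  "work_started",
  "artifact_attached",
  "work_done",
];

function progressFunnel(sessions: Session[]): Map<string, number> {
  const reached = new Map(
    JOB_STAGES.map((stage): [string, number] => [stage, 0])
  );
  // Collect every job outcome each user has hit.
  const outcomesByUser = new Map<string, Set<string>>();
  for (const session of sessions) {
    const outcomes = outcomesByUser.get(session.userId) ?? new Set<string>();
    for (const e of session.events) {
      if (e.kind === "job_completion") outcomes.add(e.outcome);
    }
    outcomesByUser.set(session.userId, outcomes);
  }
  // Walk the stages in order; the first missing stage is where that
  // user's progress broke.
  for (const outcomes of outcomesByUser.values()) {
    for (const stage of JOB_STAGES) {
      if (!outcomes.has(stage)) break;
      reached.set(stage, (reached.get(stage) ?? 0) + 1);
    }
  }
  return reached; // compare adjacent stages to find the drop-off
}
```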

You can tell the difference between fit and habit. Strong retention driven by job completion means users keep coming back because the product helps them do something they need to do. Strong retention driven by activity alone might be habit, inertia, or sunk cost — the user is active but not making progress, and they'll leave the moment something better shows up.
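
A rough way to surface that at-risk cohort, again reusing the session shape from the earlier sketch: split this period's active users by whether any of their sessions contained a job-completion event:

```typescript
// Fit vs. habit: among users active this period, who actually
// completed a job? Activity without completion is the at-risk cohort.
function splitActiveUsers(sessions: Session[]): {
  completing: Set<string>;
  activeOnly: Set<string>;
} {
  const completing = new Set<string>();
  const active = new Set<string>();
  for (const s of sessions) {
    active.add(s.userId);
    if (s.events.some((e) => e.kind === "job_completion")) {
      completing.add(s.userId);
    }
  }
  const activeOnly = new Set([...active].filter((u) => !completing.has(u)));
  return { completing, activeOnly };
}
```

Users in activeOnly look healthy on an activity dashboard; they're the ones to worry about.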

Every metric your product tracks should be answering this question: "Is the product helping people make progress on the thing they hired it to do?"

Activity tells you people are moving. Progress tells you they're arriving. If your dashboard can't show you the difference, it's not telling you what you really need to know.
