"Is This Usable?" Isn’t The Best Design Review Question

Most design reviews go like this:

Someone presents a screen. The room reacts. One person doesn't like the spacing. Another suggests a different color for the CTA. Someone asks about the edge case where a user has no data. Someone else says "it feels cluttered." The designer defends the hierarchy. A PM asks if it matches the competitor's version.

Forty-five minutes later, the team has opinions about the interface. Nobody has asked whether the interface helps the user make progress on the thing they hired the product to do.

That's a taste debate, not a design review.

"Is this usable?" isn't a bad question. But it's too shallow on its own. A design can be perfectly usable — clear labels, logical flow, consistent patterns — and still fail . Because usability tells you whether the user can operate the interface. It doesn't tell you whether the interface is oriented toward the right moment, the right stakes, and the right kind of .

The question that changes everything

The fastest way to make a design review useful is to change the first question.

Instead of starting with "is this usable?" or "does this look right?" — start with: what job is active here? What is the user trying to make progress on in this specific moment, under what constraints, with what at stake?

That one question reframes the entire conversation. Suddenly the review isn't about whether the button is in the right place. It's about whether the screen understands why the user is here.

And from there, a different set of questions becomes natural:

  • What anxiety is peaking at this moment — and does the design reduce it or ignore it?
  • Does this make progress visible, or does the user have to guess whether the thing worked?
  • Does this reduce effort, or does it just reorganize effort into different steps?
  • Does this help the user trust what's happening enough to act on it?
  • Could the user explain this output to someone else — a manager, a client, a teammate — and defend it?

Those questions aren't about the interface. They're about whether the experience supports progress. And they produce a completely different kind of feedback than "I think the text is too small."

Usability-only reviews miss the point

Usability is necessary. Nobody's arguing against clear labels, consistent patterns, and error prevention. Those are real requirements and they matter.

But usability is the floor, not the ceiling. A flow can be easy to navigate and still fail because it doesn't match how the user thinks about the work. A form can be clear and still fail because it asks for information the user doesn't have at this point in the job. A dashboard can be well-organized and still fail because it shows the product's model instead of the progress the user is trying to track.

A jobs-based review — which evaluates user experiences through a set of questions tied to the job the user is trying to do — is what closes this gap. It asks questions that usability heuristics alone don't cover: does the experience speak the user's language? Does it match the user's mental model of how the work should flow? Does it reduce anxiety at the moments when anxiety peaks? Does it show progress in terms the user would recognize?

Those aren't aesthetic judgments. They're fit judgments. And they're the difference between a design review that polishes the surface and one that catches the deeper problem: the screen works fine, but it doesn't understand the moment.

No, you don't need to overhaul your review process.

You just need to add one step at the beginning.

Before anyone reacts to the design, the presenter names the job. Not the feature being designed — the situation the user is in. Something like: "The user just got a notification that a deadline slipped. They need to figure out what's blocked and who owns it before a meeting in thirty minutes. They're stressed and they need to look prepared."

Now the room has a lens. Every piece of feedback gets evaluated against that situation. "The spacing feels off" becomes "does the spacing help the user scan for the blocked item fast enough?" "It feels cluttered" becomes "is there information on this screen that doesn't matter in this moment, and is it competing with the information that does?"

The feedback gets sharper because the lens is sharper. And the designer gets more useful input — not "I don't like this" but "in this moment, the user needs X and I don't think they can find it."

This is also where you catch problems that usability reviews miss entirely. A screen might be beautifully organized, but if the user's mental model says "show me what's wrong first" and the screen leads with a summary of what's fine, the hierarchy is backwards — not because it's bad design, but because it doesn't match how the user thinks about this specific job at this specific moment.

The six questions that replace "does this look right?"

If you want a concrete set of questions to bring into your next design review, these six will do more work than an hour of interface feedback:

What job is active here? Not "what feature is this" — what is the user in the middle of doing, and what triggered this moment? If the presenter can't answer this, the design isn't grounded yet.

What anxiety is peaking? Every job has moments where the user's confidence dips — committing to an action, sending something to someone else, making a decision with incomplete information. The design should be reducing that anxiety, not ignoring it.

Does this make progress visible? Can the user tell that the thing they just did moved them forward? Or does the screen look the same before and after the action? Progress that's invisible to the user might as well not have happened.

Does this reduce effort or just move it around? Some redesigns take effort out of one step and add it to another. The total burden doesn't change — it just shifts. The review should catch when that's happening.

Does this help the user trust the outcome? Trust matters most in professional contexts where the user has to act on what the product shows them — send the report, approve the request, make the call. If the user can't tell where the numbers came from or whether the filters caught everything, they'll verify the output somewhere else. That's the product failing while passing the usability test.

Does the language match how the user talks about this work? If the screen says "configure workflow parameters" and the user says "set up my process," there's a translation tax on every interaction. The design review should catch when the product is speaking its own language instead of the user's.

Build organizational muscle, fast

Most framework adoption efforts start with training. Workshops. Slide decks. Certification programs. Those create awareness, but they don't change behavior — because the daily rituals stay the same.

Design reviews happen every week. They're already on the calendar. They already involve designers, PMs, and engineers in the same room looking at the same work. If you change the questions that get asked in that room, you change how the team thinks about design — not in a workshop that happens once, but in a ritual that happens constantly.

That's what makes design reviews the fastest path to building that muscle. You don't need a giant organizational transformation. You don't need everyone to read a book. You need six questions, asked consistently, in a meeting that's already happening.

The first few times will feel awkward. Someone will present a screen and the room won't know how to answer "what job is active here?" That's fine. The awkwardness is the learning.

After a few weeks, the team starts anticipating the questions. Designers show up with the job already framed. PMs push back when work enters the sprint without a clear job step. Engineers ask "what anxiety are we reducing?" before writing code.

That's how jobs thinking stops being a framework the team learned and starts being a lens the team uses.