Designing the Wait: Using Friction to Build Trust in AI
You ask an AI a hard question. It answers instantly. And instead of feeling impressed, you feel… suspicious.
Because in the real world, hard problems don’t get solved in half a second. A lawyer pauses. A senior engineer squints. A doctor asks a follow‑up. Even experts need a moment to think. So when an AI spits out a confident paragraph immediately, it can trigger the exact opposite of trust: hallucination vibes.
In the convergence of Jobs-to-be-Done and UX, this is a classic mismatch between the user’s emotional job (confidence, safety, “don’t make me look dumb”) and what the interface is signaling. The user isn’t just hiring the tool to produce an answer. They’re hiring it to help them act without regret.
Why AI needs a little friction to earn trust
Speed is usually the god of UX. We spend time and money shaving milliseconds off load times. We optimize databases. We pre-fetch content. The golden rule is simple: faster is better.
But in the age of AI, that rule breaks in a very specific way. When an AI answers a complex, high-stakes question instantly, it doesn’t feel like magic. It feels like a lie.
If you ask a human expert to analyze a legal contract and they hand you a verdict in half a second, you don’t think they’re a genius. You think they didn’t read it. The same psychology applies to software.
It comes down to trust. And paradoxically, the fastest interface often does the worst job of protecting it.
The trust problem AI creates
Classic software fails in familiar ways: bugs, missing features, confusing navigation. AI tools fail in a newer, more corrosive way: they can be wrong with confidence. That means the user is constantly running an internal risk calculation:
- “If I trust this and it’s wrong, what happens to me?”
- “Do I get embarrassed?”
- “Do I lose money?”
- “Do I ship broken code?”
- “Do I tell my boss something false?”
That’s why “designing the wait” matters. It’s the interface saying: “You’re not crazy to be cautious. This is worth taking seriously.”
Designing the wait = designing trust
In Jobs-to-be-Done terms, the AI needs to feel like a partner, not a magic trick. A trustworthy partner does three things consistently:
- Makes their state visible
- Sets expectations about what they’re doing
- Gives you control at the moment you’re most exposed
A blank spinner does none of that. It just says “something is happening.” And in an AI product, “something” is exactly what users don’t trust. So the question becomes:
What should happen in the 3–5 seconds between the prompt and the output so the user feels safer, not just faster?
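One way to make that concrete: model the wait as a sequence of named, user-visible states instead of a single loading flag. Here’s a minimal TypeScript sketch; the state names and shapes are illustrative, not any particular product’s API:

```ts
// A spinner collapses everything into one opaque boolean.
// A trustworthy wait is a sequence of named, user-visible states.
type WaitState =
  | { kind: "interpreting"; summary: string }   // what the AI thinks you asked for
  | { kind: "searching"; sourcesFound: number } // receipts accumulate here
  | { kind: "checking"; claim: string }         // what is being verified right now
  | { kind: "ready"; output: string };

// The UI renders each state as a human-readable line, so the user
// watches work happen instead of staring at "something is happening".
function describe(state: WaitState): string {
  switch (state.kind) {
    case "interpreting": return `Understanding your request: ${state.summary}`;
    case "searching":    return `Looking at sources… (${state.sourcesFound} found)`;
    case "checking":     return `Double-checking: ${state.claim}`;
    case "ready":        return state.output;
  }
}
```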
Replace spinners with “receipts.”
A spinner implies network latency. It doesn’t imply thinking. It doesn’t imply checking. It doesn’t imply grounding. What builds trust is a receipt: evidence that the system did work that maps to the user’s request.
Take Perplexity. When you ask a question, it doesn’t just answer. It shows you what it looked at. The sources are the receipt. Even when the “wait” is short, that moment of visible work changes the emotional experience from:
“You made that up.” to “Okay, you actually looked somewhere.”
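In code, a receipt is just an event the backend emits as it works, which the UI appends live. A sketch, assuming hypothetical `findSources` and `generateAnswer` calls standing in for whatever retrieval and generation you actually use:

```ts
interface Source { title: string; url: string }
// Hypothetical stand-ins for your real retrieval and generation calls.
declare function findSources(question: string): Promise<Source[]>;
declare function generateAnswer(question: string, sources: Source[]): Promise<string>;

// A "receipt" is evidence of work that maps to the user's request.
interface Receipt {
  action: "searched" | "read" | "cited";
  detail: string; // e.g. a query, a document title, a URL
  at: Date;
}

// As the backend works, it emits receipts; the UI appends them live,
// so by the time the answer arrives the user has already seen the work.
async function answerWithReceipts(
  question: string,
  emit: (r: Receipt) => void,
): Promise<string> {
  emit({ action: "searched", detail: question, at: new Date() });
  const sources = await findSources(question);
  for (const s of sources) {
    emit({ action: "read", detail: s.title, at: new Date() });
  }
  const answer = await generateAnswer(question, sources);
  for (const s of sources) {
    emit({ action: "cited", detail: s.url, at: new Date() });
  }
  return answer;
}
```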
Calibrate speed to risk (and don’t pretend all prompts are equal).
Not all AI interactions carry the same stakes.
- “Write a funny caption for this photo” is low-risk. Instant is fine.
- “Summarize this contract and tell me what to sign” is high-risk.
- “Generate code that touches production” is high-risk.
- “Draft an email to an angry customer” is high-risk.
In high-risk moments, instant output can feel reckless. The user doesn’t want speed. They want confidence. So the “wait” becomes a deliberate choice: a brief pause paired with a visible state that communicates checking, reviewing, or verifying.
Not because the model needs time, but because the user needs assurance that the tool respects the gravity of the moment.
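A sketch of what calibrating speed to risk could look like, assuming a hypothetical `classifyRisk` step (rules, metadata, or a lightweight model) that your product would supply:

```ts
type Risk = "low" | "high";

// Hypothetical classifier: in practice this might be a rules pass,
// request metadata (e.g. "touches production"), or a small model.
declare function classifyRisk(prompt: string): Risk;

interface WaitPolicy {
  minVisibleWaitMs: number;    // floor on how fast the answer *appears*
  showVerificationStep: boolean;
  requireConfirmation: boolean;
}

// Speed calibrated to stakes: instant for captions, a deliberate,
// visible "checking" beat for contracts and production code.
function waitPolicyFor(prompt: string): WaitPolicy {
  return classifyRisk(prompt) === "high"
    ? { minVisibleWaitMs: 1500, showVerificationStep: true, requireConfirmation: true }
    : { minVisibleWaitMs: 0, showVerificationStep: false, requireConfirmation: false };
}
```

The point of `minVisibleWaitMs` isn’t artificial delay for its own sake. It’s a floor that guarantees the verification state is actually seen.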
Make the wait a control surface, not a delay.
The most trust-building “wait” isn’t passive. It gives the user a way to steer, constrain, or correct before the output hardens into something dangerous.
For example, instead of “Thinking…” you can use that moment to surface:
- what the AI believes you’re asking for (“Drafting a reply that apologizes and offers a refund…”)
- what assumptions it’s making (“Assuming the customer is on the Pro plan…”)
- the tone it’s using (“Firm / Neutral / Warm”)
- the scope of action (“Draft only, won’t send”)
That turns the wait from a void into a collaboration moment, and it directly supports the emotional job: “Keep me in control so I don’t regret trusting you.”
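Sketched as an interface, the wait-as-control-surface pattern might look like this; `inferPreflight` and `generate` are hypothetical stand-ins for the model’s self-report and the actual generation call:

```ts
// What the system believes, shown *during* the wait and editable
// before the output hardens into something dangerous.
interface Preflight {
  interpretation: string; // "Drafting a reply that apologizes and offers a refund"
  assumptions: string[];  // ["Customer is on the Pro plan"]
  tone: "firm" | "neutral" | "warm";
  scope: "draft-only" | "send";
}

// Hypothetical stand-ins for the model's self-report and generation.
declare function inferPreflight(prompt: string): Promise<Preflight>;
declare function generate(prompt: string, approved: Preflight): Promise<string>;

// The wait becomes a control surface: the user can veto or adjust
// any field before the output exists.
async function generateWithPreflight(
  prompt: string,
  review: (p: Preflight) => Promise<Preflight>, // UI hands the panel to the user
): Promise<string> {
  const initial = await inferPreflight(prompt);
  const approved = await review(initial); // user steers while generation waits
  return generate(prompt, approved);
}
```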
The goal isn’t slower AI. It’s safer AI.
If you’re building an AI tool, stop asking only: “How do we make this faster?”
Start asking: “What does the user need to see, feel, and control before they’re willing to rely on this?”
That’s the difference between an AI product people demo, and an AI product people adopt and trust with real work.