Is Your AI Chat Bot Doing a Real Job, Or Just Wasting Time?
Everyone is shipping "chat with our AI." It's the default AI interface now. A text box, a blinking cursor, and the implicit instruction: "ask me anything."
The problem is that "ask me anything" isn't a Job. It's an invitation to flounder.
Users don't show up to AI chat with perfectly formed prompts. They show up with fuzzy goals, half-formed questions, and very specific constraints that the chat UI never surfaces. They know they want help. They don't necessarily know how to ask for it in a way the system can act on.
So they type something vague. The AI responds with something generic. The user tries again. The AI responds with something slightly less generic. Three turns in, the user is doing the hard work of translating their job into prompts — which is exactly the work the AI was supposed to eliminate.
And every failed turn erodes trust. Not slowly, the way a buggy traditional product might. Immediately. The user came ready to delegate, and the product couldn't figure out what they needed. That's a trust rupture in the first 30 seconds.
Conversational UI only becomes trustworthy, AI-native UX when it's designed around a concrete Job, not the (stale) novelty of talking to a bot.
Your chat has a role. Pick one.
Most AI chat interfaces default to one mode: the terminal. The user issues a command, the system responds. This works when the user already knows exactly what they want and how to ask for it. It fails — badly — when they don't.
There are actually three roles a conversational AI can play, and the Job determines which one is right.
Terminal. The user knows the Job, knows the constraints, knows how to express it. "Translate this paragraph into Spanish." "What's the P/E ratio of NVIDIA?" "Convert this CSV to JSON." The AI executes. The user evaluates. Clean, fast, no ambiguity. Terminal mode is the right choice when the Job is well-defined and the user is an expert in expressing it.
Guide. The user knows the Job but doesn't know how to frame it for the AI. "I need to figure out why our conversion rate dropped" is a job, but it's not a prompt. A guide asks clarifying questions — what time period? which funnel? compared to what baseline? — and narrows the scope before generating output. Guide mode is the right choice when the Job is real but the user needs help articulating the constraints.
Coach. The user isn't sure what the Job is yet. They're thinking out loud. "I have this messy data set and I don't know what to do with it." A coach doesn't just ask clarifying questions — it explains tradeoffs, suggests approaches, and helps the user figure out what they're actually trying to accomplish. Coach mode is the right choice when the user's intent is fuzzy and the conversation itself is how the Job gets defined.
Most AI products default to terminal mode regardless of the situation. That's why users who don't already know what to ask get nothing useful — and conclude that the AI "doesn't work."
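One way to make the taxonomy concrete: the role is a function of two questions about the user, not a product-wide setting. A minimal sketch in Python (the names are illustrative, not from any framework):

```python
from enum import Enum

class Role(Enum):
    TERMINAL = "terminal"  # user knows the Job and how to express it
    GUIDE = "guide"        # user knows the Job but not how to frame it
    COACH = "coach"        # user is still figuring out what the Job is

def pick_role(job_is_clear: bool, user_can_frame_it: bool) -> Role:
    """Map the two questions from the taxonomy onto a role."""
    if not job_is_clear:
        return Role.COACH
    if not user_can_frame_it:
        return Role.GUIDE
    return Role.TERMINAL
```

The point is not this particular logic; it is that role selection is a per-conversation decision. Defaulting to terminal mode amounts to assuming both answers are yes without ever asking.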
Design the first turn around the Job, not the prompt
The opening of an AI conversation is the highest-leverage design moment. It's where you either anchor the conversation in a real job or let it drift into a blank-page problem.
A blank text field with "How can I help you?" is the conversational equivalent of dropping someone into a feature-rich dashboard with no guidance. The user's brain immediately asks "what do I even do here?" — and if the answer isn't obvious, they either type something random or leave.
The fix is to surface jobs, not just a cursor. Preset chips or quick-pick options that represent real jobs: "Summarize a document." "Draft an email." "Brainstorm ideas for X." "Analyze this data set." Each one sets the trajectory of the conversation and tells the AI what role to play.
Once the user picks a job — or types something that implies one — the next two or three turns should be about gathering constraints, not generating output. What's the audience? What's the tone? What's the deadline? What's the risk level? But — and this is critical — only ask for constraints that actually change the output. Every unnecessary question is friction. Ask what you need. Skip what you don't.
Keep the conversation focused on making progress in the Job
AI conversations drift. Without a job anchor, users start experimenting — trying random things, changing direction mid-stream, using the AI as a search engine and a writer and a calculator in the same thread. The conversation becomes a grab bag, and the output quality degrades because the AI has no stable Job to work from.
The fix is simple: make the Job visible. A "recap and re-anchor" message — "So far, we're working on a summary of Q3 results for your leadership meeting. Here's where we are." — does two things. It confirms the AI understood the Job. And it gives the user a natural point to say "actually, I need to change direction" instead of just wandering.
Scope boundaries matter too. When a user asks for something outside the defined job — "can you also book a meeting to discuss this?" — the response should be a graceful redirect, not a generic error. "I can't schedule meetings, but I can draft the agenda for that discussion based on what we've built so far." That keeps the conversation productive and honest about what the AI can and can't do.
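Both moves can be sketched as plain message builders: a recap that restates the Job, and a redirect that names the limit and offers an in-scope alternative (all strings and action names below are illustrative):

```python
SUPPORTED_ACTIONS = {"summarize", "draft", "analyze"}

def recap(job: str, progress: str) -> str:
    """Re-anchor the thread: restate the Job and invite a course correction."""
    return (f"So far, we're working on {job}. {progress} "
            "Keep going, or change direction?")

def respond(action: str, job: str, alternative: str) -> str:
    """Execute in-scope actions; redirect out-of-scope ones gracefully."""
    if action in SUPPORTED_ACTIONS:
        return f"Working on it: {action} for {job}."
    # Graceful redirect: name the limit, then offer an in-scope next step.
    return (f"I can't {action}, but I can {alternative} "
            "based on what we've built so far.")
```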
Shape the output to the Job, not the model
This is where AI chat fails hardest. The model's default output is verbose, thorough, and structured like an essay. But the user's job almost never calls for an essay.
If the Job is "decide what to do next," the output should be bullet action steps with confidence signals — not four paragraphs of context the user already knows. If the Job is "draft something I can edit," the output should be a draft with quick levers for tone, length, and audience — not a finished document that feels too polished to touch.
The format of the response should match the outcome the user needs, not the model's default behavior. And the response should include affordances — follow-up suggestions, "turn this into X" buttons, "make this shorter" toggles — so the user can move from output to next step without having to craft another prompt from scratch.
The best AI conversations don't end with the user reading a wall of text. They end with the user doing something with the result.
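A sketch of that idea: the response shape is looked up from the Job's outcome, and the affordances ride along with the content so a renderer can turn them into buttons or toggles (the mapping is invented for illustration):

```python
# Hypothetical mapping from the Job's outcome to a response shape.
OUTPUT_SHAPES = {
    "decide":  {"format": "action_bullets",
                "affordances": ["expand reasoning", "show risks"]},
    "draft":   {"format": "editable_draft",
                "affordances": ["make shorter", "change tone", "change audience"]},
    "explore": {"format": "option_list",
                "affordances": ["compare options", "go deeper on one"]},
}

def shape_response(outcome: str, content: str) -> dict:
    """Package content in the format the Job calls for, plus next-step levers."""
    shape = OUTPUT_SHAPES[outcome]
    return {"format": shape["format"],
            "content": content,
            "affordances": shape["affordances"]}
```

The user moves from output to next step by clicking a lever, not by crafting another prompt from scratch.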
Mind the Gulf of Evaluation
Don Norman described the Gulf of Execution as the difficulty of figuring out how to get a system to do what you want. AI collapses that gulf. You just say what you want.
But it widens a different gulf — the Gulf of Evaluation: figuring out whether the system did it right.
If the AI writes a thousand lines of code in a second, the execution gulf is zero. But if the user has to spend four hours reading that code to make sure it's secure, the evaluation gulf is massive. You haven't saved anyone time. You've just changed the shape of their anxiety.
Designing for the evaluation gulf means making the AI's work scannable, not just correct. It means showing where the answer came from so the user can verify it. It means flagging the parts the AI is uncertain about so the user knows where to focus their attention.
If you automate the doing but complicate the checking, the Job isn't done. It's just moved.
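One hypothetical way to narrow the evaluation gulf in code: attach a source and a confidence to each claim, and render low-confidence claims so they jump out during review (the schema and threshold are assumptions of this sketch):

```python
REVIEW_THRESHOLD = 0.7  # assumption: below this, ask the user to look closely

def annotate_for_review(claims: list[dict]) -> list[str]:
    """Render claims so the user can scan them instead of re-deriving them.

    Each claim is a dict with "text", "source", and "confidence" (0.0-1.0);
    this schema is an assumption of the sketch, not a standard.
    """
    lines = []
    for claim in claims:
        flag = "CHECK" if claim["confidence"] < REVIEW_THRESHOLD else "ok"
        lines.append(f'[{flag}] {claim["text"]} (source: {claim["source"]})')
    return lines
```

The user's four hours of reading become a scan for `[CHECK]` markers, which is the whole point: the checking gets cheaper, not just the doing.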
When chat is the wrong tool
Not every AI job belongs in a conversation. Some jobs are better served by structured UI from the start — high-precision tasks with many parameters, complex comparisons, anything where the user needs to see everything at once instead of revealing it turn by turn.
The tell is when users start typing "where do I find..." into the chat. That's a navigation job, and the chat is acting as a bottleneck instead of an accelerator.
The best AI products know when to pivot from conversation to interface — when to stop talking and start showing. Chat is a powerful tool for fuzzy intent, early exploration, and jobs where the user doesn't know exactly what they need yet. It's a terrible tool for jobs where they do know and just need to execute precisely.
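That pivot can be treated as a routing decision. A toy heuristic, with made-up inputs and thresholds:

```python
def best_surface(intent_is_fuzzy: bool, parameter_count: int,
                 needs_full_overview: bool) -> str:
    """Route a job to chat or structured UI; thresholds are illustrative."""
    if intent_is_fuzzy:
        return "chat"  # conversation is how the Job gets defined
    if parameter_count > 3 or needs_full_overview:
        return "structured_ui"  # precision work: show everything at once
    return "chat"
```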
The question isn't "should we add chat?" The question is: for which jobs is conversation the fastest path to progress? Design chat for those. Design something else for the rest.