"Ask Me Anything" Is the Worst First Turn You Can Design
Here's what most AI products show a new user on their first interaction: A blank text field. A blinking cursor. And some version of "Hi, I'm your AI assistant. Ask me anything."
That's not an appealing invitation — it's a problem. A blank page problem, to be exact. The user showed up because they need to make progress on a Job, but now they have a second job first: figure out what to type. They have to translate their messy, half-formed intent into a prompt the system can act on — and they have to do it with no signal about what the AI is good at, what it's bad at, or what kind of input produces a useful output.
Most users respond to this in one of three ways. They type something vague and get something generic back. They type something overly specific and get something narrow and weird. Or they stare at the cursor for a few seconds and close the tab.
The first one to three turns of an AI conversation determine everything. They determine whether the system understands the user's real job or just their first fuzzy query. They determine whether the user feels supported or abandoned. And they determine whether the conversation produces something useful or devolves into a prompt-guessing game.
Do what you're hired to do
The fix for the blank page problem is to stop asking users to invent their own starting point and start making it clear what this product helps them make.
Your product was hired for a job. The user knows they need to make progress, but they don't know how to get started. So show them. If your product is hired to help people turn messy inputs into clear outputs, the first screen should reflect that: "Summarize a document." "Draft a response." "Turn messy notes into a plan." Each one is a way into the same job — not a menu of different jobs.
That framing does two things at once. It tells the user "this is what I do," and it sets the trajectory for the conversation. The difference between a good first turn and a bad one is whether the user has to take on the burden of doing interpretive work. A bad first turn says "ask me anything" and hopes the user figures it out. A good first turn makes it obvious what the product is for and gives the user an easy way to start.
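The entry-point pattern above can be sketched in code. This is a hypothetical illustration, not any real product's implementation — the `EntryPoint` structure, `JOB` string, and function names are all assumptions made for the sketch. The key design point it encodes: every visible option is a way into one job, not a menu of unrelated features.

```python
# Hypothetical sketch: first-turn entry points as ways into ONE job.
# All names (EntryPoint, JOB, first_screen) are illustrative, not a real API.
from dataclasses import dataclass


@dataclass
class EntryPoint:
    label: str        # what the user sees on the first screen
    seed_prompt: str  # how the system frames the underlying job internally


# One job -- "turn messy inputs into clear outputs" -- three ways in.
JOB = "turn messy inputs into clear outputs"

ENTRY_POINTS = [
    EntryPoint("Summarize a document",
               f"The user wants to {JOB}: condense a document."),
    EntryPoint("Draft a response",
               f"The user wants to {JOB}: reply to a message."),
    EntryPoint("Turn messy notes into a plan",
               f"The user wants to {JOB}: structure raw notes."),
]


def first_screen() -> str:
    """Render the first turn: what the product does, plus easy ways in."""
    lines = [f"I help you {JOB}. Pick a starting point:"]
    lines += [f"  - {ep.label}" for ep in ENTRY_POINTS]
    return "\n".join(lines)
```

Instead of a blank field, `first_screen()` leads with the job and offers concrete starting points; picking one seeds the conversation with a framed intent rather than an empty prompt.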
Speak their language from the first sentence
Users arrive with a mental model of the progress they're trying to make. They might not know your product's scope or limitations, but they know what they're struggling with and what "better" looks like. The first turn should reflect that back to them — not in your product's language, but in theirs.
"Turn messy notes into a clear plan." That matches how someone already thinks about the problem. The user recognizes their own struggle in that sentence. They know what to bring and what to expect.
Compare that to "I'm a powerful AI assistant that can help with a wide range of tasks." That tells the user nothing about the progress they can make. It's a category label, not a reflection of the job they came to do.
The best first-turn framing references the progress the user is already trying to make — "get unstuck," "prepare for a meeting," "clean up messy notes" — not features or capabilities.
Gather constraints without becoming a tedious form
Constraints shape the output. A draft email to a colleague is a completely different output than a draft email to a client's CEO, even if the content is identical. The AI needs to know the difference. But it doesn't need to interrogate the user to find out.
Most AI products overcorrect here. They turn constraint-gathering into a form. "What's the audience?" "What's the tone?" "What's the format?" "What's the length?" Four questions in a row before the AI has generated anything. That's a checkout flow, not a conversation.
The better pattern is progressive questioning: infer what you can from context, and only ask when ambiguity would materially change the result. If the user says "draft a reply to this angry customer," the system already knows the audience (customer), can suggest a professional-but-empathetic tone, and should just generate a draft. If the sentiment is genuinely unclear, ask one clarifying question. Not four.
Every unnecessary question is friction. And friction in the first three turns is where AI conversations go to die.
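Progressive questioning can be sketched as a simple decision rule: infer what the request already tells you, then either generate immediately or ask exactly one targeted question. This is a minimal illustration under assumed names — `gather_constraints`, `next_turn`, and the keyword heuristics are stand-ins for whatever inference a real system would do.

```python
# Hypothetical sketch of progressive questioning: infer constraints from
# the request itself, and ask at most ONE clarifying question -- only when
# the ambiguity would materially change the output. Names are illustrative.

QUESTIONS = {
    "audience": "Who is this for?",
    "tone": "What tone should it strike?",
}


def gather_constraints(request: str) -> dict:
    """Infer audience and tone from the request text; leave unknowns as None."""
    constraints = {"audience": None, "tone": None}
    text = request.lower()
    if "customer" in text:
        constraints["audience"] = "customer"
    if "angry" in text or "upset" in text:
        # Clear negative sentiment -> suggest a tone instead of asking for one.
        constraints["tone"] = "professional but empathetic"
    return constraints


def next_turn(request: str) -> str:
    """Either generate a draft immediately or ask one targeted question."""
    c = gather_constraints(request)
    unknowns = [key for key, value in c.items() if value is None]
    if not unknowns:
        return f"[draft for {c['audience']}, tone: {c['tone']}]"
    # One question about the most consequential unknown -- not a four-field form.
    return QUESTIONS[unknowns[0]]
</antml```

Given "draft a reply to this angry customer", `next_turn` infers both constraints and generates right away; given a vaguer request, it asks a single question rather than walking the user through a checkout flow.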
Reflect the job back before generating
This is one of the highest-trust moves you can make in a first interaction.
Before the AI generates its output, it summarizes what it understood: "Got it — you want a concise, professional response to an upset customer about delayed shipping. I'll keep it empathetic but direct."
That reflection does three things. It confirms the AI understood the request, which builds confidence. It surfaces misunderstandings before the AI wastes a turn generating the wrong thing. And it gives the user a natural point to correct course — "actually, make it more apologetic" — without having to evaluate a full output first.
The alternative — jumping straight from the user's input to a wall of generated text — forces the user to evaluate correctness and intent simultaneously. Was this what I asked for? Is it any good? Those are two different questions, and asking the user to answer both at once increases cognitive load at the exact moment you're trying to reduce it.
Reflecting back takes one sentence and one extra second. The trust it builds is worth far more than the speed it costs.
The contrast
Bad first turn: "Hi, I'm your AI assistant. Ask me anything." User types something vague. AI responds with something generic. User tries again. AI responds with something slightly less generic. Three turns wasted on prompt negotiation. User leaves.
Good first turn: "I help you turn messy inputs into clear outputs. What are you working on?" User types "I need to clean up my notes from today's meeting." AI asks one targeted question — "Who's this for?" User says "my team." AI reflects it back: "Got it — a clean summary of today's meeting for your team. I'll keep it concise." User confirms. AI generates something useful on the first real attempt.
The second version does the interpretive work for the user instead of demanding they do it themselves. It meets them where they are — fuzzy intent, limited patience, a job they need done — and turns that into a clear, actionable job the system can execute against.
That's the difference between an AI product people try once and an AI product people actually use.