A prompt is not the beginning

A prompt is a vehicle for thought. If the thinking behind it is unclear or superficial, no amount of prompt engineering will fix it. Structure helps, but it won’t replace actually knowing your domain or asking the right questions.

AI responds to clarity. It can’t invent understanding you don’t have. An LLM is good at generating answers within a domain, but it needs your expertise to direct it — to ask the right questions, build the right context, point its attention where it matters. Without that, you’re giving it degrees of freedom, and it will use them. The output won’t be wrong per se. It’ll just go somewhere you didn’t intend.

Think before you prompt

This applies whether you’re designing an interface or building one. The same thinking that makes a design brief sharp makes a coding prompt actionable. The domain changes; the discipline doesn’t.

Before writing any prompt, I ask myself:

  • What am I actually trying to achieve? Not the immediate task, but the real goal behind it.
  • What context does the AI need? What would a human expert need to know to do this well?
  • What does a good result look, feel, and function like? How would I recognize it?

Take designing an empty state for a dashboard. A shallow prompt might be: “Design an empty state for a dashboard.” But working through those three questions first changes everything.

The real goal isn’t filling a blank screen — it’s helping users understand why there’s no data yet, and what action will change that. The AI needs context about who these users are, what moment they’re in, what kind of product this is. A B2B tool used by professionals operates differently than a consumer app; default to the wrong conventions and the whole thing falls flat. And good here means a message that doesn’t feel like an error, offers a clear next step, and hits an instructional tone rather than an apologetic one.

The same questions apply when prompting AI to write code. The output changes, the thinking doesn’t.

Instead of obsessing over prompt templates and techniques, I’ve found more value in slowing down before involving AI at all. Writing out my thinking first, then using that clarity to inform the prompt. Treating AI as a thinking partner rather than a magic answer machine. Iterating on my understanding as much as on the prompt itself.

Processing thoughts into a prompt

Once I have clarity on what I’m trying to achieve, I write it out in plain language — then use that as the basis for the prompt. The act of writing is where the real thinking happens. By the time I hand it to the AI, the brief is already solid. One specific application: when vibe coding, I turn my written thinking into a user story before prompting. I give my description to the AI with this:

Turn the following into a user story, including a ticket title, a short description, a definition of success, and acceptance criteria:

Then I paste what I’m trying to achieve and let it run. The user story becomes the structured brief the AI executes against — precise enough to constrain it, clear enough to direct it.
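If you find yourself repeating this step, it’s trivial to mechanize. Here’s a minimal sketch of the wrapper — the function name and the sample description are my own illustrations, not anything the workflow prescribes; the only fixed part is the instruction itself:

```python
# Sketch: prepend the fixed user-story instruction to whatever
# plain-language description you'd otherwise paste in by hand.
# `build_user_story_prompt` is an illustrative name, not an API.

USER_STORY_INSTRUCTION = (
    "Turn the following into a user story, including a ticket title, "
    "a short description, a definition of success, and acceptance criteria:"
)

def build_user_story_prompt(description: str) -> str:
    """Combine the instruction with your written thinking."""
    return f"{USER_STORY_INSTRUCTION}\n\n{description.strip()}"

# Example description (made up for illustration):
description = """
Users landing on a new dashboard see a blank screen. We want an empty
state that explains why there is no data yet and offers one clear next
step, in an instructional rather than apologetic tone.
"""

print(build_user_story_prompt(description))
```

The output is what you hand to the model; everything upstream of this function — the actual thinking — is still on you.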

That’s just one format. The underlying move is always the same: think first, structure second, prompt last.

The real skill

So, the skill worth developing isn’t prompt engineering. It’s thinking engineering — clarifying your goals, understanding the problem deeply enough to describe it, and writing that down in plain English.