[{"data":1,"prerenderedAt":150},["ShallowReactive",2],{"blog-asking-is-not-enough":3},{"id":4,"title":5,"article":6,"body":7,"date":136,"description":137,"extension":138,"meta":139,"navigation":140,"path":141,"promptVersion":142,"readingTime":143,"seo":144,"stem":145,"tags":146,"__hash__":149},"content\u002Fblog\u002Fasking-is-not-enough.md","Asking is not enough",4,{"type":8,"value":9,"toc":128},"minimark",[10,19,22,27,30,33,36,39,43,54,57,63,66,70,73,76,79,92,95,98,102,105,108,119,122,125],[11,12,13,14,18],"p",{},"The usual workflow goes like this: write a prompt, read the output, decide it looks reasonable, move on. Repeat for the next step. By the end, something is broken, and it's not obvious where — because every individual answer ",[15,16,17],"em",{},"sounded"," right.",[11,20,21],{},"This is the query reflex. It treats an LLM call like a search query: ask, receive, accept. It works fine for one-off questions with no downstream consequences. It fails, quietly and consistently, everywhere else.",[23,24,26],"h2",{"id":25},"plausible-is-not-correct","Plausible is not correct",[11,28,29],{},"Language models are trained to produce coherent output. Coherence and correctness are different things. A model will confidently describe a codebase it hasn't seen, summarize a document with subtle inversions of meaning, or extract fields from text and miss edge cases that only matter in production.",[11,31,32],{},"None of this looks wrong on first read. That's the problem.",[11,34,35],{},"Plausibility bias — the tendency to accept output that reads well — is why unvalidated LLM output breaks workflows at the worst moment. The failure doesn't surface at the prompt; it surfaces three steps later, in a place that seems unrelated. By then, the original output is already treated as ground truth.",[11,37,38],{},"Validation isn't a nice-to-have attached to the end of the process. 
It belongs at the point of output, as a condition of continuing.",[23,40,42],{"id":41},"the-prompt-that-describes-nothing-useful","The prompt that describes nothing useful",[11,44,45,46,49,50,53],{},"Weak prompts fail for a specific reason: they describe a ",[15,47,48],{},"topic"," rather than a ",[15,51,52],{},"task",".",[11,55,56],{},"\"Summarize this document\" is a topic. The model can do something coherent with it. What it can't know is: what does the summary need to contain for the next step to work? What format does downstream code expect? What's the maximum length before another process breaks? What happens if a field is missing?",[11,58,59,60],{},"A task-shaped prompt defines the output contract. Not through over-engineering — not temperature settings and system prompt tuning — but through a simple prior question: ",[15,61,62],{},"what does success look like, and how would I know?",[11,64,65],{},"Prompt technicalities (model selection, token budgets, formatting tricks) matter, but they're downstream of that question. Getting the technical settings right while leaving the task undefined produces well-formatted nonsense.",[23,67,69],{"id":68},"outputs-are-inputs","Outputs are inputs",[11,71,72],{},"The thing that changes how you write prompts is thinking of each LLM call as a transformation node rather than a question.",[11,74,75],{},"A node takes input, does something to it, and produces output. That output is the input to the next node. Which means the output needs to satisfy a contract — a shape, a schema, a set of conditions — that the next step depends on.",[11,77,78],{},"When you design prompts this way, several questions become obvious that weren't before:",[80,81,82,86,89],"ul",{},[83,84,85],"li",{},"What structured data does the next step actually need?",[83,87,88],{},"What happens if a field is absent or ambiguous?",[83,90,91],{},"Where does the chain assume the previous step was correct?",[11,93,94],{},"The last question is the most important one. 
Silent assumptions propagate. A workflow that assumes each step succeeded — without checking — doesn't just have a bug. It has a bug that compounds.",[11,96,97],{},"I've seen this in agentic systems where an early classification step returns a plausible-but-wrong category, and every subsequent step proceeds as if the category were verified. The end state is coherent and completely wrong. No single call was obviously bad. The problem was the absence of checks between them.",[23,99,101],{"id":100},"the-check-before-the-call","The check before the call",[11,103,104],{},"The practical change isn't about prompting technique. It's about what you define before you write the prompt.",[11,106,107],{},"Before calling the model, answer three questions:",[80,109,110,113,116],{},[83,111,112],{},"What specific data does this step need to produce?",[83,114,115],{},"What are the conditions under which that data is good enough to pass forward?",[83,117,118],{},"What does the next step do if this one returns something malformed?",[11,120,121],{},"These questions force you to think about the call as a step in a flow rather than as an isolated question. They make the validation obvious — because you've already decided what the output is supposed to be. And they make weak prompts visible, because a prompt that can't answer \"what does success look like\" hasn't been thought through yet.",[11,123,124],{},"The output of an LLM call is only as useful as the step that uses it. Designing backwards from there — from consumer to producer — is the difference between a pipeline that holds and one that fails somewhere you're not looking.",[11,126,127],{},"Asking is the easy part. 
Knowing what you needed to hear is the work.",{"title":129,"searchDepth":130,"depth":130,"links":131},"",2,[132,133,134,135],{"id":25,"depth":130,"text":26},{"id":41,"depth":130,"text":42},{"id":68,"depth":130,"text":69},{"id":100,"depth":130,"text":101},"2026-03-18","Most LLM workflows fail not because the model gets it wrong, but because nobody defined what right looks like before calling it.","md",{},true,"\u002Fblog\u002Fasking-is-not-enough",3,"6 min read",{"title":5,"description":137},"blog\u002Fasking-is-not-enough",[147,148],"engineering","llm","fbQjM8XyDloXIqXHsTf5AFNHtowIslNeZbkXaKC1T8s",1777017987424]