
That sounds confrontational. Maybe even a little rude. But I mean it literally, and I think it matters.
For over thirty years, I've been teaching people to work with computers. And for most of those years, whenever a student asked "why did the computer do that?" I gave the same answer: "Because you told it to."
It’s not a dismissal. It’s a debugging philosophy. The computer isn’t capricious. It isn’t confused. It executed exactly what you specified. If the output surprised you, the gap is between what you meant and what you said. Trace the causality back to your own input, and you’ll find the problem.
This principle hasn’t changed with AI. It’s just operating at a different level of abstraction—and that’s where people get lost.
When we say an LLM “hallucinated,” we’re smuggling in a whole folk psychology. The word implies the model perceived something that isn’t there—as if it had a reality to check against and failed to do so. As if it meant to be accurate, tried to reference truth, and somehow slipped.
None of that is true. There’s no intention. No reference. No failure in the mechanical sense.
What’s actually happening is simpler and more important: the probability landscape shaped by your input didn’t sufficiently constrain the output toward what you wanted. The model navigated the space you created. If that space was ambiguous—if your prompt contained multiple valid interpretations—the model picked a path. Maybe not the path you intended.
But here's the thing: *you built that landscape*.
This is where I’m tempted to reach for words like “ontological precision” and “semantic disambiguation”—and I can already feel some of you edging toward the exit. Fair enough. I once used “ontological” in a coffee shop and cleared a three-table radius.
But these concepts matter, so let me earn them.
Ontological precision is knowing what kind of thing you’re talking about. Not just the word, but its nature. Is this an object or an action? A property or a relation? A specific instance or a category? When you say “element” in a prompt, do you mean a chemical element, an HTML element, an element of a set, or an element of style? The word is identical. The things are not.
Semantic disambiguation is resolving which meaning applies in this context. Our language is flooded with overloaded terms—especially in technical domains. Object. Type. Class. Token. Instance. Component. Each carries decades of conflicting usage across paradigms and programming languages. When you use them without disambiguation, you’re broadcasting on multiple frequencies and wondering why the signal is noisy.
The LLM isn’t confused by this ambiguity. It’s doing exactly what it’s designed to do: generate probable continuations given the input. If your input supports multiple interpretations, the output will reflect that uncertainty. Garbage in, garbage out—but the garbage is semantic, not syntactic.
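To make that concrete, here is a toy sketch (not a real language model; the names and probabilities are invented for illustration). The prompt conditions the distribution the sampler draws from: an ambiguous prompt leaves the mass spread across several readings of "element," while a precise one concentrates it.

```python
import random

# Toy stand-in for a model's prompt-conditioned distribution over continuations.
# The numbers are invented; the point is that the prompt shapes the space the
# sampler navigates.
next_step_given_prompt = {
    "describe the element": {          # ambiguous: which kind of element?
        "hydrogen": 0.30,              # chemical element
        "<div>":    0.30,              # HTML element
        "x in S":   0.25,              # element of a set
        "brevity":  0.15,              # element of style
    },
    "describe the HTML element": {     # disambiguated prompt
        "<div>":    0.85,
        "<span>":   0.10,
        "hydrogen": 0.05,
    },
}

def sample(prompt: str) -> str:
    """Pick a continuation, weighted by the prompt-conditioned distribution."""
    dist = next_step_given_prompt[prompt]
    choices, weights = zip(*dist.items())
    return random.choices(choices, weights=weights, k=1)[0]

print(sample("describe the element"))       # any of four readings is a valid path
print(sample("describe the HTML element"))  # almost always "<div>"
```

Neither output is a malfunction. The second prompt simply leaves the sampler fewer places to go.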
It gets worse. And this part took me a while to see clearly.
I keep having the same experience; call it a debugging dream. I ask for something simple. In my mind, it’s obvious: a straightforward lookup across a set of elements. But I don’t say it precisely. The response comes back wrong. So I push harder: more words, more emphasis. The results get worse. The model seems to dig in, increasingly committed to the unhelpful direction.
Then I stop. I rewrite the prompt. I add specificity. I resolve the ambiguity I hadn’t realized I’d introduced.
Perfect results.
What’s happening here is mechanical, not mystical: each turn in a conversation becomes part of the context window. If my first prompt introduced ambiguity, the model’s response to that ambiguity now becomes weighted evidence for what we’re “talking about.” The next response samples from a distribution that’s been shifted. Imprecision compounds. Context isn’t just memory—it’s momentum.
Pushing harder without disambiguating just adds more noise to a trajectory that’s already drifting. The model isn’t being stubborn. It’s following the gradient I established.
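Here is a deliberately crude sketch of that momentum. The conversation turns and cue lists are invented, and a real model weights its context far more subtly than this vote count, but the shape of the problem is the same: every turn that mentions one reading of an ambiguous term adds evidence for that reading.

```python
from collections import Counter

# Crude model of "context as momentum": each turn that contains a cue for one
# reading of "element" adds a vote for that reading. (Invented turns and cues;
# a real model's context weighting is far more subtle.)
CUES = {
    "chemical": ["hydrogen", "periodic"],
    "html":     ["<div>", "tag", "DOM"],
}

def interpretation_weights(conversation: list[str]) -> Counter:
    votes = Counter()
    for turn in conversation:
        for reading, cues in CUES.items():
            votes[reading] += sum(cue in turn for cue in cues)
    return votes

drifting = [
    "user: list the important elements",
    "model: hydrogen, helium, and the rest of the periodic table...",
    "user: no, the ELEMENTS. The important ones!",      # pushing harder, no new signal
    "model: of course: hydrogen, oxygen, carbon...",
]
print(interpretation_weights(drifting))
# Counter({'chemical': 3, 'html': 0}) -- every turn reinforced the wrong reading

restarted = ["user: which HTML tag creates a block element in the DOM, <div> or <span>?"]
print(interpretation_weights(restarted))
# Counter({'html': 3, 'chemical': 0}) -- one precise prompt resets the trajectory
```

Pushing harder ("no, the ELEMENTS!") adds turns but no disambiguating cues, so the tally, and the trajectory, stays exactly where the first imprecise prompt put it.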
Here’s where the old lesson and the new reality converge.
“Because you told it to” was always about demystification. Don’t attribute agency to the machine. Don’t treat unexpected behavior as mysterious. Trace it back to your input.
The same principle applies to LLMs—we just have to trace it through probability space instead of execution logic. The people who get consistently good results from AI aren’t incanting magic prompts. They’re not “AI whisperers” with mystical attunement. They’re people who think clearly—or who use the interaction to discover where their thinking is fuzzy.
This is what I’ve come to believe: working with AI is a form of cognitive hygiene. The model is a mirror. When the output is muddled, it’s often because the input was muddled—and we didn’t notice until we saw our own ambiguity reflected back at us.
The vibe coders who struggle aren’t lacking some special technique. They’re speaking in high-entropy language and expecting low-entropy results. The math doesn’t work.
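If you want that math spelled out, here is the back-of-the-envelope version: Shannon entropy over an invented distribution of readings for a vague prompt versus a precise one. The numbers are toy values, not measurements from any real model.

```python
import math

# Shannon entropy of two prompt-conditioned distributions over interpretations
# (invented numbers, purely for illustration).
def entropy_bits(dist: dict) -> float:
    """H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

vague   = {"chemical": 0.30, "html": 0.30, "set": 0.25, "style": 0.15}
precise = {"html": 0.95, "chemical": 0.03, "set": 0.01, "style": 0.01}

print(f"vague prompt:   {entropy_bits(vague):.2f} bits")    # ~1.95 bits of ambiguity
print(f"precise prompt: {entropy_bits(precise):.2f} bits")  # ~0.36 bits
```

Roughly two bits of ambiguity versus a third of a bit. The model doesn't close that gap for you; it samples from whichever distribution you hand it.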
So no, AI doesn't hallucinate.
It responds—precisely, statistically, amorally—to the space you give it. If that space is well-defined, the output tends to be useful. If that space is ambiguous, the output wanders.
The question isn’t “why did the AI do that?” The question is the one I’ve been asking for thirty years: “What did you actually tell it to do?”