How to build deep analysis one controlled layer at a time
We’re used to thinking of analysis as something linear: you pose a question, crunch some numbers, and get an answer. But when you’re working with AI, that model breaks.
An assertion without an assessment is just an opinion
An AI will always try to answer your question (as it understands it). When it replies, its answer “feels” confident: a flat, bold assertion that yes, this is true, or option B is “likely.” But how useful are these assertions?
Hardly at all.
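One practical fix is to ask for the assessment alongside the answer: a confidence level, the evidence behind it, and what would change the model’s mind. Here is a minimal sketch using the OpenAI Python SDK; the model name, question, and prompt wording are illustrative assumptions, not a prescription.

```python
# A minimal sketch: ask for an assessment, not just an assertion.
# Assumptions: OpenAI Python SDK, the model name "gpt-4o", and the
# prompt wording are illustrative choices, not the article's method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Will switching to weekly releases reduce our defect rate?"

# Wrap the question so the model must grade its own answer.
assessment_prompt = (
    f"Question: {question}\n\n"
    "Answer, then assess your answer:\n"
    "1. Your answer in one sentence.\n"
    "2. Confidence (low / medium / high) and why.\n"
    "3. The key assumptions your answer rests on.\n"
    "4. What evidence would change your mind."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": assessment_prompt}],
)
print(response.choices[0].message.content)
```

The point of the wrapper is simply that a graded answer gives you something to probe; the exact rubric matters less than forcing one to exist.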
What you see depends on where you're looking from
Remember the old fable of the blind men and the elephant? Any single vantage point can miss a thing’s essential nature. The best protection against this is looking from multiple angles.
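In practice, “multiple angles” can be as simple as re-asking the same question from several explicit vantage points and comparing what each one sees. A sketch, again assuming the OpenAI Python SDK; the personas and the question are invented for illustration.

```python
# A sketch of the "multiple angles" idea: pose the same question from
# several explicit vantage points and compare the answers. The personas
# and question below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

question = "Should we migrate the billing service to event sourcing?"

angles = [
    "a skeptical operations engineer focused on failure modes",
    "a product manager focused on delivery risk and timelines",
    "a database specialist focused on data integrity and migration cost",
]

for angle in angles:
    prompt = (
        f"Answer as {angle}.\n"
        f"Question: {question}\n"
        "Give your strongest argument and the one thing the other "
        "perspectives are most likely to miss."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {angle} ---")
    print(response.choices[0].message.content)
```

Where the answers disagree is usually where the elephant is hiding.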
Make assumptions explicit, and then refine them
AI models don’t “think” in the human sense. But they do embed assumptions—lots and lots of them. Hidden priors, structural defaults, causal heuristics, statistical norms. Often, when you ask the AI a question and it responds with confidence, what it’s really doing is applying a stack of unspoken assumptions to your input.
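One way to drag those assumptions into the open is a two-pass prompt: first ask the model to list the assumptions it would apply, then re-ask the question with corrected assumptions pinned in place. A sketch under the same SDK assumption; the question and the corrections are hypothetical.

```python
# A sketch of surfacing hidden assumptions before accepting an answer.
# Pass 1 asks the model to enumerate its assumptions; pass 2 re-asks
# the question with corrections made explicit. Prompts, question, and
# model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI()

question = "Why did signups drop 20% last month?"

# Pass 1: surface the assumptions the model would silently apply.
listed = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Before answering, list every assumption you would need "
            f"to make to answer this question: {question}"
        ),
    }],
).choices[0].message.content
print("Assumptions the model would apply:\n", listed)

# Pass 2: refine. Correct the assumptions that don't hold, then
# re-ask with those corrections stated up front.
corrections = "Assume no pricing change occurred; traffic was flat."
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            f"Question: {question}\n"
            f"Work only from these corrected assumptions: {corrections}\n"
            "Flag any additional assumption you still have to make."
        ),
    }],
).choices[0].message.content
print("\nAnswer under explicit assumptions:\n", answer)
```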