What you see depends on where you’re looking from
Remember the old fable of the blind men and the elephant? Each man touches a single part and mistakes it for the whole: there are ways of looking at a thing that miss its essential nature entirely. The best protection is to look from multiple angles.
- “Frame this hiring policy problem from a game-theoretic perspective.”
- “Convert this workflow bottleneck into a constraint satisfaction problem.”
- “Re-express this market failure as a coordination dilemma.”

Why This Matters
These aren’t vertical changes—they’re lateral, disciplinary shifts. They re-encode the problem space in a new grammar and conceptual framework.
Most people talk to AI like it’s one-dimensional: “Give me the answer.”
But real analysis requires controlling the lens and perspective through which a problem is seen: closer, further, a little to the left. Zoom out too far, and you get generalities. Zoom in too close, and you miss structure and relationships. But shift laterally, and you might just unlock a new solution that was invisible from the original frame.
So develop these prompt reflexes:
- “Now reframe this one level up/down.”
- “Now restate this as if you were a…”
- “Explain this in institutional vs. individual vs. systemic terms.”
You’re not just asking a better question. You’re setting the altitude (and attitude) of thought. Shifting between perspectives reveals depth and structure that no single frame exposes, and the result is a fuller, richer analysis.
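To make the reflex concrete, here is a minimal sketch of scripting lateral reframings against a chat API. It assumes an OpenAI-style client; the model name, the `reframe` helper, and the example problem and lenses are all illustrative, not a prescribed method.

```python
# A minimal sketch: run one problem through several disciplinary lenses.
# Assumes an OpenAI-style chat client; model name and lenses are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROBLEM = "Our hiring pipeline stalls at the final-round interview stage."

LENSES = [
    "a game-theoretic problem",
    "a constraint satisfaction problem",
    "a coordination dilemma",
]

def reframe(problem: str, lens: str) -> str:
    """Ask the model to re-encode the problem in a different conceptual grammar."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user",
             "content": f"Re-express the following problem as {lens}. "
                        f"Name the key actors, constraints, and payoffs.\n\n{problem}"},
        ],
    )
    return response.choices[0].message.content

for lens in LENSES:
    print(f"--- {lens} ---")
    print(reframe(PROBLEM, lens))
```

Each call is independent on purpose: the lenses should not contaminate one another before you have compared them side by side.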
Do all this exploration in an opaque, volatile thread: you might hit a lot of blind alleys, or stumble into framings that change the nature of your analysis. Keep all of it out of your primary analytical space until you’ve decided what to do with it; otherwise, the AI can become so corrupted by conflicting and overlapping assumptions and views that it turns muddled and useless. In the tips section, we discuss how to do this.
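As a rough picture of the mechanics, continuing the illustrative client from the sketch above: hold the exploration in its own throwaway message list, distill it once, and seed the primary thread with only the distilled result.

```python
# A sketch of the volatile-thread pattern: explore in a scratch message list,
# then carry only a distilled summary into the primary thread. The client and
# model name are illustrative assumptions, as in the sketch above.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"

def ask(messages: list[dict]) -> str:
    """Send a message list and return the assistant's reply text."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content

# Volatile thread: blind alleys and conflicting frames accumulate here only.
scratch = [{"role": "user",
            "content": "Reframe our hiring bottleneck as a coordination dilemma."}]
scratch.append({"role": "assistant", "content": ask(scratch)})
scratch.append({"role": "user", "content": "Now reframe this one level up, institutionally."})
scratch.append({"role": "assistant", "content": ask(scratch)})

# Distill once: extract only the framing worth keeping.
scratch.append({"role": "user",
                "content": "Summarize the single most useful framing above in two sentences."})
distilled = ask(scratch)

# Primary thread: starts clean, seeded only with the distilled framing,
# so none of the scratch thread's conflicting assumptions leak in.
primary = [{"role": "user",
            "content": f"Working framing: {distilled}\n\nNow analyze the bottleneck in depth."}]
print(ask(primary))
```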

