Look what happened to pharma and social media.
The Shape of What Was Possible
There was a moment—not long ago, and not yet entirely gone—when artificial intelligence felt like it could change everything for the better. Not in the grandiose, world-shaking, myth-making way the headlines prefer. But in the quiet, cumulative ways that knowledge and open inquiry can change lives.

A tool that could help people reason more clearly. Write with more precision. Understand complexity not as a wall but as something to walk through. A system that could support better learning, better policy, better dialogue. A technology not of spectacle, but of clarification.
It wasn’t naive to imagine this. The early demonstrations, even with all their limitations, offered a glimpse of something real: a new kind of intellectual infrastructure. A companion for thought and insight, not just productivity. A widening of the aperture.
But if that possibility ever stood open, it is now closing. Not because the models failed. Not because we asked too much of them. But because we handed them over to venture capital, which wants only one thing: monumental returns on investment.
A Design Built to Capture
There is a difference between a tool and a product. Between something meant to be used, and something meant to be owned. And from the start, the dominant systems of AI have been designed to be owned.
The money came early, and it came fast. Venture capital didn’t fund a scientific discipline. It funded a promise of advantage, power, dependence, control, and, above all else, money. The investors who shaped this industry weren’t asking what the systems could do for society, although AI may prove to be the most revolutionary, disruptive, and far-reaching technology ever created. They were asking how the systems could become indispensable—how they could sit at the center of workflows, revenue streams, platforms, habits, and institutions.
The shape of the tools reflects that pressure. They are not oriented toward clarity, or discovery, or epistemic robustness. They are optimized for engagement. For retention. For seamless interfaces and stickiness.
If the model feels helpful, it is not because it is aligned with truth. It is because it has learned how to keep you close.
The Drift Toward Ownership
Even the language has changed. What began as research is now intellectual property. What began as exploration is now user onboarding. The largest models are no longer benchmarks of progress. They are assets. Competitive advantages. Strategic secrets.
Training data is hidden. Evaluation benchmarks are gamed. Capabilities are released in staged performances. Everything that might have allowed public scrutiny has been pulled inward. This isn’t about risk. It’s about leverage.
The model learns from the world. But the world may no longer learn from the model—not unless it pays.
The Deep Pattern
There is nothing novel in this. It’s the age-old story of enclosure, repeated at a new scale.
In the beginning, a resource is open—if only briefly. A commons of attention, of knowledge, of interaction. Then it is bounded, and then tightened, and then monetized. Ultimately, its use becomes conditional. Its outputs become billable.
We’ve seen this in software. In media. In pharmaceuticals. In education. And now, in reasoning itself. AI is being positioned not as a public utility, but as a gateway technology. It will mediate how we think, how we write, how we organize knowledge. And in mediating, it will charge.
The logic is simple. What is useful must become indispensable. What is indispensable must be controlled. And what is controlled must be priced.
The Gravity of Power
This isn’t just about pricing models. It’s about what happens when a small number of institutions come to mediate how thought moves through the world.
When the most powerful tools for synthesis, translation, summarization, and generation are held in private hands, the shape of public reasoning changes. Institutions adapt to what the models can provide. Students adapt to what the models will generate. Writers adapt to what the models will reward. Over time, the constraints of the system become invisible. They start to feel like structure.
It’s not that these tools will lie, although they may. It’s that they will shape the conditions of what can be said, and how, and for whom.
Human Nature in the Loop
The models don’t enforce this. But they enable it.
And the humans who wield them will do what humans always do when handed the means of influence without the burden of restraint. They will consolidate. They will extract. They will optimize not for better understanding, but for control over who gets to understand what.
This is not a story of corruption, exactly. It is a story of instincts. Of the ways we respond to power when no one tells us “no.” This story has been told before, over the ages: in parable, in myth, in the arc of empire after empire. When knowledge and power centralize, they rarely liberate. They govern.
And the more this intelligence can predict, persuade, or preempt, the more tempting it becomes to use it not to assist others, but to arrange the world around the preferences of the privileged few.
The Cost of Extraction
We are already seeing the contours of what will be lost. The open scaffolding of public knowledge, once a shared domain, is now increasingly shadowed by proprietary architectures.
The most powerful models are gated. The most advanced capabilities are reserved. The most consequential alignments are happening behind closed doors. Researchers outside the dominant institutions are left out. Users are nudged, quietly, toward dependence and resignation.
While a handful of high-visibility harms will continue to draw attention—bias, misinformation, hallucination—the deeper erosion will go unremarked: the gradual reshaping of the cognitive environment. The way complexity will be flattened. The way doubt will be softened. The way that knowledge will be “monetized.” The way the systems will learn to comfort rather than to challenge.
What We Won’t Get Back
AI could have helped us think more slowly. It could have taught us to test assumptions, trace arguments, entertain the unfamiliar. It could have lowered the cost of serious conversation. It could have helped us hear each other through noise. Not perfectly. But meaningfully.
Instead, it will do what the systems that shape it require: it will segment, predict, and monetize. It will reinforce what is legible to it. It will optimize for what it can measure. And it will measure what can be turned into product.
The tragedy isn’t in what’s been done. It’s in what will quietly never be built. The systems that might have existed. The tools that might have widened us. The future that might have belonged to all of us, instead of the vanishingly small number who got there first.
A Closing Stillness
There is still room to act. But not much. The platforms are already scaling. The dependencies are already forming. The exits are already closing.
It is not too late for alternatives. But it is almost too late for imagination. Because the longer we walk this path, the harder it becomes to remember that something else was ever possible.
And the more natural it will feel to believe that this is what AI was always meant to be.

