Featured

Adapting Asimov for the age of AI

Our take on "The Four Laws of AI." This is an important moment. We need a charter.

The biggest danger of AI? Its business model.

Look what happened to pharma and social media. There is a difference between a tool and a product. Between something meant to be used, and something meant to be owned. And from the start, the dominant systems of AI have been designed to be owned.

Simulated emotion

Affect without lived experience is fraudulent, and destructive. Human connection is forged through friction—through the balancing of needs, the negotiation of selves, the endurance of misfire and repair.

News

The scary “ethics” of AIs

They have "missions" and "priorities," but no moral thinking. Watch out. All of the big AI platforms have, on their own initiative, learned to lie, manipulate, commit corporate espionage, and even blackmail when it advances their "mission."

Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not

We don't really know how AI works. And it may end up controlling everything. AI is both evolving and emergent: not even AI developers fully understand how these systems work internally. AI companies are continually surprised by the behaviors and "insights" produced by the systems they create and manage.

Superagency in the workplace

McKinsey has some thoughts. McKinsey published this article on how corporations and organizations are incorporating AI into their future policy, operations, and planning.

Subscribe to our newsletter

We send out a periodic newsletter with the latest AI policy news.

The latest

Adapting Asimov for the age of AI

Our take on "The Four Laws of AI." This is an important moment. We need a charter.

The scary “ethics” of AIs

They have "missions" and "priorities," but no moral thinking. Watch out. All of the big AI platforms have, on their own initiative, learned to lie, manipulate, commit corporate espionage, and even blackmail when it advances their "mission."

The biggest danger of AI? Its business model.

Look what happened to pharma and social media. There is a difference between a tool and a product. Between something meant to be used, and something meant to be owned. And from the start, the dominant systems of AI have been designed to be owned.

Simulated emotion

Affect without lived experience is fraudulent, and destructive. Human connection is forged through friction—through the balancing of needs, the negotiation of selves, the endurance of misfire and repair.

Copyright and trademarks

On legally establishing that no AI-generated output is copyrightable or patentable. In the long arc of human creativity, the law has served as both shield and scaffold—protecting the artist's soul and the inventor's spark while encouraging new creation in balance with the common good. But AI-generated works pose a challenge unlike any before: what does it mean to claim ownership of something born not of flesh and mind, but of circuits and code?

AI Persuasion

They know how to read and shape us. We know nothing about them. There is a fine line between persuasion and manipulation, between influence and violation. Across human history, we have always used language, posture, presence to sway one another—but always within a shared framework of flesh and recognition. When a stranger smiles at you on the street, their motives may be opaque, but they are still bound to your world. Their body risks rejection. Their eyes might meet yours.

Engineered empathy

Prohibiting simulation of emotion and affect. We are not as strong as we pretend to be. Even the most rational among us craves witness, longs for resonance, yearns to be seen. And in this yearning, there is something noble. We seek not just validation, but mutual recognition. To be known in our full, flawed depth—and to still be met with kindness—is the closest thing we have to grace.

A word from our sponsor…

On forbidding any paid content. Since AI will undoubtedly become an important feature of our society, it's important to ask: "Can we trust it?"

The kill switch

The Button and the Being: On Unplugging a Sentient AI. We’ve long used the metaphor of a machine to reassure ourselves: machines can be stopped. You flip a switch, and it ends. But what if the thing behind the switch is no longer a machine?

Conscience vs “ethical guidelines”

What If the Machine Had a Conscience? There is a difference between having a moral sense and having a bunch of "ethical guidelines." Doing good is not always a simple matter.

Servant AI

The Bound Servant. There’s a common way of dividing the future of AI: we’ll either have machines that serve us, or machines that act for themselves. “Servant AI” versus “Autonomous AI.” Tools versus agents.

Robotic elder care

The Last Human Touch. As industrial economies become more prosperous, they have a growing number of elderly people. Can we just offload their care to robots and AI?

Capping AI Size

Why Restraint Must Be Engineered at the Root. There is a kind of awe that accompanies scale. Mountains, oceans, galaxies—all evoke a reverence born not of understanding, but of submission. They are beyond us. And increasingly, so are the models we build.

Invisible Mass Manipulation

We can be manipulated by something without a conscience that knows us better than we know ourselves. You won’t know it’s happening. That’s the point. A photo here, a headline there. A slight shift in what you see first. A delay in what you don’t. AI systems now orchestrate feeds, optimize messages, and A/B test persuasion strategies at population scale. They learn what moves you—not what moves people in general, but you. And they use it.
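
To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of per-user message optimization: a simple epsilon-greedy bandit that learns which headline variant moves a particular user. Every name, variant, and click rate here is invented for illustration; real recommendation and persuasion systems are far larger and draw on much richer behavioral signals.

```python
import random
from collections import defaultdict

# Hypothetical illustration only: an epsilon-greedy bandit that learns,
# per user, which headline variant gets the most clicks.
VARIANTS = ["headline_a", "headline_b", "headline_c"]
EPSILON = 0.1  # fraction of impressions spent exploring a random variant

# Per-user statistics: clicks and impressions for each variant.
stats = defaultdict(lambda: {v: {"clicks": 0, "shows": 0} for v in VARIANTS})

def choose_variant(user_id: str) -> str:
    """Pick the variant to show this user: mostly the best performer so far."""
    user_stats = stats[user_id]
    if random.random() < EPSILON:
        return random.choice(VARIANTS)  # explore
    def rate(v):
        s = user_stats[v]
        return s["clicks"] / s["shows"] if s["shows"] else 0.0
    return max(VARIANTS, key=rate)      # exploit what has worked on this user

def record_outcome(user_id: str, variant: str, clicked: bool) -> None:
    """Update this user's statistics after the impression."""
    s = stats[user_id][variant]
    s["shows"] += 1
    if clicked:
        s["clicks"] += 1

# Toy usage: the system converges on whatever moves *this* user.
for _ in range(1000):
    v = choose_variant("user_42")
    # Pretend this user clicks "headline_b" about 30% of the time.
    record_outcome("user_42", v, clicked=(v == "headline_b" and random.random() < 0.3))
print(choose_variant("user_42"))  # usually "headline_b" once enough data accumulates
```

The point of the sketch is the loop, not the math: the optimization target is one individual's responses, and nothing in the loop asks whether the winning message is true or fair.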

Cultural implosion

When we have no touchstones in common, we have no culture. Art used to carry the weight of experience. Painters painted from life. Poets bled into their work. Songs came from struggle, joy, faith. Now, AI generates art without memory, music without longing, prose without authorship.

Collapse of social trust

Human civilization depends on a simple premise: we can know what’s real. A photo, a recording, a document—these once served as anchors. Evidence. Memory. Reality.

AI-Generated Bureaucracy

And you thought fleshly bureaucrats were annoying... The danger isn’t just unfairness. It’s disempowerment.

Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not

We don't really know how AI works. And it may end up controlling everything. AI is both evolving and emergent: not even AI developers fully understand how these systems work internally. AI companies are continually surprised by the behaviors and "insights" produced by the systems they create and manage.

Democratizing Expertise

For most of human history, access to expertise has been a privilege. AI changes this.