The "Human-Centric AI" Lie, and the End of Homo Sapiens

I’ve spent the last three years telling anyone who would listen that the future of AI is "human-centric." I put it in decks. I said it on podcasts. I meant it every time.

I never once defined what I meant by "human."

I'm not alone. About half the strategy decks in the world right now (and roughly 94% of LinkedIn posts) have this phrase somewhere near the front. It tests well. It feels responsible. It signals that you're not one of the reckless ones trying to replace your workforce with a server farm.

But ask any leader what "human-centric" actually means and they'll say something about keeping humans "in the loop." Which really means: we're using AI to make our people type faster, and we haven't fired them yet. We are treating a civilization-shaping technology like a really fast calculator, measuring humans against machines on the machine's terms.

We will lose that race. Forever.

Because here's what I've realized, slowly and uncomfortably: "human-centric AI" is a pacifier.
A phrase designed to let us avoid the one question that would give it teeth.

Until you can define what the human contribution actually is when the machine is smarter than you, you're centering nothing. You're decorating an automation strategy with a values statement.

So let me try to do the thing I should have done three years ago.

Let me actually define what "human" means in this context. To do that, I need to go back about five hundred years.

The Rupture

Every few hundred years, a technology comes along that doesn't just change how we work. It changes what we are. And the way it does this is always the same: it breaks a monopoly that everyone thought was permanent, and forces an entire civilization to answer a question it had been avoiding.

Before Gutenberg, knowledge was a monopoly owned by the Church. Monks copied Bibles. Priests interpreted them. There was no way for an ordinary person to check whether the interpretation was right, because there was no way for an ordinary person to read the text.

The printing press changed one thing: it made reproducing text near-zero cost. That's it. One economic shift. But what followed from that single shift took a century to unfold and broke the world open.

Cheap text meant books could spread. Books spreading meant literacy could spread beyond clergy and aristocracy. And once ordinary people could read, something happened that the Church never anticipated: people started checking. They could read the Bible themselves. They could compare what the priest said to what the text said. For the first time in history, you could verify a claim without relying on authority.

That's what killed the Church's monopoly. Not atheism. Not rebellion. Verification.

And once the monopoly on truth collapsed, a new problem emerged: if the priest doesn't get to decide what's true anymore, who does? You needed a method for determining truth that didn't depend on authority. That's the scientific method. You needed an institution to house that method. That's the university. And the civilization that emerged from open inquiry and systematic doubt, where humans defined themselves as rational beings whose purpose was to figure the world out? That's the Enlightenment.

We redesigned our entire definition of the species.

We became Homo Sapiens. Man the Knower.

And we built everything (schools, corporations, career ladders, legal systems) around that definition.

None of it was inevitable. Each step simply enabled the next. And the reason I'm walking through this chain so carefully is that AI is triggering the exact same sequence. Same structure. Different monopoly.

The Death of "How"

The printing press broke the monopoly on information.
AI is breaking the monopoly on execution.

AI makes synthetic cognition cheap. Pattern recognition, data analysis, coding, legal drafting, financial modeling. The sheer cognitive overhead of getting things done is collapsing toward zero. Think about how you actually spend your day. Your brain devotes roughly 90% of its energy to the how: how to phrase this email so it lands right, how to structure this model so the board can follow, how to write this code so it doesn't break. Maybe 10% goes to the what and the why. The actual intention behind the work.

AI absorbs the how. It acts as a cognitive exoskeleton that handles the mental execution while you handle the direction.

Now here's why this is a rupture and not just an upgrade. For three hundred years, we have defined peak human value as cognitive labor. Knowing things. Analyzing things. Figuring things out. That was the Enlightenment's answer to "what are humans for?", and it served us well. We built entire civilizations around the idea that the smartest person in the room is the most valuable.

When knowing becomes cheap, that definition stops working. Man the Knower has an identity crisis.

And when I said "human-centric AI" for three years without defining it, what I was unconsciously trying to protect was this old definition. I was saying: let's keep humans at the center of cognitive labor. Let's augment their thinking. Let's make them faster and smarter.

But faster and smarter at what? On what terms? Toward what end? I never asked. Because asking would have forced me to confront the possibility that the entire frame, Man the Knower, was the thing being made obsolete.

You Can't Download Scar Tissue

So if AI absorbs the how, humans are left with the what and the why. That sounds like a promotion. It isn't. It's the hardest job there is.

Choosing what's worth doing, and why, with incomplete information and irreversible consequences, that requires something AI cannot provide. Not intelligence. Not speed. Wisdom.

Wisdom is not intelligence. Intelligence is processing power. Wisdom is metabolized experience. It's the judgment that comes from having been through the fire and carrying what it taught you in your body, not just your mind. The capacity to hold ambiguity long enough for a real answer to emerge instead of grabbing the first adequate one. To sit in uncomfortable silence when everyone in the room wants you to fill it with certainty.

The difference between a brilliant 22-year-old and a 50-year-old veteran isn't what they know. It's the scar tissue on their nervous system.

AI can compress the path to competence. A junior analyst can now produce senior-level output in an afternoon. But AI cannot compress the path to wisdom. Wisdom requires genuine exposure to real stakes, real failure, real consequences that rewire how you see the world. You cannot prompt-engineer moral courage. You cannot download scar tissue.

The human contribution I should have been defining all along isn't productivity or knowledge or even creativity. It is the capacity to choose wisely. To be transformed by what you go through. To decide what's worth doing when the machines can do everything else.

Homo Intentionis

That's not a minor software update to the human operating system. It's a species-level transition.

For three hundred years, we organized civilization around Man the Knower. Schools that reward the right answer. Corporations that promote whoever knows the most. Performance systems that measure output. Every institution optimized to produce and reward cognitive labor. And now the thing those institutions were designed to develop, knowing and executing, is exactly the thing being commoditized.

If the scarce human contribution is no longer knowing but choosing wisely, then we need institutions designed for a different species entirely. Not Homo Sapiens, Man the Knower. Homo Intentionis. Man the Chooser.

The Enlightenment understood this. It didn't just change what people knew. It changed what people were for, and then it built entirely new institutions around that new definition: universities, democracies, scientific academies, human rights frameworks. The architecture matched the definition of the species.

We need the same thing now, and it won't look like a faster university. It will look like organizations designed to compress the path to wisdom: getting people to the right struggles faster rather than eliminating struggle, so that a twenty-six-year-old develops the judgment of a forty-year-old because the machine freed a decade of cognitive busywork for actual human development.

The Default and the Design

Here is where it stops being philosophical and becomes urgent. Because these institutions are being designed right now. Just not intentionally.

Every decision about whether AI-freed capacity gets reinvested in human development or pocketed as margin is a design decision about what humans are for. Every entry-level role eliminated without redesigning how the next generation develops is a design decision. Every AI deployment that makes people faster without making them wiser. These are all sentences in an answer that most leaders don't realize they're writing.

The answer being built by default, in most organizations right now, is: humans are an increasingly expensive and unreliable component to be managed around. Nobody intended that answer. But when you optimize for productivity without ever defining what you're centering, that is what the architecture produces.

I know, because I was doing it too. Saying "human-centric" while unconsciously centering productivity. Only in sitting down to write this article did I realize I had been promoting an idea I'd never actually thought through.

The printing press took a century to produce the Enlightenment. AI is forcing the same question on a faster clock. The friction of execution is ending. The burden of choosing wisely is just beginning.

"Human-centric AI" could mean something extraordinary. But only when we stop using it as a comfort blanket and start using it as what it should have been from the start: a design specification for institutions that develop wise humans at a scale the Enlightenment could never have imagined.

What are humans for, now that machines can think?

If the answer is Homo Intentionis, Man the Chooser, then the follow-up question is immediate and personal: what intentions are you bringing to the table? Because in a world where execution is free, that is how your value will be measured. Not by what you can do. By what you choose to do, and why.

That question is coming for all of us. Sooner than we think.
