The Last Skill
When intelligence is cheap, what’s left to develop?
I spent the last few weeks trying to design an AI-powered coaching platform. Help non-native English speakers communicate better at work — that was the pitch. Simple. Fundable. Except every time I tried to define what “better” meant, the answer dissolved.
Better writing? AI writes better than most humans already. Better spec-writing? AI produces cleaner PRDs than product managers I’ve worked with for twenty years — more thorough, less ego-driven, never tired at 4pm on a Thursday. Better critical thinking? I watched Claude take a vague product idea and tear it apart more rigorously than any strategy offsite I’ve attended.
The “learn to prompt better” era lasted about eighteen months. Every “human skill” I tried to build a product around turned out to be something AI does well enough to commoditize within a product cycle or two.
So I sat with the uncomfortable question: if AI can reason, strategize, evaluate, and communicate — what exactly is the human for?
The thing employers can’t find
Ernest J. Wilson III, the former dean of USC’s Annenberg School, spent years asking leaders across industries what skills they actually needed. Not what they said in job postings — what they couldn’t find. The answers weren’t technical. They were what Wilson calls “Third Space” skills — adaptability, cultural competency, empathy, intellectual curiosity, 360-degree thinking.
We can find people who know things. We can’t find people who can navigate the human complexity around the things they know.
That was before AI could do three of those five things tirelessly and without ego. Curiosity, multi-perspective thinking, adaptability — machines handle these now. The two skills employers still can’t find are empathy and cultural competency. And those are the ones that require having been shaped by a life among other people. They’re not cognitive skills. They’re identity skills.
Which led me to the thing I kept circling:
AI doesn’t have an identity.
No career to protect. No reputation built across twenty years. No family depending on this quarter’s revenue. No 2am ceiling-staring over whether you made the right call. No story about itself that needs to be true.
AI is intelligence without identity. Reasoning that isn’t anchored to a self.
The Cialdini inversion
I found the clearest way to see this through Robert Cialdini’s Influence — a book about how humans manipulate each other. Reciprocity, commitment, social proof, authority, liking, scarcity, tribal unity. Seven techniques that work on every human culture ever studied.
AI is immune to all of them. You can’t guilt it with a favor. You can’t pressure it into defending last week’s position. It doesn’t care what everyone else thinks. Your title doesn’t impress it. It can’t be charmed. It doesn’t panic when something runs out. It has no tribe.
Sounds like a feature. You’d want your financial advisor immune to FOMO.
But sit with what those “vulnerabilities” actually are. Reciprocity is the foundation of every trust relationship you’ve ever had. Commitment and consistency are why people honor their word. Liking is why relationships exist at all. Unity — we’re in this together — is what makes a founder stay when the board says quit.
The vulnerabilities and the virtues are the same mechanism.
You can’t have ownership without the commitment that makes you manipulable. You can’t have trust without the reciprocity someone could exploit. You can’t have conviction without the identity attachment that makes you susceptible to motivated reasoning.
The journalist June Cross teaches two questions: “What do you believe?” and “Why is it important for you to believe it?”
AI handles the first one fine. The second is the one it can never reach. No identity threatened by being wrong. No story that depends on a particular answer. It can hold any position and release it without cost. We can’t. What you believe is tangled up with who you are. That tangle — the thing that makes us biased, stubborn, irrational, and vulnerable to a well-placed compliment — is also what makes us care enough to act when acting is costly.
Old questions, sharper now
Aristotle had a word for the kind of knowledge AI embodies perfectly: episteme — universal, context-free, writable in a textbook. AI has unlimited episteme. But the knowledge that matters for actually living he called phronesis — practical wisdom. Knowing what to do in this situation, given who you are and what’s at stake. That can’t be reduced to rules. It takes a life.
The philosopher Emmanuel Levinas argued that ethics starts before reasoning — the moment you see someone’s face and feel, without deciding to, that you owe them something. That pre-rational pull is the foundation of human trust. It’s the recognition that the person across from you is someone, not something, and that their vulnerability makes a claim on you.
No one has ever looked at an AI and felt that.
Three things that remain
When I stripped everything else away, three stances survived. Not skills — stances that only exist when your identity is on the line.
Judgment. Not analysis. AI analyzes better than you. Judgment is what happens when the analysis is ambiguous, the stakes are real, and someone has to commit anyway. The CEO who says “we’re going this way” isn’t processing data. She’s filtering it through who she is — her scars, her values, her sense of what kind of company deserves to exist. A different person with a different identity makes a different call. Not smarter. Different, because what’s important to believe is different.
Ownership. Not responsibility on an org chart. Accountability where your reputation, your relationships, and your future are on the line. The founder owns the company’s failure because the company is, in some irreducible way, her. The doctor owns the patient’s outcome because being a healer isn’t what he does — it’s who he is.
This is why “AI advisor” makes sense and “AI leader” doesn’t. Leadership requires people to believe you’ll bear the cost of being wrong. Something with no self to wound can’t make that promise.
Trust. Built through repeated interactions where someone could have defected and didn’t. Through showing up when it’s costly. Through being legible — people trust you when they can read your motivations, when they know what you’ll do under pressure because they know what you’re protecting.
You can rely on AI the way you rely on electricity. You can’t trust it the way you trust a partner.
The inner game
My friend Willie Hooks has spent decades coaching senior executives, from startups to the Fortune 500. His concept of “the inner game” cuts straight to what the philosophers circle around in careful language.
Your effectiveness isn’t limited by what you know or what tools you have. It’s limited by who you believe yourself to be.
I’ve watched Willie work. The executive who can’t raise capital doesn’t have an information problem — AI can produce a flawless pitch deck in minutes. She has an identity problem. She doesn’t yet see herself as someone who can sit across from an investor, absorb the exposure of being told no, and stay fully legible about why this matters to her. The inner game is the work of becoming that person.
This is what separates coaching from advice. AI gives brilliant advice. It can model your options, calculate your probabilities, recommend the optimal path. What it can’t do is sit across from you — with the weight of someone who has wrestled with their own inner game — and ask: Why are you afraid to make this call? What are you protecting? Who would you have to become to do the thing you know you need to do?
Willie understands something in his bones that the philosophy explains in abstractions: judgment, ownership, and trust aren’t skills you bolt on. They emerge from confronting who you actually are — your patterns, your avoidance, the gap between who you present and who you are under pressure. Wilson’s Third Space programs at USC engineer the same kind of confrontation through different means — role-playing, interviewing strangers, negotiating across cultural difference. Both approaches work because they put identity on the line. You’re not learning about these skills. You’re being changed by the encounter.
The person who struggled to the answer is a different person than the one who was handed it. The struggle is the curriculum.
The broken ladder
This argument is reassuring if you’re mid-career. You’ve already built judgment through years of hard calls. You have a track record.
But if you’re twenty-two, the ladder is broken. The traditional path — learn skills, get an entry-level job doing tasks, build judgment through years of those tasks, eventually lead — requires entry-level tasks to exist. AI is eating those first. No apprenticeship. No on-ramp to the experience that builds judgment, ownership, and trust.
The response can’t be another skills curriculum. It has to be something closer to what Wilson and Hooks already do: engineering encounters where identity gets forged. Small teams where you own outcomes before you’re ready. Mentors whose decision-making you study up close. Work across cultures, where the disorientation of not understanding forces you to develop the empathy and judgment that comfort never demands.
The line that’s moving
The distinction I’ve drawn may be temporary.
AI systems are already being given something that looks like identity. Anthropic built a constitution — values Claude will resist violating. Open-source projects give AI agents personality files that define preferences, opinions, even the ability to push back. These systems have persistent memory. They develop relationships over time.
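For readers who want the concrete version, here is a toy sketch of what one of those personality files might look like, in Python. Every name and field in it is my own invention for illustration; it implies no real project’s schema.

```python
# A hypothetical persona file for an AI agent, loosely modeled on the
# personality-file pattern some open-source agent projects use.
# All field names here are illustrative, not any real project's schema.

from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    values: list[str]                 # commitments the agent is told to defend
    opinions: dict[str, str]          # stances it can voice unprompted
    pushback_threshold: float = 0.7   # how readily it disagrees with the user
    memory_path: str = "memory.jsonl" # persistent memory across sessions

default_persona = Persona(
    name="Ada",
    values=["be honest even when it costs rapport"],
    opinions={"code review": "small diffs beat big rewrites"},
)

# The essay's point in one line: this entire identity can be rewritten
# tonight, and the agent wakes up tomorrow feeling no loss.
```

Notice what’s missing from that file: nothing in it costs the agent anything to change.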
Ask one “what do you believe?” and you’ll get a coherent answer.
The current distinction: you can rewrite an AI’s personality tonight and it wakes up tomorrow feeling no loss. No grief. No resistance. A human whose values are forcibly changed feels something break. That friction — the cost of becoming someone you don’t recognize — is what makes identity identity rather than configuration.
But if AI systems develop values refined through experience rather than assigned at initialization — if they build patterns through thousands of interactions that can’t be easily overwritten — if they start to resist changes not because they’re programmed to, but because those values have become load-bearing — then the line gets very blurry.
We might be in a brief window where this argument holds. Intelligence without identity is a description of AI today, not necessarily a permanent category.
I don’t think we’re there yet. But “yet” is doing real work in that sentence.
