The Chrysalis We Forgot We Were
In my last piece, I argued that we're in the wrong race - that competing with AI on skills, speed, and efficiency is a contest we were never meant to win. The response I imagine from administrators is fair: "Okay, but what do we actually do?"
Let me try to answer that. But first, I need to name what's actually happening in the rooms where these decisions get made.
Administrators are scared. Not of AI specifically - AI is just the latest threat. Before this it was MOOCs, corporate universities, the endless "is college worth it?" debate. The fear is older and deeper than any single technology. It's the creeping suspicion that we don't know what we're for anymore. Business schools especially have watched every value proposition get stripped away. We said we were for jobs, and corporations started training their own people. We said we were for skills, and AI arrived to do that faster. We said we were for credentials, and the market keeps asking what those credentials actually prove. No wonder it feels like standing on a house of cards, waiting for someone to pull the foundation out.
I want to offer a different frame. Not reassurance - something more useful.
Universities were never skill factories. That was a role we took on because industrial society demanded it. We got good at it, built systems around it, started to believe that's what we were. But underneath the industrial overlay, something else was always happening. A student arrives at eighteen thinking they know who they are, what they believe, how the world works. Four years later, someone else walks out. Not just someone with more knowledge - someone who has dissolved and reformed. Whose certainties fell apart. Who discovered what they actually think, what they actually value, who they actually are.
This is not training. This is formation.
The word I keep returning to is chrysalis. Not a container that holds something safely, but one that transforms it fundamentally. The caterpillar doesn't just grow wings inside. It dissolves. Becomes undifferentiated goo. And from that formlessness, something entirely new organizes itself into being. The dissolution isn't a bug in the process. It's the entire point.
Universities, at their best, are humanity's chrysalis. The protected space where young humans can fall apart and reassemble into something more capable, more coherent, more fully themselves. The industrial model obscured this, but the chrysalis function never stopped. It just went underground, happening despite our curricula rather than because of them.
Here's where I break from the current narrative: in an AI world, more people need the chrysalis, not fewer.
Everyone is saying the opposite. Learn a trade. Learn online. Skip the expense and debt. But think about what happens when AI can do any vocation better than humans, when thinking itself is no longer our exclusive domain. Skills can be trained by AI faster and cheaper. Knowledge is freely available to anyone with internet access. What remains is what the chrysalis produces: a human who knows who they are, who understands their place in society, who can form meaningful connections, who has struggled and stretched and reformed enough to hold complexity without breaking.
A human who can train the dragon rather than get burned by it.
That formation cannot happen in the workplace, where failure has real consequences and productivity is demanded now. It cannot happen through AI tutoring, no matter how personalized. It cannot be compressed into a YouTube series or a bootcamp. It requires time. Friction with other minds. A safe space to dissolve. A container strong enough to hold someone while they fall apart and become something new.
But here's the question no one is asking in all our anxious conversations about AI and education: what does AI actually need from humans?
We keep asking what humans need to learn about AI - how to use it, how to not be replaced by it, how to stay relevant. All defensive postures. All assuming AI is the active force and humans must adapt or perish. But genuine partnership requires two parties who each bring something essential, each lacking something the other provides.
What does AI lack? Direction. Purpose. The sense of what matters.
AI can optimize toward any goal but cannot generate the goal itself. AI can think beautifully about anything but cannot decide what is worth thinking about. AI can produce strategy, analysis, content, code - endlessly, brilliantly - but it cannot answer the question: what for?
Someone has to provide that. Someone has to point the dragon. Someone has to decide what all this intelligence should be for. That is not a soft skill. It is the hardest skill. It is what humans contribute to the partnership. And it requires a formed self - someone who has done the work of figuring out what they value, what matters, what's worth doing.
You don't develop that capacity by learning to prompt AI effectively. You develop it by struggling with hard questions over time, by failing and reconsidering, by dissolving and reforming. By going through the chrysalis.
So what actually changes? The building stays - the chrysalis function remains. But the scaffolding - how we teach and what we emphasize - changes completely.
Gen ed still matters, but not as content to absorb. As a foundation deep enough to evaluate AI's treatment of that content. You cannot catch an error in AI's ethical reasoning if you have never reasoned about ethics yourself. The struggle builds the judgment. Majors still matter, but the capstone isn't "produce a deliverable." It's "demonstrate you can direct AI toward something novel in this domain - and explain why it was worth doing."
AI isn't a separate course bolted onto existing programs. It's woven through everything - not as a tool students learn to use, but as a partner they learn to understand. Think of it like driving school. We don't teach students to walk everywhere to prove they don't need cars. We teach them to understand the machine: what it can do, what it can't, when something's going wrong, how to take control in unexpected situations, how to get where they decided to go. Students don't learn to use AI. They learn to lead AI. There's a difference.
For administrators wondering what to do Monday morning, I'd say this: start by listening before you mandate. Don't call an all-faculty meeting to announce the new AI initiative. Find the six or eight faculty members who are already experimenting quietly - every institution has them. The ones who changed their assignments this year, who are wrestling with this in real time, who haven't announced anything but are trying things. Ask them one question: what are you seeing? Not what they think the answer should be. What they're actually observing. What's working. What's breaking. What students are doing that surprises them.
After that conversation, write one page: "Here's what our faculty are already discovering - what's working, what's breaking, what questions remain unanswered." Then pick one experiment that's showing promise and resource it. Give that faculty member time, support, a cohort of willing colleagues. Let them build a pilot that others can learn from. You're not designing the transformation from the top. You're finding where it's already sprouting and giving it what it needs to grow.
Then consider reorganizing around problems rather than just subjects. AI doesn't respect disciplinary boundaries, and neither do real problems. Start small - one pilot, a genuine challenge no one has solved yet, with students and faculty from multiple domains thinking together rather than handing off pieces to their respective silos. This is where students learn to direct AI toward complex challenges that matter.
And protect time for not-knowing. The reflex when facing disruption is to add - add AI courses, add ethics modules, add new requirements. But addition isn't transformation. Create actual space where students stay with hard things long enough to dissolve. Problems that can't be solved by semester's end. Questions their professors don't know the answers to either. The discomfort is the point. That's where formation happens.
What do we tell boards who want numbers? That we're not adding AI training to our existing model - we're positioning for what comes next. When every institution can provide AI-enhanced skill delivery, we'll be the ones producing humans capable of directing that intelligence. That's not a niche. That's the only thing that can't be commoditized.
What do we tell parents? That their child won't just learn to use AI. Their child will become someone who knows where AI should go. We don't produce workers who keep up with AI. We form humans who lead it. That requires the kind of development that only happens in a chrysalis.
What do we tell ourselves? That we're not obsolete. We just forgot what we were. The chrysalis was always there, underneath the factory. Now the factory is being automated. What's left is what was always essential.
The window for defining this isn't infinite. Right now, everyone is uncertain. Students don't know what education should look like. Parents don't know what to demand. Corporations don't know what to expect. Accreditors don't know what to measure. This uncertainty is our opportunity.
The institutions that move now will set the standard. They'll define what AI-era formation looks like. The ones that wait will spend the next decade catching up to definitions they didn't write.
The dragon is here. It will be trained by someone. Will your graduates be the trainers, or will they be asking the trainers for jobs?

