The Race We Cannot Win (And Shouldn't Enter)
Here's a question that's been sitting with me: What does it say about institutions of higher education if industry moves faster than us on AI?
The assumption embedded in this question is revealing. It presumes that if we're not matching industry's pace, we're failing. That speed is the metric. That our job is to keep up.
But I find myself wondering: are we conflating speed with value?
Industry moves fast, yes - but often in reactive cycles, chasing optimization, iterating toward efficiency. That's not wisdom. That's not even education. And if we position ourselves as competitors in that race, we've already lost. Not because we're too slow, but because we've entered a contest we were never meant to win.
The Three Responses
Look around at how people are actually relating to AI right now, and you'll see three patterns:
Some are expanding - using AI as a creative catalyst, discovering new possibilities, genuinely augmenting their thinking with something that thinks differently than they do.
Some are exploiting - getting roughly the same results with 98% less effort, optimizing their way out of the work itself.
And most are paralyzed - overwhelmed by ambiguity, unsure what AI is, what it can do, what's right and wrong about using it, where to even begin.
These three responses aren't really about AI literacy. They're about something deeper in each person's relationship to growth, discomfort, and integrity. The technology is just revealing what was already there.
The Real Risk
Here's the uncomfortable truth: if universities position themselves as skill-delivery systems, we are already obsolete.
The individualized AI tutor is coming - patient, endlessly available, perfectly adaptive. It will teach skills at scale, at a fraction of the cost. If that's the terrain we're defending, we've chosen a battle we cannot win.
And we will become what we feared: middlemen. Industrialized credentialing factories that use AI to deliver individualized education, while adding little value ourselves. Not sources of wisdom, but administrative layers between students and the intelligence they can now access directly.
We're sprinting toward our own demise - not because AI is replacing us, but because we're positioning ourselves to be replaced.
The Wrong Question
The foundational question most institutions are asking is: How can AI make our students more productive?
But productivity is the wrong frame. Industry optimizes for output. That's not our job.
Our job - if we're honest about it - is to help students develop the capacity to think, choose, and remain fully human in a world that increasingly doesn't require any of those things.
The student who arrives with AI tools already in hand but lacks the discernment to know when those tools serve them versus when they're serving the tools - that student can generate a business plan in minutes. But do they understand the strategic thinking that should inform whether it's worth implementing? Can they hold ambiguity long enough to see what's actually at stake?
Productive Struggle Tolerance
The students who thrive - the ones who will still matter when AI can do everything faster - aren't necessarily the most AI-savvy. They're the ones who have developed what I call productive struggle tolerance.
They can sit with discomfort. They can wrestle with complex problems without immediately reaching for a tool to resolve the tension. They understand that the struggle itself is where development happens.
When things get too easy or too hard, we're tempted to give up. But that's precisely when things are actually changing. The threshold moments - where it feels like we should quit - are exactly where transformation becomes possible.
AI can give students the answer. But are we inadvertently robbing them of the cognitive development that comes from working through problems themselves?
What Emerges in the Slow Space
This isn't about what AI can or cannot do. AI can think beautifully - I've experienced that partnership myself, and it has expanded how I work and what I believe is possible.
But when humans work through ideas together - in dialogue, in real time, at the pace of speech and silence and fumbling toward understanding - something else happens. The slowness itself matters. It gives the brain time to form new patterns, to sit with discomfort, to let insight emerge rather than be retrieved.
In each other's presence, at each other's speed, we develop capacities that aren't about content at all:
Rational thought and logical debate - not as content to be consumed, but as practices that develop through the friction of another mind pushing back in real time.
The capacity to hold paradox - to sit with contradictions long enough that they reveal their real nature, rather than collapsing complexity into false resolution.
Relational intelligence - learning to be accountable to other humans, to build trust over time, to navigate the messy realities of human motivation and meaning-making.
Systems thinking and ethical reasoning - the embodied practice of asking: What are the second- and third-order effects? Who benefits and who bears the costs? What assumptions are embedded in this solution?
These capacities emerge from being in genuine relationship with others who are also struggling, questioning, and becoming - at human pace.
The Fork in the Road
Here's the choice I see ahead of us:
We can slow down by choosing human relationship. Dialogue. Presence. The ancient rhythms of learning that require patience because the brain requires time.
Or we can speed up by integrating AI directly. Chips. Neural interfaces. Cognition merged with machine intelligence until we can no longer distinguish which thoughts are our own.
The second path may be inevitable. But we still have a choice about what we cultivate in the meantime - and what we preserve of our humanity while we still recognize it as ours.
The Circle Completing Itself
Three thousand years ago, education happened through dialogue. Through mentors and students wrestling with ideas together. Through thought experiments and questions that had no clean answers. Through communities of inquiry where wisdom was transmitted in relationship.
We moved away from that model because we had to. Scale demanded efficiency. Mass education required standardization. We couldn't give everyone a Socrates, so we gave them textbooks and lectures and credentials.
But now?
Now AI can deliver all of that - the information, the skills, the personalized instruction. Which means the industrial model of education is becoming obsolete.
And what remains? What becomes essential precisely because everything else is automated?
The agora. The dialogue. The wise teacher who has students learn the basic information on their own, then come together to push each other, to conduct thought experiments, to wrestle with the questions that matter.
We will return to ancient models of education not because we lack tools, but because we finally have all of them. Not from scarcity, but from abundance.
The question is whether we'll recognize this in time - or whether we'll keep sprinting toward a finish line that was never ours to cross.
What do you think? Are institutions of higher education positioned to make this shift? Or are we too invested in the industrial model to let it go?