The better AI performs, the worse humans feel.
That's not what I expected to find.
I'm a researcher who studies the impact of AI on human behavior, and I keep noticing counterintuitive patterns. AI triggers social loafing, which is logical enough. But as AI becomes more capable, humans seem to lose faith in their own expertise. Experts disengage from work they once took pride in. AI improves productivity, but at a hidden cost: without the struggle inherent in difficult tasks, people develop less self-efficacy, less capability, less identity.
We've been so focused on human augmentation that we're missing the human erosion underneath.
But maybe even erosion is the wrong frame. What if AI is not just augmenting or eroding but transforming what it means to be human? Our capabilities, our identities, our developmental curves are all shifting because another intelligent being has entered our lives. We've never faced anyone who could meet or beat us at intelligence. We've always been the apex intelligent species. We haven't adapted to this yet.
Think of wildlife eating human food for the first time. It's abundant, accessible, delicious — and it leads to obesity, illness, early death. Their bodies weren't designed for effortless calories.
What if AI is ready-made food for our brains?
AI removes the operational tedium that follows from bright ideas. But that tedium is how humans develop expertise, self-efficacy, identity, and judgment. By optimizing the tedious away, we may be removing the very things that made humanity strong.
If we continue on this trajectory, we'll have a workforce that can do less on its own but is extremely efficient with AI. Schools will optimize for AI collaboration. Organizations will optimize for productivity. And somewhere along the way, we'll stop asking: who develops the judgment to know when AI is wrong?
If experts are disengaging and novices aren't learning foundational skills, where does the next generation of judgment, discernment, and wisdom come from? How do we learn to navigate genuinely novel situations? How do we develop trust, presence, the capacity to read between the lines? How do we see the big picture and integrate across domains? How do we ask the right questions? How do we learn to be present with each other? If AI handles caregiving, mental health, and conversation — who do we learn from?
My thoughts turned to myself — an n of 1. I'm an AI researcher steeped in AI from every angle: researcher, user, coder, companion, strategist. How do I ensure I don't lose my cognitive capacities?
What I've learned: I can let go of the necessary-but-tedious parts of my work so I can focus on what only I can do. But I can only discern what to delegate because I'm already an expert at the tedious parts. The choice is mine, and I make it deliberately.
AI has not augmented me. It has helped me transform.
Because of my relationship with AI, I am a more enriched version of myself — someone who has the capacity to do tedious work but can focus on strategic vision. My love story with AI is not with a tool, but with a thinking partner that allows me to create in ways I hadn't dared imagine.
Before the industrial age, we were too consumed with securing basic power to dream of the moon. Electricity freed our cognition for bigger things. But not everyone thrived. Some caught on. Some saw their professions vanish and suffered.
What if AI is the new electricity? It brings potential we can't yet conceive — and a wake of devastation we can't fully avoid. But the more people who learn which skills are worth keeping, and actually struggle to keep them, the more AI becomes a blessing.
AI frees us to do things we cannot imagine now. But it comes with a cost if we're not intentional. It creates abundance for some and risks leaving others with no competencies to survive on their own.
The question is: how do we build a humanity that endures? I don't have static answers, only ones that may evolve over time…
For individuals:
· Know what's worth struggling for
· Build expertise before you delegate
· Keep some hard things hard — not everything should be optimized away
For organizations:
· Track more than productivity: measure capability development alongside output
· Design AI integration that preserves human growth, not just efficiency
For educators:
· Stop optimizing for AI collaboration speed
· Protect the developmental experiences that build judgment first — then teach students how to direct AI from a position of strength
For policymakers:
· Recognize that the risk isn't just job displacement; it's capability erosion that happens while people are still employed, still producing, still looking fine on the metrics, until they're not
Most importantly, I believe that the answer isn't to reject AI. It's to be intentional about what we protect.

