Letting Go of the Wheel
A few years ago, I was working at a startup building a conversational psychology tool. We were trying to create something safe that could help people work through their problems using different therapeutic approaches. We designed it the way that made sense to us: separate AI agents for different psychological frameworks. One agent would think about cognitive behavioral therapy, another about psychodynamic approaches, another about acceptance and commitment therapy. They'd all contribute their perspectives, and then a coordinator would synthesize everything into a response.
It was elegant. Interpretable. You could trace exactly which psychological school was influencing which part of the advice. It felt right.
It also didn't work very well.
The problem was that we'd compartmentalized intelligence. Each agent only had access to its slice of the conversation, and when they tried to communicate with each other, information got lost. The system spent so much effort coordinating between modules that it couldn't actually think deeply about the person's problem. We'd optimized for something that made sense to us as humans, not for what would actually help people.
Eventually we tried something simpler: one model, properly prompted, with all the context it needed. No modules. No explicit coordination. Just let it figure out how to integrate different therapeutic approaches on its own.
It worked better. Significantly better.
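The "simpler" approach amounted to a change in where the structure lived: instead of routing the conversation through separate agents, we put every framework and the full conversation into a single prompt. A minimal sketch of that idea, with everything hypothetical (the framework summaries, the prompt wording, and the downstream LLM call are all illustrative, not our actual system):

```python
# Hypothetical sketch: give one model all the context and let it
# integrate the therapeutic perspectives itself, rather than
# coordinating separate per-framework agents.

FRAMEWORKS = {
    "CBT": "Identify and reframe unhelpful thought patterns.",
    "Psychodynamic": "Explore how past experiences shape present feelings.",
    "ACT": "Accept difficult emotions and act on personal values.",
}

def build_prompt(conversation: list[str]) -> str:
    """Assemble a single prompt containing every framework and the
    full conversation history. A real system would send this string
    to an LLM; here we only build it."""
    framework_text = "\n".join(
        f"- {name}: {summary}" for name, summary in FRAMEWORKS.items()
    )
    history = "\n".join(conversation)
    return (
        "You are a supportive conversational tool. Draw on these "
        "therapeutic approaches as you see fit, integrating them "
        "rather than switching between them:\n"
        f"{framework_text}\n\n"
        f"Conversation so far:\n{history}\n\n"
        "Respond to the last message."
    )
```

The point of the sketch is what's absent: no inter-agent messages, no coordinator, no information lost at module boundaries. The model sees everything at once.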
I'd read Richard Sutton's essay on the Bitter Lesson years before this. I thought it was fascinating then, an interesting pattern in AI history. But this was the first time I actually saw it happen in front of me. Not as a story about chess computers or Go, but as a decision I was making, watching it play out in real time.
The lesson stopped being abstract that day.
What the Bitter Lesson Actually Says
Richard Sutton is one of the pioneers of reinforcement learning. In 2019, he wrote an essay looking back at 70 years of AI research and noticed a pattern that kept repeating.
It goes like this:
Researchers try to build AI systems by encoding human knowledge about a domain. In chess, you'd program in opening strategies and endgame positions. In speech recognition, you'd build rules about phonemes and grammar. In computer vision, you'd teach the system to look for edges and shapes.
These knowledge-rich systems work pretty well at first. They're satisfying to build because you can see your understanding reflected in the system. You're teaching the computer what you know.
But then something happens. Someone builds a simpler system that doesn't try to encode all that human knowledge. Instead, it just learns from massive amounts of data and computation. It uses general methods like search and learning at scale. And it wins. Often by a lot.
This happened in chess when Deep Blue beat Kasparov using brute-force search instead of chess wisdom. It happened in Go when AlphaGo used self-play and neural networks instead of human strategy. It happened in speech recognition when statistical methods crushed rule-based systems. It's happening now in almost every domain where we have enough data and compute.
The lesson is: general methods that leverage computation beat specialized methods that leverage human understanding. The more computation you have available, the bigger the gap becomes.
And this is bitter because all that work encoding human knowledge, all that cleverness about how we think the system should work, turns out to be less important than just giving it the resources to figure things out itself.
Why It's Bitter
I think the bitterness runs deeper than Sutton perhaps intended.
It's not just that our clever ideas don't work as well as we hoped. It's that we really, really want to impose our understanding on these systems. We want the AI to learn the way we think it should learn. We want to be able to look inside and see our theories reflected back at us.
There's ego in that. I can feel it in myself when I'm designing systems. The urge to structure things, to create interpretable modules, to impose my understanding of how intelligence works. It feels productive. It feels like I'm doing something important.
But what I've learned over my short career is that my job is actually much simpler and much stranger than that. I'm basically a context manager. I make sure the system has the right information and the right tools. And then I get out of the way.
When people ask me how I design AI systems now, I think: give it what it needs, then let it do its thing. Don't let my theories about how it should work get in the way of letting it actually work.
It's humbling. The thing that makes progress isn't my understanding. It's me recognizing the limits of my understanding.
The Connection I Can't Stop Thinking About
This isn't just an AI thing.
There's a concept in Taoism called wu wei. It's usually translated as "non-doing" or "effortless action," but that makes it sound passive. What it really means is acting in harmony with the way things naturally flow, rather than imposing your will on them.
The classic image is water. Water doesn't force its way. It finds the path of least resistance, flows around obstacles, and eventually shapes stone. Not through force, but through alignment with gravity and time.
We do the opposite. We label everything. We build mental models. We impose structure on reality because we need to understand it, and understanding means categorizing and systematizing and theorizing.
And that's fine for a lot of things. That's how science works. That's how we build knowledge.
But there's a limit. At some point, the labels we use to understand reality become a barrier to experiencing it. You're so busy categorizing what's happening that you're not actually present for it anymore.
This shows up in contemplative practice all the time. People sit down to meditate and immediately start labeling their experience. "That's a thought. That's a sensation. That's anxiety. That's peace." They're trying to understand their mind by imposing structure on it. But the structure itself becomes another layer of thinking, another thing standing between you and direct experience.
Labels can't understand themselves. You're always limited by the framework you're using.
The Bitter Lesson is the same pattern. We try to impose our understanding of intelligence on AI systems. We build our theories into the architecture. But the systems that actually work are the ones where we provide the context and let learning happen on its own terms.
It's not about having no structure. It's about recognizing which kind of structure helps and which kind gets in the way. In AI, the structure that helps is the meta-level stuff: the learning algorithm, the architecture that can scale, the objective function. The structure that gets in the way is the hand-coded knowledge, the brittle rules, the assumptions about how reasoning must work.
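The distinction can be made concrete with a toy contrast. This is a hypothetical example on an invented task (classifying a message as positive or negative); neither function is from any real system, but they illustrate the two kinds of structure:

```python
# Structure that tends to get in the way: hand-coded domain knowledge.
# Brittle by construction -- it only knows the words we anticipated.
RULES = {"great": "positive", "terrible": "negative"}

def rule_classify(text: str) -> str:
    for word, label in RULES.items():
        if word in text.lower():
            return label
    return "unknown"  # silent on anything outside the rules

# Structure that tends to help: a general learning loop.
# No task-specific rules; it improves as you feed it more examples.
def learn_weights(examples, epochs=50, lr=0.5):
    """Tiny perceptron over word features. The 'structure' here is
    meta-level: the update rule, not knowledge about the domain."""
    weights = {}
    for _ in range(epochs):
        for text, label in examples:
            target = 1 if label == "positive" else -1
            words = text.lower().split()
            score = sum(weights.get(w, 0.0) for w in words)
            pred = 1 if score >= 0 else -1
            if pred != target:
                for w in words:
                    weights[w] = weights.get(w, 0.0) + lr * target
    return weights
```

The rule-based version encodes what we know; the learning loop encodes only how to learn, and its capability scales with the data rather than with our cleverness.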
In contemplative practice, there's probably a parallel. The structure that helps might be the practice itself, the commitment to sitting, the framework that supports inquiry. The structure that gets in the way is the constant mental commentary, the need to understand and label everything that's happening.
Flowing With It
I don't have this figured out. I'm still someone who loves understanding things, who gets excited about patterns and connections, who wants to build models of how systems work.
But I'm learning to hold that more lightly.
When I'm working with AI now, I try to notice when I'm imposing structure because it genuinely helps the system learn versus when I'm doing it because it makes me feel smart or because it's interpretable to humans. Those are different motivations with different outcomes.
And I'm starting to notice the same thing in other parts of life. How much energy goes into trying to understand and control things versus just being present with them. How often the need to figure everything out is actually preventing me from seeing what's already there.
The Bitter Lesson isn't really about AI. It's about humility. About recognizing that your theories of how something should work are probably less useful than you think. About learning to provide the right conditions and then getting out of the way.
Whether you're training a neural network or sitting in meditation or just trying to live well, maybe the lesson is the same: stop trying to impose. Start trying to flow.
I'm still figuring out what that means. But I think it cuts deeper than most people realize.
If you've thought about this, or if you think I'm missing something, I'd like to hear from you. You can reach me via email or LinkedIn.