The Awkward Beginning
There’s a joke on the internet that millennials are the only ones who actually know how computers work.
I was thinking about this the other night and asked my daughter: did they ever teach you how to use a computer?
She gave me the look. The horrified, slightly disgusted look that only a teenager can offer. “What are you talking about?”
It’s obviously not just millennials. But we grew up in a time when this stuff was all brand new. And we were kids. Kids don’t play by regular rules. Play takes over.
For me, it was always an orientation toward what can this do? rather than fixating on what it can’t.
Slow machines. Clunky interfaces. Nothing was intuitive. You had to figure out how it actually worked because nothing just worked yet.
We didn’t learn from a curriculum. We learned by trying stuff. Breaking stuff. Figuring out what happened when you clicked the wrong thing.
And because of that, we built intuition. The kind that stays with you long after everything gets easier. That’s the joke about millennials: we get asked all the time how to do something on the computer because we have an intuition for it.
AI is in that same awkward beginning right now.
It’s powerful. But it doesn’t all quite work yet. You’re figuring stuff out. You’re learning in less-than-ideal conditions, where nothing just clicks into place.
And that’s exactly why this moment matters.
We’re trying to create policy around something that is just starting. And in doing that, we’re losing the opportunity to imagine and play and think.
So which side do you want to be on? Do you want to figure out what this can do? Or do you want to delight in watching it fail?
It’s going to fail all the time. That’s shooting fish in a barrel. If that’s the game you want to play, fill your boots. But you’re missing the opportunity.
We don’t know which platforms will win. We don’t know which AIs we’ll end up with. We know they’ll keep changing and evolving. I can imagine a world where you’re just talking to your AI while you work—seamlessly, naturally, nothing like the chat window we have now.
It’ll get easier. Look how much it’s changed already.
But the people who learn it now, in the roughness, are building intuition that will transfer to whatever comes next. Not because they memorized prompts or because they know how to use Copilot. Copilot might not even be the thing everyone uses in the future.
It’s because they understand how to think with these tools. And that works across models. That’s the skill that lasts.
I’ve been working with AI for two and a half years. Not the way most people use it. Not “write me an email” or “summarize this document.” Although I do that too.
I’ve been thinking with it.
Building context over time. Coming back to the same threads. Letting it learn how I work so it can actually help me think—not just produce.
There’s a discipline here. I call it Relational AI. Methods, workflows, structures. A way of working that compounds instead of resetting every conversation.
Roger Martin has this line I keep coming back to: nothing truly new had the benefit of proof in advance.
When you’re doing something that’s actually different—not a derivative of what we’ve been doing, but fundamentally new—you don’t get to prove it first. You just have to do the work and see what happens.
That’s where we are.
Inside the teams I’m working with, I’m watching it happen.
You teach leaders how to think with AI—not just use it, think with it—and the whole organization starts to shift. People run their own experiments. Teams build shared threads. The quality of thinking changes. This is just starting.
And once people feel augmentation in their hands, they start imagining uses I never would have designed for them. Human creativity takes over.
But it’s not just my client work anymore.
This fall, I started seeing it out in the world. Little signals. Posts online. Small pilots. Teams discovering shared context. Leaders describing breakthroughs they can’t quite articulate yet.
It’s happening. Quietly. At the edges. And if you’ve been paying attention, you can feel that this is the start of something.
Most organizations are still trying to “adopt AI.” Roll it out. Train people on prompts. Hope it sticks.
It doesn’t stick. It becomes shelfware. Because they skipped the part that actually matters.
You can’t get to collective intelligence by skipping individual practice.
You have to teach leaders how to think with AI first. Then you connect those practices into something bigger—shared context, team threads, organizational memory that compounds.
That’s where the real gains are going to come from. Not from AI adoption. From augmented leadership.