Holding Less
What I learned using AI through my daughter’s health crisis.
Two weeks ago, my daughter collapsed at the front door.
Over the next ten days, she was admitted to hospital, transferred to a children’s hospital an hour from home, diagnosed with a rare neurological condition, underwent treatment, and came home to begin recovery.
This is not her story. And it’s not the whole story—my wife and I traded shifts, our family, friends and community showed up for us, the healthcare professionals were extraordinary. But there’s a piece of it I want to tell. The piece about what happened because I had AI as a thinking partner through something like this.
When she collapsed, we went to the emergency room. That’s what you do. Sitting in the car after, I downloaded everything into a thread—all the symptoms, everything that had happened over the previous days, all the pieces that weren’t fitting together. At this point, working with AI is second nature to me. When I need somewhere to put the pieces while I try to make sense of them, that’s where I go.
We were sent home. Viral infection. She’d get better on her own.
As I updated the thread on what happened at the ER, it kept telling me—every single message—you need to go back. You need to push. And that’s where I got skeptical of how useful this might be. We were already paying attention. We were already monitoring her. The doctors had said if anything worsens, come back.
I said to it: look, I’m balancing what you’re saying with what I’m observing, what I’m seeing, and what I’ve heard. And it said, yeah, fair enough. Here’s what I think you should still pay attention to. But let’s hope she just gets better.
That’s what I mean by using it to think. Not just accepting an output. Pushing back. Offering context. Letting it adjust.
What I didn’t realize at the time was that I wasn’t using AI for answers. I was using it to reduce the amount I had to carry in my head.
Then things got worse. And we went back. And this time, we didn’t go home for ten days.
I use AI as a thinking partner for work. That’s where I’ve built the practice. Strategy. Writing. Decision-making. I hadn’t really used it to navigate something like this—a personal crisis, fast-moving, high stakes. But in an incredibly difficult situation, I just used the skill set and resources at my disposal.
I wasn’t using AI to replace the doctors. But I was using it to show up better alongside them.
Before a doctor came to explain a lumbar puncture or a nerve conduction study, I’d already read about it. I’d already processed the scary words in the chat, privately, so that when the doctor said them in front of my daughter, I didn’t flinch. I’d rehearsed my composure. I could just be steady for her.
One night—the hardest night—I cried my guts out. I recorded myself saying everything I was thinking and feeling. Talked for 35 minutes. Gave all of that to the thread. It shifted from helping me navigate the process to helping me regulate. Process the emotions. Hold the weight.
I fell apart in a text box so I could walk back into her room the next morning clear-eyed and ready for a long weekend.
During that weekend, the doctor said she could go home tomorrow.
I checked the thread. It said: that’s great, but you likely need three people to sign off. You’ve got one. You still need neurology and physio. Temper your expectations.
Instead of spending hours thinking we were going home, I gently suggested to my daughter that we might need two other people to say yes before we could leave. Maybe it’s true, and that would be so exciting. And maybe we’re still days away.
She was frustrated. Angry. Multiple days in at this point. But she was prepared. And when the discharge didn’t happen the next day, she wasn’t blindsided.
We built a strategy for the weekend together. Focus on the work we need to do so we can get home. Don’t worry about the discharge date. Just go a day at a time.
That wasn’t the AI telling me to do that. That’s my parenting philosophy. Be honest. Communicate. Prepare her for what might happen. But the thread helped me see it clearly enough to act on it. I wouldn’t have realized that we needed two additional people beyond her primary doctor to say we were good to go.
Throughout all of this, I never let the AI override the professionals in the room. I was constantly cross-referencing what it said against what the physiotherapist was seeing, what the nurses were observing, what the doctors were recommending. AI can get things wrong. The physiotherapist standing next to the bed does not. AI prepared me for the room. It never replaced the room.
And over and over, what the AI suggested, the humans validated. What the humans told me, the AI had often already prepared me for. They kept corroborating each other.
I used this intentionally. To navigate a complicated process. To regulate. To hold some of the emotional and cognitive load so I didn’t have to carry all of it.
Using AI to show up better. To be more present. To hold less in my head so I could hold more in my arms.
That’s not hype. That’s human.
Here’s what I keep thinking about.
People are already using AI this way. In a recent Globe and Mail article—one a friend sent me, oddly enough, during those two weeks—a survey found that over 50% of Canadians are using ChatGPT for health information. They’re navigating healthcare systems, education systems, all the systems that are complicated and hard to move through.
They’re doing it in the parking lot anyway. Without guidance. Without practice. Without anyone helping them do it well.
I do teach this. I teach organizations and leaders how to use AI as a thinking partner. But that’s not what I’m talking about here.
The question isn’t whether people will use AI in crisis. They already are. The question is whether we will teach them how to use it well.
Through schools. Through public libraries. Beyond AI basics. Beyond prompt tips. A different kind of literacy—one that equips people to navigate uncertainty while using these tools responsibly.
And beyond teaching, we could be building with it. Services that help caregivers navigate systems. Tools that support people through crisis. Ways of working that reduce cognitive load and protect human capacity for the things that actually matter.
We could be doing this safely. With evidence-based practice. With guardrails. So that what this technology is extraordinary at can be deployed in ways that are actually helpful.
I came out of this more committed than ever.
I used this intentionally to navigate a complicated process and experience. I used it to regulate. To hold some of the emotional and cognitive load. And it worked. Those things I was trying to do, it helped me do them.
It held what needed holding. And I showed up better for my daughter because of it.