Relational AI

Method 5: The Stress Test

Getting feedback before it matters

Brock Hart
Apr 15, 2026

It’s been a few weeks. Apologies for the lapse.

What’s happened is a good problem to have: I’ve been overwhelmed with response. Groups that want to learn how to work with AI like a teammate. Senior executive teams at Fortune 500s curious about relational AI and what it means to augment teams rather than just automate existing processes. Organizations ready to imagine new ways of working that weren’t possible before.

So we’ve been running sessions. Retreats where AI plays a central role on the team—helping everyone think deeper and move faster. Back to back, for weeks.

The other thing I’m hearing from readers, consistently: this series is helping people. People are finding ways to work with AI that make their work life significantly better. And they’re starting to notice how much their peers just aren’t there yet. There’s a gap opening up between people who are learning to think with AI and people who are still treating it like a search engine or a content generator.

Anyway. Method 5.

Last time I walked through Blueprinting—how to explore structure before committing to substance. This week: how to test what you’ve built before the world does it for you.

This one is going to feel obvious if you come from design.

In design thinking, in human-centred design, getting feedback is fundamental. It’s not optional. It’s not something you do at the end. It’s how you iterate. It’s how you take something from interesting with potential to actually working.

Ideas are cheap. Ideas are not hard. Even ideas that feel great aren’t that hard. What’s hard is knowing whether they’ll actually land—how someone will respond to them, react to them, use them. The only way to know that is to get feedback. Take it on, decide what you’re going to do with it, iterate, get more feedback. That’s how we make good work great.

The challenge has always been twofold. First, getting enough feedback, from enough perspectives, quickly enough to actually use it. You can’t convene your advisory board every time you have a half-formed idea. You can’t ask your team to drop everything and review a draft that might not even be the right draft yet. It’s even harder to get real feedback from external stakeholders quickly.

Second, and this is the harder problem: you can’t see your own blind spots. Once you’ve been inside an idea long enough, you can’t evaluate it objectively anymore. You can’t see the assumptions holding it up because they feel like facts. You can’t spot the gaps because your brain has already filled them in. The version of the plan in your head is better than the version on the page, and you can’t tell the difference.

This isn’t a character flaw. It’s how cognition works. Confirmation bias means that once you’ve committed to an idea, your brain filters information to support it. You notice what confirms your thinking and discount what challenges it. By the time you’ve finished building something, you’re often the least qualified person to evaluate it.

AI changes both of these problems.

You can get feedback instantly, from any perspective you can describe, as many times as you want. And because the AI doesn’t share your mental model, it can look at your work from outside and name what’s invisible from inside. It doesn’t have your blind spots. It has different ones—but that difference is exactly what makes it useful.

The feedback grid.

There’s a structure I use constantly for getting useful feedback. Four questions:

  • What do you like about this?

  • How would you improve it?

  • What questions do you have?

  • What else does it make you think about?

Simple. But when you run that grid across multiple perspectives—a skeptical CFO, an enthusiastic early adopter, an author whose thinking you admire, a confused first-time user—you get a map of how the idea lands from different angles. Each perspective surfaces something different.

The grid works because it separates different kinds of feedback that usually get tangled together. “What do you like” surfaces what’s working—which you need to know so you don’t accidentally break it. “How would you improve it” gets specific suggestions. “What questions do you have” reveals gaps and confusions. “What else does it make you think about” often produces the most interesting material—connections and extensions you hadn’t considered.
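If you find yourself running the grid across many perspectives, it’s easy to template. Here’s a minimal sketch in Python—the persona list and the `feedback_grid_prompt` helper are my own illustration, not part of the method; the four questions are the grid itself:

```python
# The four feedback-grid questions, run across multiple reviewer personas.
GRID_QUESTIONS = [
    "What do you like about this?",
    "How would you improve it?",
    "What questions do you have?",
    "What else does it make you think about?",
]

def feedback_grid_prompt(persona: str, work: str) -> str:
    """Combine a reviewer persona, the work under review, and the grid questions."""
    questions = "\n".join(f"{i}. {q}" for i, q in enumerate(GRID_QUESTIONS, 1))
    return (
        f"You are {persona}. Review the work below and answer each question "
        f"from that perspective.\n\n---\n{work}\n---\n\n{questions}"
    )

personas = [
    "a skeptical CFO",
    "an enthusiastic early adopter",
    "a confused first-time user",
]

for persona in personas:
    prompt = feedback_grid_prompt(persona, "Draft proposal text goes here.")
    # Paste `prompt` into a fresh thread, or send it to whatever model you use.
```

One thread per persona keeps the perspectives from bleeding into each other—each reviewer reacts to the work, not to the other reviewers.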

Virtual advisory groups.

One of the most powerful applications of this: creating virtual versions of the people whose perspective you need.

If you’re an executive director preparing for a board meeting, you can recreate your board in a thread. Describe each member—their background, their concerns, the kinds of questions they tend to ask, the dynamics between them. Then bring the board package and say: “Walk me through how each of these board members responds to this. Where do they push back? What questions do they ask? Where does this conversation get stuck?”

That’s not the AI giving you answers. That’s the AI helping you prepare. You’re role-playing the meeting before the meeting happens, so you’ve already thought through the hard questions before you’re in the room.

You can do this with any set of perspectives. Thinkers and practitioners whose work you respect, giving feedback on ideas you’re developing. Stakeholders you’ll eventually need to convince, stress-testing your arguments before the real conversation. The feedback grid gives structure. The perspectives give range.

This isn’t about getting advice. It’s about creating jumping-off points for your own thinking. Different angles. Different reactions. Things you wouldn’t have considered because you’re too close to the idea.

The moves.

The Feedback Grid is the foundation for me. You share an idea and ask for structured feedback from a specific perspective. “You’re a skeptical operations director who’s seen initiatives like this fail before. Using the feedback grid—what do you like, how would you improve it, what questions do you have, what else does it make you think about—give me your honest response to this proposal.”

The structure matters. Without it, you get generic responses. The grid forces specific, usable feedback. Try it both ways—with the grid and without—or substitute your own feedback framework. The point is learning to work with AI deliberately.

The Red Team assigns the AI a hostile perspective and asks it to attack your work. “You’re on the board and you’ve been skeptical of this initiative from the start. You’re reading my proposal and you need to decide whether to fund it. What’s your honest reaction? Where do you push back?”

This surfaces objections that pure logical analysis misses. People don’t reject proposals only on logic. They reject them because the timing feels wrong, or the risk appetite isn’t there, or they don’t trust the team to execute. The hostile perspective captures these.

You can also run a steel man test—instead of asking for the weak objections, ask for the strongest. “If I’m wrong about this, what does the best version of the opposing case look like?” If your idea survives the strongest counter-argument, it’s ready. If it doesn’t, better to know now.

The Pre-Mortem simulates future failure to fix the present plan. “Imagine we’re sitting here a year from now and this whole thing collapsed. I’m trying to figure out what happened. Walk me through it. What were the early warning signs we missed? Where did the plan break down? What did we assume would happen that didn’t?”

The temporal shift is what makes this work. “What could go wrong?” is speculative and your brain resists it. “What went wrong?” is narrative and your brain engages with it. By placing the failure in the past tense, you make it psychologically real enough to analyze seriously.

There’s a complementary move worth pairing with every failure pre-mortem: the success pre-mortem. “Now imagine it went better than expected. What happened in the first thirty days that set it up to succeed?” The failure version tells you what to avoid. The success version tells you what to build.

When to use it.

Before any major commitment. Before you submit the proposal, ship the product, sign the contract, or present to the board. The Stress Test is cheapest before commitment and most expensive after.

When you notice yourself getting defensive. If someone raises a concern and your first instinct is to explain why they’re wrong, that’s a signal. You’re too attached. Run a Stress Test before the attachment costs you.

When the plan feels too clean. If everything seems to fit together perfectly and there are no obvious problems, that’s not a sign of a good plan. It’s a sign you haven’t tested it. Real plans have tensions and tradeoffs.

After Blueprinting. Test the structure before you invest time filling it in. Catching a structural flaw in a skeleton costs minutes. Catching it in a finished document costs hours.

What’s underneath this.

Feedback is how we make things better. That’s not a controversial claim. Everyone knows this. But most of us don’t get nearly enough feedback, because getting feedback is logistically hard and socially awkward. We don’t want to impose. We don’t want to hear criticism. We can’t convene the right people quickly enough.

And even when we do get feedback, we can’t always hear it. Our own blind spots filter what gets through.

AI addresses both problems. You can get feedback instantly, from any perspective, without anyone’s time getting wasted or feelings getting hurt. And because the AI is outside your mental model, it can name the things you can’t see from inside.

But here’s what matters most: the Stress Test doesn’t just improve the work. It improves the thinker.

Every time you run a pre-mortem and discover an assumption you didn’t know you were making, you become slightly more aware of how you build assumptions. Every time a Red Team perspective surfaces an objection you hadn’t considered, you get better at anticipating objections on your own. Every time you run the feedback grid from multiple perspectives, you develop a richer sense of how ideas land differently for different people.

The method is training your own critical thinking, using the AI to show you the patterns in your blind spots.

Over time, the best practitioners won’t need the Stress Test less. They’ll need it differently. They’ll move from “find my blind spots” to “here’s where I think my blind spots are—check my work on that.” The method evolves from discovery to calibration. And that evolution, from unconscious limitation to conscious self-monitoring, is the deepest form of augmentation this framework offers.
