Relational AI

Method 6: Recursive Drill-Down

Staying in the conversation

Brock Hart
Apr 23, 2026

This is the last of the six methods. And it’s the one that took me longest to figure out. It’s not complicated, but my instincts were wrong.

When I first started working with AI, I did what most people do. I’d give it a prompt, read the output, and make a decision: good enough, or not. If it was good enough, I’d use it. If it wasn’t, I’d either accept it with mild disappointment or abandon the AI entirely and rewrite the thing myself.

Both of those instincts waste the most valuable part of working with AI.

The shift happened when I started staying in the conversation. Instead of accepting or abandoning, I started giving feedback. “This part works, but this part doesn’t land. Try it again with more emphasis on X.” Then reading the revision. Then giving more feedback. Five exchanges. Eight exchanges. Ten.

And the quality went up dramatically.

Once I started working this way, I couldn’t believe I’d ever worked any other way. It’s so obviously better. But I had to discover it, because my default instinct—evaluate and move on—was getting in the way.

The first draft is raw material.

I think most importantly: the first output isn’t the product. It’s the raw material. A zero draft. A block of clay that exists so you have something to shape.

The value isn’t in what the AI generates on the first pass. The value is in what happens over the next several exchanges as you push back, redirect, expand, compress, and refine until the output matches your actual standard.

This sounds obvious when you say it out loud. But watch how most people use AI and you’ll see they’re treating the first draft as the product. They read it, judge it, and move on. If it’s not right, they assume the AI can’t do what they need.

What they’re missing: the AI can revise faster than you can rewrite. Telling it exactly what to change and having it execute in seconds is dramatically faster than doing it yourself. But it requires a skill most people haven’t developed: the ability to articulate precisely what’s wrong and what you want instead.

The taste problem.

This method works for me because I’ve been writing my whole career. Reports, blogs, newsletters. I’ve written enough that I know my voice. I know what sounds like me versus what doesn’t. I know my taste.

So when I read an AI output and something feels off, I can usually name what’s wrong. “This is too formal.” “This hedges too much.” “This doesn’t sound like how I’d actually say it.” I can direct because I have something to direct toward.

If you’re earlier in that journey—if you’re still developing your sense of what you gravitate toward, what your voice actually sounds like—the method might work differently for you.

I think about this like when I was a brand new designer. Before I’d had enough exposure to different clients, different brands, different visual styles, I had things I liked. But my own style, my own taste—not just replicating someone else’s—that came through doing it. Through play. Through exposure.

We used to get excited when a new design awards annual came out. We’d flip through the pages to see what was winning, what people were doing, just to get a sense of the landscape. You develop taste by exposing yourself to good work and then trying your own version of it.

So if you’re developing your taste as a writer, you might need reference first. Something to style this after. Something you like that you’re trying to do your own version of. “Here’s a piece of writing I admire. Help me understand what makes it work. Now help me write something in that spirit.”

But here’s the thing: the Drill-Down itself can help you develop taste. Every time you read an AI output and say “this doesn’t work” and then have to explain why, you’re making your standards more explicit. You’re learning what you like through the act of refining.

Sometimes I’ll give the AI feedback and realize I’m articulating something I didn’t know I thought. The act of explaining why it doesn’t land clarifies my own standards. I’m learning about myself through the refinement process.

So the method works in two modes. If you already have taste, it helps you articulate and apply it. If you’re developing taste, it helps you discover and build it.

What this actually looks like.

I should say something about what this process actually looks like in practice. Because it’s messier than writing about it tends to convey.

The clean version of giving feedback would be something like: “Make it warmer. Keep everything else the same.”

But that’s not how I actually do it. What I actually do is closer to a voice dump. I read the draft, something doesn’t land, and I start talking. I might say something like:

“Okay, so this section—it’s technically accurate but it feels too formal. Like, I would never say ‘leverage the capabilities’—that’s consultant-speak. And the third paragraph, there’s a phrase in there, ‘fundamentally different cognitive posture’—that’s not how people talk. I want this to feel like I’m explaining it to someone I respect, not presenting it to a committee. Also, remember we’re always trying to keep psychological safety in mind here—we’re not making anyone feel bad for using a tool in a way that felt intuitive to them...”

That’s a two-minute voice dump. Rambling. Not cleanly structured. But the AI gets it. It takes all of that context and reflection and produces a revision that addresses the specific words I flagged, the tone I’m reaching for, and the underlying intention I mentioned.

The clean prompt—“make it warmer”—is the summary, not the input. You don’t have to write clean prompts. You can just react, reflect, and let the AI extract what you mean.

This connects back to Method 2, Voice Dump Synthesis. Your reaction to a draft is itself a voice dump. You’re not writing instructions, you’re thinking out loud about what’s working and what isn’t. The AI turns that into action.

The cognitive shift.

Something interesting happens when you work this way. You’re not writing anymore. You’re evaluating and directing.

That’s not easier than writing. It’s different. And it’s actually demanding in its own way because you have to articulate your standards instead of just feeling them. “This isn’t right” is easy. “This isn’t right because the tone is too distant and the third paragraph assumes knowledge the reader doesn’t have” is hard. But that articulation is where the skill develops.

I think of it like working with a talented junior colleague. You wouldn’t rewrite their draft from scratch. You’d send it back with specific notes. “The structure is good but the opening doesn’t hook. Try leading with the story instead of the context.” Same dynamic here. The AI does the mechanical work. You do the higher-order work of knowing what good looks like or figuring out what good looks like through the process.

The compounding effect.

And then there’s a move you can make here that’s very cool.

The instructions you give while iterating don’t disappear. They become prompts you can use next time.

Let’s say you’re writing a report that goes to the same audience regularly. You’ve refined the writing, the style, the level of detail. They love it. Everyone’s feeling good about it. In that thread, you can ask the AI to capture what you’ve figured out:

“Based on how we’ve refined this piece, write me a prompt I can use next time we’re doing this kind of work. Capture the voice, the style, the audience expectations, and the things we kept adjusting for.”

The AI extracts your taste from the interaction. It articulates the standards you’ve been applying, often more clearly than you could have written them yourself. And now you have a prompt that starts your next project from where this one ended.

Dialogue generates instruments. The thinking you do while sculpting one document becomes the template for the next one. The voice corrections you make in conversation become your style guide. The structural preferences you articulate while editing become your default instructions.

So there’s a loop: you refine through dialogue, the dialogue reveals your preferences, you ask the AI to capture those preferences as a prompt, and that prompt makes the next first draft better. The two modes—instrumental and dialogic—feed each other.
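For readers who think in code, that loop has a simple shape. Here’s a minimal sketch in Python. The `ask_model` function is a placeholder for whatever AI interface you actually use (it just returns a dummy string here so the example runs on its own); the names and the message format are my illustration, not a real API. The point is the structure: draft, react, revise, and finally ask for a reusable prompt.

```python
def ask_model(history):
    """Placeholder for a real model call. Counts feedback rounds so far
    and returns a dummy revision string."""
    feedback_rounds = sum(1 for m in history if m["role"] == "user") - 1
    return f"Revision after {feedback_rounds} rounds of feedback"

def drill_down(first_prompt, feedback_notes):
    """Iterate on a draft, then extract a reusable prompt at the end."""
    history = [{"role": "user", "content": first_prompt}]
    draft = ask_model(history)
    history.append({"role": "assistant", "content": draft})

    # Each note is a voice dump, not a polished instruction.
    for note in feedback_notes:
        history.append({"role": "user", "content": note})
        draft = ask_model(history)
        history.append({"role": "assistant", "content": draft})

    # The compounding step: ask the model to capture what you refined.
    history.append({"role": "user", "content":
        "Based on how we've refined this piece, write me a prompt I can "
        "use next time. Capture the voice, style, and audience "
        "expectations, and the things we kept adjusting for."})
    reusable_prompt = ask_model(history)
    return draft, reusable_prompt
```

The two return values are the two modes from above: the refined document itself, and the instrument you carry forward to the next one.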

The moves.

The Tone Shift is about voice and register. Sometimes the content is right but the way it’s said is wrong. Too formal. Too casual. Too safe. Instead of rewriting, you name what’s off, or more accurately, you react to what’s off and let the AI figure out what to change.

The Pivot redirects without starting over. Keep what’s working, fix what isn’t. This is the move that separates experienced users from beginners. Beginners start over. Experienced users isolate the problem, name it, and redirect.

Expansion and Contraction controls depth. These seem like formatting moves but they’re actually prioritization decisions. When you say “expand this,” you’re deciding it deserves more attention. When you say “compress that,” you’re deciding it matters less than you thought.

When to use it.

Always. This is the method with the highest frequency of use. If you’re working with AI and you’re not doing some version of this, you’re leaving most of the value on the table.

After every first draft. The zero draft exists to be refined. Five to ten iterations on a single document is normal. Two is usually not enough.

When something is almost right. The 80% draft is the most dangerous. It’s good enough that you’re tempted to accept it and bad enough that it doesn’t represent your actual thinking. This is exactly where the Drill-Down earns its keep.

When you can feel the problem but can’t name it. Describe the feeling. “Something about this doesn’t land for me yet. I think it’s too safe.” The AI can work with that. It can’t work with silence or acceptance.

What’s underneath this.

The Recursive Drill-Down isn’t just producing better documents. It’s producing a more self-aware writer and thinker.

Every time you explain why something doesn’t work, you make your standards more explicit. Every time you articulate what good looks like for this particular piece, you develop a clearer sense of your own judgment. The method is training your critical thinking, using the AI as a feedback loop.

And unlike the other methods, this one compounds. The instructions you give today become the prompts you use tomorrow. Your taste gets encoded. Your preferences get externalized. The refinement work you do in one project accelerates every project that follows.

And that brings us back to where this whole series started.

The Handshake builds self-awareness about how you work. Voice Dump captures your unfiltered thinking. Pattern Hunter and Blueprinting give it structure. The Stress Test challenges it. And the Recursive Drill-Down refines it until it represents not just your ideas but your judgment, your taste, and your voice.

The six methods together aren’t a workflow. They’re a thinking partnership with your AI.

What comes next

This is the last of the six methods. But it’s not the end.

The methods are a framework for thinking with AI. They’re not a checklist. They combine differently for different kinds of work. They evolve as your practice deepens. Many people will develop their own ways of working that become their own methods. Those methods become second nature, and then they become invisible.

What I’m working on next: how these methods connect into larger patterns. The sequences. The workflows. The way a piece of work might move through Voice Dump, Pattern Hunter, Blueprinting, Stress Test, Drill-Down, or skip some and repeat others, depending on what the work needs.

And behind all of that: the Method Engine. Where all of this gets codified into something usable. A reference and tool for use with your AI of choice.

More on that soon.

© 2026 Brock Hart