Method 3: The Pattern Hunter
Working with qualitative data at scale
Last week I walked through Voice Dump Synthesis—how to get raw thinking out of your head and into a form you can work with. This week: what to do when you’re drowning in data.
Pattern hunting has been my whole career in many ways.
For the last twenty years, the research part of my work has centred on qualitative data and its analysis. Design research. Customer interviews. Stakeholder conversations. Thousands of sticky notes. Our offices were filled with walls covered in whiteboards—does anyone remember IdeaPaint?—so we could immerse ourselves in the data, not just because it looked cool to write on the walls. Note-takers at workshops feverishly trying to capture not just what people said, but the meaning underneath it, comparing notes later.
What we were always doing was looking for patterns. Not just that something is happening, but what is happening. Why it’s happening. The insight that actually helps you solve the problem.
Quantitative data can tell you there’s a problem somewhere. Qualitative data tells you why that problem exists. It gets you into the stuff that helps you actually do something about it.
The challenge has always been that qualitative data is notoriously intensive to work with. There are methodologies for this—you sample from your set, you see what patterns emerge, you sample again randomly, and either you’re adding to patterns you’ve already identified or you’re discovering new ones. Once you stop finding new things, you have confidence in the data.
But there was always a practical constraint: the client didn’t have the budget for the hours required to work through all of it. You were always working with less than you had.
AI changes that completely. We can work with all of it now. And not just all of it—we can cut it multiple ways. We can take the same data and put it against a framework, then reorganize it differently, then segment by audience and look again. We can work with qualitative information at scale in ways that just weren’t possible before.
This is the thing AI is built to do. It’s extraordinary at finding patterns. The question is whether you’re asking for patterns in ways that actually surface what matters.
Why the default fails.
When people face a wall of information, they reach for the most natural request: “Summarize this.”
It feels productive. You had ten pages; now you have one. But summarization is a destructive process. It works by deletion. It makes the text shorter by removing detail—and the details it removes are almost always the ones that matter most.
The summarization trap: ask an AI to summarize five interview transcripts and you’ll get something like “Participants expressed concerns about communication, timeline management, and resource allocation.” That sentence could describe any company on earth. It’s technically accurate and completely useless. The specific frustrations, the moments where someone hesitated before changing the subject, the contradiction between what the VP said and what the team lead said—all of that gets compressed into nothing.
Summarization gives you less. Synthesis gives you more. That’s the core distinction. A summary reduces ten pages to one. A synthesis finds meaning across ten pages that doesn’t exist in any single one of them. It’s the difference between making information smaller and making it legible.
There’s a structural reason humans struggle with large qualitative datasets. Working memory holds roughly four to seven items at a time. When you’re reading transcript number eight, the specific phrasing from transcript two has faded. You might remember the general topic, but the exact words someone used, the hesitation, the self-correction—those details are gone.
The AI doesn’t have this problem. It can hold the full text of every transcript in its context window and compare specific language across all of them simultaneously. It can notice that three different people used the word “confused” when talking about the same process, or that everyone praised the leadership team in general terms but got specific only when describing problems. That kind of horizontal read across a full dataset is exactly what human cognition struggles with and what AI does well.
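That horizontal read can be sketched, very crudely, in code. This toy does nothing like what a language model does; it only illustrates the idea of scanning one signal across every source at once. The transcripts and function name here are invented for illustration:

```python
# Toy illustration of a "horizontal read": check one signal across every
# source at once, instead of reading each document top to bottom.
# The transcripts are invented examples, not real data.

transcripts = [
    "I was confused by the approval process.",
    "Honestly, the process left me confused.",
    "The approval step confused our whole team.",
]

def sources_mentioning(word: str, docs: list[str]) -> list[int]:
    """Return the 1-based indices of every document containing `word`."""
    return [i + 1 for i, doc in enumerate(docs) if word.lower() in doc.lower()]

print(sources_mentioning("confused", transcripts))  # prints [1, 2, 3]
```

A language model does this with meaning rather than string matching: it can connect “confused,” “lost,” and “nobody explained it to us” as the same pattern, which is the part human cognition can’t sustain across a full dataset.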
Synthesis over summary.
The Pattern Hunter requires a mental model shift. You’re not using AI to process information faster. You’re using it to process information differently.
Summarizing is vertical. It reads each document top to bottom and compresses it. The output is a shorter version of each input.
Synthesizing is horizontal. It reads across documents looking for connections, tensions, and patterns. The output is something new—something that didn’t exist in any single source but emerges from the relationship between them.
This turns the AI from a reading assistant into an analytical partner. But it only works if you explicitly ask for it. Left to its defaults, the AI will summarize. You have to redirect it toward synthesis, and you have to be specific about what kind of synthesis you want.
The moves.
In my practice, The Pattern Hunter most often comes down to three moves. These are the things I find myself doing over and over when I’m trying to make sense of a lot of material. Take what’s useful and adapt what makes sense; the goal is finding what works for your kind of data and your kind of questions.
Theme Extraction is the first pass. You ask the AI to read across multiple sources and identify the connecting threads. “Here are five interview transcripts. Don’t summarize them. Find the 3–5 themes that emerge across all of them. For each theme: name it, describe it in two sentences, and cite which transcripts support it.”
The “don’t summarize” constraint is essential. The theme count prevents over-categorization. The citation requirement makes themes traceable back to source material—which is critical for verifying that the pattern is real rather than hallucinated.
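If your transcripts live as files or strings, the Theme Extraction prompt can be assembled programmatically so the constraints never get dropped. A minimal sketch; the function name, defaults, and exact wording are my own placeholders, not a fixed recipe:

```python
# Hypothetical sketch: build a Theme Extraction prompt that bakes in the
# three constraints (no summarizing, bounded theme count, citations).

def build_theme_prompt(transcripts: list[str], min_themes: int = 3, max_themes: int = 5) -> str:
    """Assemble a synthesis prompt over numbered transcripts."""
    sources = "\n\n".join(
        f"--- Transcript {i + 1} ---\n{text}" for i, text in enumerate(transcripts)
    )
    return (
        f"Here are {len(transcripts)} interview transcripts. Don't summarize them.\n"
        f"Find the {min_themes}-{max_themes} themes that emerge across all of them.\n"
        "For each theme: name it, describe it in two sentences, "
        "and cite which transcripts support it.\n\n"
        f"{sources}"
    )

prompt = build_theme_prompt([
    "We were confused by the handoff between teams.",
    "Nobody seems to own the timeline.",
])
```

Numbering the transcripts in the prompt is what makes the citation requirement work: the model has stable labels to point back to, and you have a way to verify each theme against its sources.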
But here’s the thing: don’t be lazy about this. Asking for “the themes” will give you dominant patterns—and dominant patterns are often things you already know. You also want to understand weak patterns. You might want to isolate a certain segment and look at their responses specifically. There are many ways to look for patterns in data, and just asking for themes once is the least interesting version.
The Insight Distinction pushes the AI past the obvious. The first round of themes is almost always too generic. “Communication was a recurring concern” isn’t an insight—it’s a category. You push: “What specifically about communication? What’s broken? Quote the language people used.” Or: “Separate what I already knew from what’s actually new here. What’s surprising? What contradicts what we expected?”
This is the move that separates useful analysis from filler. The first pass identifies the territory. The insight distinction finds what’s actually worth paying attention to.
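In a chat-style interface or API, the Insight Distinction is just a follow-up turn after the generic first pass. A minimal sketch, where the first-pass answer and the message structure are invented stand-ins:

```python
# Hypothetical sketch: the Insight Distinction as a second turn in a
# chat-style message list. "first_pass" stands in for the model's
# generic first answer; the wording is illustrative.

first_pass = "Communication was a recurring concern across participants."

messages = [
    {"role": "assistant", "content": first_pass},
    {
        "role": "user",
        "content": (
            "What specifically about communication? What's broken? "
            "Quote the language people used. Then separate what I already "
            "knew from what's actually new here. What's surprising? "
            "What contradicts what we expected?"
        ),
    },
]
```

The point of keeping this as a second turn, rather than cramming everything into the first prompt, is that the push is targeted at whatever generic category the model actually produced.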
The Cross-Reference maps your data against something else. A framework. A strategy document. Industry benchmarks. “Here is my analysis. Here is a known framework. Are we validating the framework, challenging it, or finding something it doesn’t account for?” The three-option framing prevents binary thinking. It forces the AI to commit to an interpretation rather than just listing observations.
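A sketch of the Cross-Reference as a prompt template, with the three-option framing spelled out so the model has to commit. The helper name and placeholder texts are assumptions, not a prescribed format:

```python
# Hypothetical sketch of the Cross-Reference prompt. The three options are
# enumerated explicitly so the model commits to an interpretation instead
# of listing observations.

def build_cross_reference_prompt(analysis: str, framework: str) -> str:
    """Map an analysis against a framework, forcing a three-way verdict."""
    return (
        "Here is my analysis:\n"
        f"{analysis}\n\n"
        "Here is a known framework:\n"
        f"{framework}\n\n"
        "Are we validating the framework, challenging it, or finding "
        "something it doesn't account for? Commit to one of the three "
        "and explain why, citing specifics from the analysis."
    )

prompt = build_cross_reference_prompt(
    "Teams praise leadership in general terms but get specific only about problems.",
    "An employee engagement framework (placeholder text).",
)
```

The same template works for strategy documents or industry benchmarks in place of a framework; only the second argument changes.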
When to use it.
The Pattern Hunter has a narrower trigger set than Voice Dump Synthesis, but when it fires, it’s high-impact.
When you’re drowning in qualitative data—after a research sprint, a series of interviews, a week of customer calls, months of meeting transcripts. Whenever you have more unstructured input than you can hold in your head.
When you need to bridge research and strategy. You’ve gathered the data. Now you need to know what it means. The Pattern Hunter turns “I have a lot of notes” into “here’s what the notes are telling us.”
When you suspect there’s a pattern you can’t see. Sometimes you feel the connections forming but can’t articulate them. You know transcript three and transcript seven are related somehow but you can’t name the thread. The AI can hold both simultaneously and tell you what links them.
What’s underneath this.
Here’s what I keep thinking about.
We used to say—especially in healthcare—that organizations were drowning in data and thirsty for insight. The problem was the lack of qualitative data, or the inability to work with what they had.
We don’t have that limitation anymore.
If you’re an organization with 500 staff, you can engage those people and find patterns from your entire workforce around things that are strategically important. Here’s a challenge we’re facing—everybody, tell us what you think. And then actually work with all of those responses, not just a sample.
Employee engagement surveys that take months to analyze could be real-time. Weekly check-ins, ten-minute conversations at existing meetings, patterns emerging across your whole organization as they happen. Targeted interventions based on what’s actually going on, not what was going on three months ago.
And it doesn’t have to be structured the old way. We used to need people to fill out empathy maps and frameworks because we needed structure to make the data workable. Now you can just have a conversation. The AI can find the structure in natural speech.
This is actually about getting to be more human. Because AI is able to support nuance—the varied and particular ways that people experience the world—we get to actually understand each other better. We get to see the full picture instead of sampling from it.
The Pattern Hunter changes your cognitive role. Without it, you’re the analyst: reading, highlighting, tagging, sorting, trying to hold a hundred data points in a brain that can manage seven. With it, you’re the editor. The AI does the heavy processing. Your job shifts from finding the pattern to evaluating it. Is this real? Does it matter? What does it mean for what we do next?
Those are the questions that require human judgment. And they’re the questions you can only engage with properly when you haven’t just spent four hours reading transcripts.