Introducing Impact Analysis: Our Framework for Finding AI Use Cases That Actually Ship
By Isaac Grau
Most enterprise AI engagements start with the wrong question.
A client comes to us and says: "We want to use AI. Where should we start?" The lazy answer is to pull up a slide of trending use cases — customer service chatbots, document summarization, predictive analytics — and ask which one they like best. The honest answer is that we have no idea yet. And neither do they.
This is why we built Impact Analysis.
The problem with use-case-first thinking
When teams choose AI use cases without doing real discovery, three things tend to happen. They pick the most visible problem rather than the most valuable one. They underestimate the integration work, because nobody mapped the systems involved. And they ship pilots that work in isolation but never reach production, because the people who'd actually use the tool weren't in the conversation.
The output looks like progress. It rarely is.
We've watched this pattern enough times to be convinced of one thing: the bottleneck in enterprise AI isn't model capability. It's diagnosis.
What Impact Analysis is
Impact Analysis is a three-session discovery framework. Each session has a specific goal, a specific energy, and a specific output. The sessions build on each other in a deliberate order — and changing that order breaks the method.
Session 1 is about context and trust. We want to understand how the business actually works, not how the org chart says it works. We ask open questions. We listen for the things people complain about casually, not just the things they flag as priorities. We do not challenge anything. The goal isn't to be right; it's to be invited back.
Session 2 is about depth. Now that we have context, we drill into specific processes — the ones our signal detection flagged as high-potential during Session 1. We ask harder questions. We test assumptions. We start sketching how AI would change the shape of the work, and we get explicit about constraints: data access, regulatory boundaries, change-management appetite.
Session 3 is about validation and prioritization. We come back with a structured map: candidate use cases, estimated impact, implementation difficulty, dependencies. We argue about it together. The deliverable isn't a recommendation we drop on the client's desk. It's a shared decision the client owns.
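For readers who want to picture the shape of that map, here is a minimal sketch in Python of how a candidate use case might be recorded and ordered. The field names, the 1-to-5 scales, the example entries, and the sorting rule are illustrative assumptions, not a prescribed format; in practice the prioritization is argued out with the client rather than computed.

    from dataclasses import dataclass, field

    @dataclass
    class CandidateUseCase:
        # Illustrative record for one row of the Session 3 map (hypothetical fields).
        name: str
        estimated_impact: int            # 1 (marginal) to 5 (transformative)
        implementation_difficulty: int   # 1 (trivial) to 5 (multi-quarter)
        dependencies: list[str] = field(default_factory=list)

    def prioritize(candidates: list[CandidateUseCase]) -> list[CandidateUseCase]:
        # One possible ordering: highest impact first, ties broken by lower difficulty.
        return sorted(candidates,
                      key=lambda c: (-c.estimated_impact, c.implementation_difficulty))

    # Hypothetical portfolio, purely to show the structure of the deliverable.
    portfolio = prioritize([
        CandidateUseCase("Invoice triage assistant", 4, 2, ["ERP read access"]),
        CandidateUseCase("Weekly ops report generation", 3, 1, ["Data warehouse"]),
        CandidateUseCase("Customer email drafting", 2, 3, ["CRM integration", "Legal review"]),
    ])
    for c in portfolio:
        print(c.name, c.estimated_impact, c.implementation_difficulty, c.dependencies)

The point of the structure is not the scoring; it is that every candidate carries its dependencies and an honest difficulty estimate alongside the impact claim, so the Session 3 argument happens over the same facts.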
Why three sessions and not one
We've tried compressing it. It doesn't work, and the reason is mechanical.
In a single session, you can only ask the questions you came in with. In the gap between Session 1 and Session 2, our team reviews the transcripts, identifies signals, and rewrites the question battery for the specific business in front of us. By Session 2, we're asking questions that wouldn't have made sense before we listened. By Session 3, we're proposing solutions calibrated to the real constraints of this specific company — not the generic ones from our last engagement.
The space between sessions is where most of the value is created. Compressing the timeline removes the thinking.
What we're listening for
Underneath the surface conversation, we're tracking specific signals. High-volume manual processes. Decision points where someone holds context that isn't documented. Systems that don't talk to each other. Reports that get built repeatedly from scratch. Approvals that bottleneck on a single person.
Each of these is a fingerprint of a potential AI workflow. None of them require asking "where should we use AI?" They require asking "walk me through how this actually happens."
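As a concrete illustration of that listening, here is one way a team could tag interview excerpts against the signal list above. The enum names mirror the signals; the note excerpts and the tagging structure are hypothetical and exist only to show the shape of the exercise.

    from collections import Counter
    from enum import Enum, auto

    class Signal(Enum):
        # The signal taxonomy described above, used as tags for interview notes.
        HIGH_VOLUME_MANUAL_PROCESS = auto()
        UNDOCUMENTED_DECISION_CONTEXT = auto()
        DISCONNECTED_SYSTEMS = auto()
        REPEATED_AD_HOC_REPORTING = auto()
        SINGLE_PERSON_APPROVAL_BOTTLENECK = auto()

    # Hypothetical tagged excerpts from a Session 1 transcript.
    tagged_notes = [
        ("Finance re-keys supplier invoices into the ERP every Friday",
         {Signal.HIGH_VOLUME_MANUAL_PROCESS, Signal.DISCONNECTED_SYSTEMS}),
        ("Only one analyst knows which exceptions can be waived",
         {Signal.UNDOCUMENTED_DECISION_CONTEXT, Signal.SINGLE_PERSON_APPROVAL_BOTTLENECK}),
        ("The ops summary gets rebuilt from spreadsheets each month",
         {Signal.REPEATED_AD_HOC_REPORTING}),
    ]

    # Count how often each signal appears, to decide what Session 2 should drill into.
    signal_counts = Counter(s for _, signals in tagged_notes for s in signals)
    print(signal_counts.most_common())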
Trust as a precondition
The single most important rule of Impact Analysis is this: in Session 1, we don't challenge the client. Not even gently. Not even when we strongly suspect they're describing a process that is broken in obvious ways.
This sounds like a soft skill. It isn't. It's the difference between a client who tells you the real story in Session 2 and one who gives you the sanitized version forever.
If our job, the first time we meet someone, is to interrogate their decisions, we will get exactly the depth of information an interrogation produces: defensive, partial, and shaped to make the speaker look reasonable. That's not the input we need. We need the unedited version of how the work happens, including the workarounds, the politics, and the things that "shouldn't" be happening but are.
You earn that by listening first.
Who this is for
Impact Analysis is built for mid-to-large companies where AI could meaningfully reshape operations but where nobody internally has the time or the methodology to figure out where to start. It works best when the sponsor is operationally credible — a COO, a head of operations, a transformation lead. It works less well as a top-down strategic exercise disconnected from the people doing the work.
We don't run Impact Analysis for companies that already know what they want to build. If a client has already validated a specific use case and needs execution, we go straight to building. The framework is for the harder problem: figuring out what to build before deciding how.
What you walk away with
At the end of three sessions, the client has three things they didn't have before. A documented map of their operational reality at the level of detail required to design AI workflows around it. A prioritized portfolio of candidate use cases with honest impact and difficulty estimates. And a shared understanding — across the executive sponsor, the operational owners, and our team — about which problems are worth solving first.
The output is a decision the company can act on, not a deck they file away.
The unglamorous truth
The bet behind Impact Analysis is that the boring parts of consulting — listening carefully, structuring information, prioritizing honestly — are the parts that determine whether AI projects succeed. The model is rarely the bottleneck. The diagnosis is.
We've made this our methodology because we've seen what happens when the diagnosis is skipped. Pilots that demo well and ship never. Engagements that produce more slides than software. Teams that get burned on AI and pull back from it for years.
We'd rather spend three sessions getting the question right than three months building the wrong answer.