Every engagement follows a disciplined sequence — from listening to shipping to repeating. Not a framework that gets handed off. A process I run alongside the team until the outcomes are real.
Before forming any opinions, I spend time understanding the business by listening. Stakeholder interviews across product, tech, sales, support, and operations. Customer conversations. Wherever the real signal lives. This phase takes roughly 30 days and has one rule: no recommendations until the listening is done.
The headwinds, the gaps, the things quietly going wrong — they surface in conversations, not in dashboards. This is where I earn the right to have an opinion.
Discovery surfaces themes. I organize those themes into a living opportunity map — clusters of related bets tied directly to business outcomes. It's not a finished artifact and it's not meant to be. It moves as we learn. But it gives everyone a shared picture of where to play, in what order, and why.
Each cluster becomes a bucket of work: known issues, opportunities, and hypotheses about what will move the needle. These buckets are the raw material for everything that follows.
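Purely as an illustration (the names and fields here are hypothetical, not part of the process itself), the cluster-and-bucket structure described above could be sketched like this:

```python
from dataclasses import dataclass, field

@dataclass
class Bet:
    """One known issue, opportunity, or hypothesis inside a cluster."""
    name: str
    kind: str        # "issue" | "opportunity" | "hypothesis"
    hypothesis: str  # what we expect to happen, and why

@dataclass
class Cluster:
    """A group of related bets tied to a single business outcome."""
    outcome: str
    bets: list[Bet] = field(default_factory=list)

# The opportunity map is just the set of clusters, free to move as we learn.
opportunity_map = [
    Cluster(
        outcome="Reduce churn in the first 90 days",
        bets=[
            Bet("Simplify onboarding", "hypothesis",
                "Fewer setup steps should lift week-1 activation"),
        ],
    ),
]
```

The point of the structure is that every bet hangs off an outcome, never the other way around.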
The opportunity map becomes a force-ranked set of hypotheses and experiments. Cross-functional exercises, data deep dives, customer ticket analysis, and competitive benchmarking all feed in. The output isn't a feature list; it's a learning agenda, prioritized by impact weighed against effort, not by what's easiest to build.
Every item on the roadmap has a hypothesis attached to it. If we can't articulate what we expect to happen and why, it doesn't belong on the list yet.
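The two rules above, no hypothesis means no spot on the list, and rank by impact relative to effort, can be made mechanical. This is a minimal sketch with made-up items and 1-to-5 scores, not the actual prioritization framework:

```python
# Each candidate carries an articulated hypothesis plus rough
# impact and effort estimates (1 = low, 5 = high). Illustrative only.
candidates = [
    {"name": "Streamline checkout", "hypothesis": "Fewer steps lift conversion",
     "impact": 5, "effort": 2},
    {"name": "Dark mode", "hypothesis": "", "impact": 2, "effort": 1},
    {"name": "Fix billing errors", "hypothesis": "Fewer tickets, less churn",
     "impact": 4, "effort": 3},
]

# Rule one: if we can't articulate what we expect to happen, it's not on the list yet.
eligible = [c for c in candidates if c["hypothesis"]]

# Force-rank by impact relative to effort, not by what's easiest to build.
roadmap = sorted(eligible, key=lambda c: c["impact"] / c["effort"], reverse=True)

for item in roadmap:
    print(item["name"])
```

Note that "Dark mode" is the cheapest item and still gets cut: cheap to build is not the same as worth learning from.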
Once the roadmap is set, we plan how to actually deliver it. Cross-functional teams align on what's genuinely possible — not what looks good in a presentation. Dependencies get surfaced. Blockers get named. Timelines and release definitions get set before anyone writes a line of code.
The plan is a living document, reshaped as new information arrives. Features move based on what we learn. The goal is a scope that's honest about constraints and committed to milestones that mean something.
Every rollout has a metric. I build reporting cadences that keep the full organization — stakeholders and executives included — current on how each experiment is tracking. Dashboards, weekly reports, steering committees, and demos. Transparency is non-negotiable because accountability requires it.
Monitoring isn't passive. It's how we decide what to do next. If the numbers are moving, we understand why. If they're not, we know that too — and we make a decision rather than letting the experiment drift.
When the experiment concludes, I make a data-informed call. Iterate based on what we learned and start the validation cycle again. Productionize into a full rollout with hardened code and incremental expansion. Or kill it, deprecate the feature, and redirect the team's energy to what's next.
All three are valid outcomes. The only bad outcome is letting an inconclusive experiment linger on the roadmap because nobody wanted to make the call.
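The three-way call can itself be stated as a rule, which is what keeps inconclusive experiments from lingering. This sketch assumes a hypothetical metric lift and a conclusiveness flag; the threshold and names are placeholders, not any specific stats tooling:

```python
def decide(lift: float, conclusive: bool, target: float = 0.05) -> str:
    """Return one of the three valid outcomes for a finished experiment.

    lift: observed relative change in the experiment's metric.
    conclusive: whether the result is trustworthy enough to act on.
    target: the lift the hypothesis predicted (hypothetical threshold).
    """
    if not conclusive:
        return "iterate"        # learn, adjust, run the validation cycle again
    if lift >= target:
        return "productionize"  # harden the code, expand incrementally
    return "kill"               # deprecate and redirect the team's energy

print(decide(lift=0.08, conclusive=True))
```

The branch order is the point: an inconclusive result forces another cycle rather than a quiet stay of execution on the roadmap.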
The process is designed to repeat. Once the machinery is in place — the reporting, the cadences, the prioritization framework — it runs as a quarterly exercise. The heavy lifting of initial setup only happens once. After that, the work is communication, data analysis, and staying honest about what's actually working.
Organizations that compound don't reinvent their process every year. They refine it. This framework is built to get faster, more accurate, and more aligned the longer it runs.
Every engagement starts the same way: 30 minutes, no deck, just an honest conversation about where you are.