From c04147abe4ba0ad4bb2f04c2dcbace3fd9319692 Mon Sep 17 00:00:00 2001
From: Anot
Date: Fri, 20 Mar 2026 10:09:46 -0400
Subject: [PATCH 1/3] feat(management-consulting): Add execution & transformation skills

Six skills for the execution phase: implementation planning, due diligence, change management, process excellence, org design, and project governance.

Co-Authored-By: Claude Opus 4.6 (1M context)
---
 .../skills/change-management/SKILL.md       | 430 ++++++++++++
 .../skills/due-diligence/SKILL.md           | 610 +++++++++++++++++
 .../skills/implementation-planning/SKILL.md | 644 ++++++++++++++++++
 .../skills/org-design/SKILL.md              | 411 +++++++++++
 .../skills/process-excellence/SKILL.md      | 436 ++++++++++++
 .../skills/project-governance/SKILL.md      | 536 +++++++++++++++
 6 files changed, 3067 insertions(+)
 create mode 100644 management-consulting/skills/change-management/SKILL.md
 create mode 100644 management-consulting/skills/due-diligence/SKILL.md
 create mode 100644 management-consulting/skills/implementation-planning/SKILL.md
 create mode 100644 management-consulting/skills/org-design/SKILL.md
 create mode 100644 management-consulting/skills/process-excellence/SKILL.md
 create mode 100644 management-consulting/skills/project-governance/SKILL.md

diff --git a/management-consulting/skills/change-management/SKILL.md b/management-consulting/skills/change-management/SKILL.md
new file mode 100644
index 00000000..5ff8ce82
--- /dev/null
+++ b/management-consulting/skills/change-management/SKILL.md
@@ -0,0 +1,430 @@
---
name: change-management
description: Design and execute organizational change management programs. Covers transformation planning, stakeholder engagement, resistance management, communication strategy, training design, and adoption measurement. Use when planning any organizational change, from digital transformations to culture shifts.
---

# Change Management

Apply change management methodology to plan, execute, and sustain organizational change.
This covers the full lifecycle: building the case for change, designing the approach, managing resistance, enabling adoption, and sustaining new ways of working. + +## Before You Begin + +If the user hasn't provided the following, ask for them rather than inventing details: +- What is the change initiative? What's actually changing (system, process, structure, culture)? +- How many people are affected, and which stakeholder groups? +- Has there been a prior attempt at this change, and if so, what happened? +- Present any team sizing ratios, budget percentages, or adoption benchmarks as general guidelines, not facts about the user's organization. When citing numbers the user didn't provide, flag them: "I'm using a typical benchmark of X based on [source]. Please confirm or provide your actual figure." + +## Choosing the Right Approach + +| Situation | Primary Approach | Supporting Elements | +|-----------|-----------------|---------------------| +| Major organizational transformation | Transformation Execution (5-phase) | Bridges' Transition Model, organizational alignment | +| Individual behavior change focus | Bridges' Transition Model | Behavioral analysis | +| Digital transformation | Combined approach | Digital maturity assessment | +| AI adoption | Bridges' Transition Model + Technology enablement | Agile change methods | +| Merger integration | Transformation Execution + Culture alignment | Organizational diagnostics | +| Culture change | Transformation Execution | Values-based change | +| Second attempt after prior failure | Transformation Execution + Trust rebuild | Explicit failure acknowledgment | + +## Understanding Individual Transitions + +People don't resist change. They resist loss. 
William Bridges' Transition Model (published in *Managing Transitions*) frames individual adoption not as a linear capability build but as a psychological journey through three phases: + +**Phase 1: Ending (Letting Go)** +Before anyone can adopt the new way, they must process what they're losing. This could be competence ("I was good at the old system"), identity ("I'm a partner, not a managing director"), relationships ("my team is being restructured"), or certainty ("I knew how things worked"). Until the loss is acknowledged, people are not available for the new beginning. Diagnostic: Are people talking about what they're losing? If the loss is unspoken, it will manifest as resistance. + +**Phase 2: Neutral Zone (The In-Between)** +The old way is gone but the new way isn't working yet. This is the most dangerous phase: productivity drops, anxiety rises, people feel incompetent, and the temptation to revert is strongest. It's also where the most creativity and adaptation happen, if managed well. Diagnostic: Are people feeling lost, confused, or overwhelmed? That's normal in the Neutral Zone. The response is structure (clear expectations, short-term milestones, visible support) and patience (don't panic at the productivity dip). + +**Phase 3: New Beginning** +People internalize the new identity, build new competencies, and start seeing the benefits. This doesn't happen on a schedule. Different people arrive here at different speeds. Diagnostic: Are people solving problems within the new model rather than working around it? Are they teaching others? That's the New Beginning. + +**Why this matters for your change plan:** Most change programs focus entirely on the external change (new system, new structure, new process) and ignore the internal transition (what people are feeling). The Bridges model says: manage the transition, not just the change. The external change can happen on Day 1. The internal transition takes months. 
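For teams tallying interview or pulse-survey signals per stakeholder group, the three phase diagnostics above can be sketched as a simple lookup. This is an illustrative sketch only: the signal phrases and responses are assumptions paraphrased from the diagnostics, not a validated instrument.

```python
# Illustrative sketch: map observed signals to the most likely Bridges phase
# and the matching response. Signal phrases are assumptions, not a survey tool.

PHASE_PLAYBOOK = {
    "ending": {
        "signals": ["talking about losses", "grief over old system", "identity concerns"],
        "response": "Acknowledge the loss explicitly before selling the new beginning.",
    },
    "neutral_zone": {
        "signals": ["feeling lost", "productivity dip", "temptation to revert"],
        "response": "Provide structure: clear expectations, short-term milestones, visible support.",
    },
    "new_beginning": {
        "signals": ["solving problems in the new model", "teaching others"],
        "response": "Reinforce and celebrate; shift effort to lagging groups.",
    },
}

def likely_phase(observed: list[str]) -> str:
    """Return the phase whose signal list best matches the observations."""
    scores = {
        phase: sum(1 for s in observed if s in entry["signals"])
        for phase, entry in PHASE_PLAYBOOK.items()
    }
    return max(scores, key=scores.get)
```

The point of the sketch is the structure, not the scoring: diagnose the phase per group, then pick the response for that phase rather than a one-size intervention.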
### Worked Example: ERP Rollout at a 2,000-Person Manufacturer

A manufacturer replaced its legacy ERP with SAP S/4HANA; 800 employees across finance, supply chain, and operations were directly affected. Here's how the three transition phases played out:

**Ending (Months 1-2)**: Supply chain staff grieved the loss of their familiar system. They were competent in the old ERP and feared looking incompetent in front of peers. Warehouse managers who had built custom workarounds over 10 years saw their expertise devalued. Intervention: acknowledge what's being lost. The VP of Operations held small-group sessions (not town halls) where people could voice concerns. The message wasn't "the old system was bad" but "the old system served us well, and here's why we need to move forward." Separately, the CEO committed in writing to no layoffs for 18 months post go-live, removing the existential fear.

**Neutral Zone (Months 2-5)**: The hardest phase. Month-end close took 40% longer in the first cycle. The finance team panicked. Warehouse staff reverted to paper-based workarounds. Productivity dipped across all affected functions. Interventions: role-specific training in sandbox environments 2 weeks before go-live (not 8 weeks before, which was too early). "Floor walker" support for the first 3 weeks. Daily 15-minute huddles during the second month-end close. Pairing experienced SAP users with struggling staff. "Cheat sheets" for the 10 most common transactions. The key: the program treated the productivity dip as expected and temporary, not as evidence that the change was failing.

**New Beginning (Months 5-9)**: By the third month-end close, cycle time was back to baseline. Supply chain teams started discovering capabilities the old system didn't have (real-time inventory visibility, automated PO matching). Some teams began teaching others. A dashboard showing adoption metrics by team created healthy competition. Legacy system access was removed at Month 6 (a forcing function, but only after the Neutral Zone had been navigated). By Month 9, process compliance stood at 94% and the first teams were requesting additional S/4HANA capabilities beyond the original scope.

The key lesson: the initial rollout plan treated all 800 employees as one audience and front-loaded training. The fix was diagnosing each stakeholder group's actual stage and intervening where the real gap was, not where the plan assumed it would be.

## How Individual Adoption Maps to Transformation Phases

The adoption diagnostic (the progression from Awareness through Willingness, Readiness, and Execution to Persistence) operates at the person/group level. The Transformation Execution Framework operates at the program level. They connect like this:

| Transformation Phase | Primary Adoption Stages Addressed | Why |
|---------------------|-----------------------------------|-----|
| Phase 1: Mobilize | Awareness, early Willingness | Building urgency creates awareness. Coalition building seeds willingness among leaders. |
| Phase 2: Envision and Communicate | Awareness, Willingness | Vision communication drives awareness. Connecting to "what's in it for me" drives willingness. |
| Phase 3: Enable and Quick Wins | Readiness, early Execution | Training builds readiness. Quick wins provide execution evidence and reinforce willingness. |
| Phase 4: Accelerate | Execution | Scaling adoption deepens execution capability across the organization. |
| Phase 5: Embed | Persistence | Anchoring in culture and systems sustains new behaviors permanently. |

Use this mapping to diagnose program-level problems. If Phase 4 is stalling, check Execution scores across stakeholder groups. If Phase 2 communications aren't landing, check whether Awareness or Willingness is the actual gap (they require different interventions).

## Transformation Execution Framework

(Drawing on Kotter's research on organizational change.)
+ +For large-scale organizational transformation, work through five phases from mobilization through embedding. + +### Phase 1: Mobilize + +Build urgency and assemble a guiding coalition with the power and credibility to lead. + +**Urgency building**: Use market pressure, competitive threats, internal opportunities, and customer data to make the case for change. Complacency is the biggest barrier at this stage. + +**Guiding coalition**: Identify people with position power, expertise, credibility, and leadership ability. The coalition needs enough influence to drive change without relying solely on formal authority. Include: +- Executive champion (visible sponsorship) +- Change leader (operational leadership) +- Technical leads (credibility on feasibility) +- Cultural leaders (informal influence networks) + +Coalition activities: regular alignment meetings, capability building within the coalition itself, shared vision development. + +### Phase 2: Envision and Communicate + +Develop a compelling vision and ensure every stakeholder understands it. + +A useful vision is specific enough to guide decisions. "World-class operations" is not a vision. "Same-day order fulfillment at 99.5% accuracy" is. + +**Vision components**: What is changing, why it matters, how the organization will get there, and when key milestones will land. + +**Communication principles**: +- Consistent messaging across all channels +- Two-way dialogue, not just broadcast +- Leaders model the change they communicate +- Multiple formats for different learning styles +- Repetition matters: people typically need to hear a message multiple times before it lands + +Channels to plan: town halls, team meetings, leadership updates, newsletters, intranet, video, informal conversations. + +### Phase 3: Enable and Deliver Quick Wins + +Remove barriers, build capability, and generate visible early successes. 
+ +**Barrier removal**: Identify and address structural misalignment, skills gaps, legacy systems, leadership resistance, and incentive misalignment. If the performance system still rewards old behaviors, the new ones won't take hold. + +**Quick wins**: Select wins that are visible to many, clearly tied to the change, and unambiguously successful. Quick wins build credibility and momentum. They also give skeptics evidence that the change is real. + +### Phase 4: Accelerate + +Use credibility from early wins to expand adoption and deepen change. + +- Expand the change team and promote change agents +- Update policies, processes, and systems to align with new ways of working +- Develop advanced skills and capabilities +- Deepen stakeholder engagement across the organization +- Scale successful pilots to broader teams + +This is where many change programs stall. Early wins create a false sense of completion. The hard work of systemic alignment happens here. + +### Phase 5: Embed in Culture + +Make changes permanent by anchoring them in organizational culture and operations. + +**Embedding activities**: +- Align leadership selection and development with new values +- Update systems, processes, and incentives to reinforce new behaviors +- Integrate changes into onboarding and performance management +- Celebrate and recognize sustained adoption + +**Culture indicators that embedding has worked**: +- New behaviors are the default, not the exception +- Organizational stories and narratives reflect the change +- Hiring and promotion criteria reflect new values +- Performance metrics track new outcomes + +## Change Readiness Assessment + +Before designing a change approach, assess organizational readiness. Score each dimension 1-5 for each stakeholder group. 
+ +### Readiness Scorecard + +| Dimension | Assessment Questions | Score (1-5) | Risk Level | +|-----------|---------------------|-------------|------------| +| Leadership alignment | Do senior leaders agree on the need, vision, and approach? | _ | Low (4-5) / Med (3) / High (1-2) | +| Change history | Have past changes been well-managed? Do people trust the organization to change successfully? | _ | Low (4-5) / Med (3) / High (1-2) | +| Current change load | How many other changes are people absorbing right now? (See saturation assessment below) | _ | Low (4-5) / Med (3) / High (1-2) | +| Organizational capacity | Does the organization have the skills, resources, and bandwidth for this change? | _ | Low (4-5) / Med (3) / High (1-2) | +| Culture compatibility | Does the organizational culture support or resist the type of change being proposed? | _ | Low (4-5) / Med (3) / High (1-2) | +| Urgency | Do people feel genuine urgency, or is this perceived as optional? | _ | Low (4-5) / Med (3) / High (1-2) | + +**Interpreting the scorecard**: +- Average 4.0+: High readiness. Standard change approach. Focus on speed. +- Average 3.0-3.9: Moderate readiness. Invest in readiness-building before launching. Address lowest-scoring dimensions first. +- Average below 3.0: Low readiness. Don't launch until you've moved at least 2 dimensions above 3. Consider smaller initial scope or phased approach. +- Any single dimension at 1: Treat as a blocker. A perfect score on every other dimension won't compensate for a 1 in leadership alignment or change history. + +### Change Impact Assessment + +**Change profile dimensions**: +- Scope: enterprise, functional, or team-level +- Depth: process change, tool change, behavior change, or culture change +- Number of people affected +- Urgency: how quickly does this need to happen + +**Stakeholder readiness**: For each affected group, assess impact level, change complexity, and current readiness (ready, resistant, or uncertain). 
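The interpretation rules for the readiness scorecard can be expressed directly. A minimal sketch, assuming the six dimensions and the thresholds stated above (the dimension keys are illustrative names):

```python
# Sketch of the readiness scorecard interpretation: average the six dimension
# scores, but treat any single 1 as a blocker regardless of the average.

def interpret_readiness(scores: dict[str, int]) -> str:
    """scores: dimension name -> 1-5 rating for one stakeholder group."""
    if min(scores.values()) == 1:
        return "Blocked: a dimension scored 1; no average compensates for it."
    avg = sum(scores.values()) / len(scores)
    if avg >= 4.0:
        return "High readiness: standard change approach, focus on speed."
    if avg >= 3.0:
        return "Moderate readiness: build readiness first, lowest dimensions first."
    return "Low readiness: don't launch until at least 2 dimensions move above 3."

example = {
    "leadership_alignment": 4, "change_history": 3, "current_change_load": 3,
    "organizational_capacity": 4, "culture_compatibility": 3, "urgency": 4,
}
# average = 3.5, no dimension at 1 -> moderate readiness
```

Run this per stakeholder group, not once for the whole organization; readiness is rarely uniform.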
+ +## Change Saturation Assessment + +Change saturation is the cumulative burden of concurrent changes on a population. It's the most common undiagnosed cause of change failure. A well-designed change program will still fail if people are already overwhelmed. + +### Measuring Saturation + +For each affected group, map all active and planned changes: + +| Change Initiative | Affected Population | Phase | Impact Level (H/M/L) | Start | End | +|-------------------|---------------------|-------|----------------------|-------|-----| +| ERP implementation | Finance, Operations | Active | High | Q1 | Q3 | +| Office relocation | All staff | Planning | Medium | Q2 | Q2 | +| New performance management system | All managers | Active | Medium | Q1 | Q4 | +| Restructuring (your initiative) | Operations | Planning | High | Q3 | Q1+1 | + +**Saturation indicators**: +- More than 2 high-impact changes hitting the same group simultaneously: red flag +- Cumulative change load exceeding 3 active initiatives per group: performance degradation likely +- Back-to-back changes with no stabilization period: adoption of both will suffer + +**Response options when saturation is high**: +- Sequence: delay your change until a current initiative completes +- Consolidate: combine change programs that affect the same population (joint communications, shared training) +- Reduce scope: phase the change to limit concurrent impact +- Increase support: more champions, more coaching, more time for adoption +- Defer non-essential changes: negotiate with other program owners to create breathing room + +## Change Team Sizing and Resourcing + +### Team Structure and Ratios + +| Role | Ratio | Notes | +|------|-------|-------| +| Change manager | 1 per 500-1,000 affected employees | Full-time dedicated. For complex changes (culture, multiple systems), use the lower end. | +| Change champions | 1 per 30-50 affected employees | Part-time (10-20% of time). Selected for credibility, not seniority. 
| +| Executive sponsor | 1 per program | Must dedicate 2-4 hours/week minimum. A sponsor who delegates everything isn't sponsoring. | +| Training lead | 1 per major system/process change | Owns curriculum design, delivery coordination, proficiency measurement. | +| Communications lead | 1 per program | Owns messaging calendar, channel strategy, feedback loops. | + +### Budget Allocation + +Change management budget as a percentage of total project cost: + +| Change Type | Budget % | Rationale | +|-------------|----------|-----------| +| Technology implementation | 15-20% of project budget | Training, communications, adoption support | +| Process redesign | 10-15% of project budget | Lighter technology component, heavier behavioral | +| Organizational restructuring | 20-25% of project budget | High emotional impact, retention risk, culture work | +| Culture change | 25-30% of program budget | Long timeline, deep behavioral shift, sustained reinforcement | +| Digital/AI transformation | 15-25% of project budget | Skill building, fear management, ongoing adaptation | + +These ratios assume programs targeting high adoption (>80%). If the target is "good enough" adoption for a compliance-driven change or a mandated system cutover, budgets can sit at the lower end. The percentages also skew higher for geographically dispersed or multi-language populations. + +Under-resourced change programs (below 10% of project budget) correlate with adoption rates below 60% and extended timelines. + +## Stakeholder Analysis and Management + +### Stakeholder Mapping + +Map stakeholders on two dimensions: influence (ability to affect outcomes) and interest (degree to which they're affected). 
+ +| Category | Definition | Approach | +|----------|------------|----------| +| Manage Closely | High influence, high interest | Engage deeply, co-create solutions | +| Keep Satisfied | High influence, low interest | Keep informed, don't overwhelm | +| Keep Informed | Low influence, high interest | Communicate regularly | +| Monitor | Low influence, low interest | Minimal effort | + +For each stakeholder or group, track: current state (awareness, support level), desired state, and the strategy to move them. + +### Change Champion Network + +Change champions extend the reach of the change program beyond what the core team can cover directly. They operate at peer level, which gives them credibility that senior leadership often lacks. + +Champion responsibilities: +- Model new behaviors and processes +- Provide peer-level coaching and support +- Collect feedback and surface concerns early +- Celebrate wins within their teams + +Champions need investment: training, materials, time allocation, and visible recognition from leadership. + +**Selecting champions**: Don't just pick enthusiasts. The best champions are respected operators who are initially skeptical but persuadable. A converted skeptic is far more credible than a natural enthusiast. Avoid selecting only managers; include individual contributors with informal influence. + +## Resistance Management + +Resistance is information. It tells you what you haven't addressed yet. 
+ +### Common Sources of Resistance + +| Source | Signs | Response | +|--------|-------|----------| +| Fear of job loss | Withdrawal, negativity | Transparent communication, reskilling commitments | +| Lack of skills | Reluctance, anxiety | Training, coaching, support | +| Habit and comfort | Old behaviors persist | Practice opportunities, reminders, nudges | +| Loss of status or influence | Risk aversion, undermining | New roles, recognition, involvement | +| Lack of trust | Skepticism, cynicism | Consistent follow-through, credibility building | +| Prior change failure | "We tried this before" | Explicit acknowledgment, differentiated approach (see below) | + +### Resistance Response Playbook + +| Signal | Indicator | Response | Escalation | +|--------|-----------|----------|------------| +| Vocal opposition | Direct pushback in meetings | Acknowledge concern, invite 1:1, address root cause | If persistent: sponsor intervention | +| Passive non-compliance | Low adoption metrics | Targeted coaching, peer support, identify barriers | If systemic: adjust approach | +| Shadow processes | Teams using old methods | Understand why, address gaps in new process | If widespread: pause and fix | +| Influencer resistance | Key person undermining | Private conversation, address concerns, co-create solution | If unresolved: sponsor engagement | +| Malicious compliance | Following the letter but not the spirit | Investigate whether the process has genuine gaps | If intentional: direct conversation about commitment | + +## Managing Second-Attempt Changes + +Second-attempt programs (where a prior initiative addressing similar goals failed or was abandoned) are common and require distinct tactics. The biggest obstacle isn't the change itself; it's the scar tissue from the last attempt. + +### Why Second Attempts Fail Differently + +First attempts fail on execution. Second attempts fail on trust. 
The population remembers the disruption, the broken promises, and the eventual quiet abandonment. "This time it's different" is the least convincing sentence in change management. + +### Second-Attempt Playbook + +**1. Acknowledge the failure explicitly.** Don't pretend the prior attempt didn't happen. Name it. "In 2023 we rolled out [X]. It didn't achieve what we intended. Here's what we've learned." Vague references ("lessons learned from past initiatives") signal that leadership hasn't actually reckoned with what went wrong. + +**2. Diagnose the prior failure honestly.** Was it scope? Resourcing? Timing? Leadership commitment? Technology? Identify the actual root cause and explain (credibly) what's different this time. If nothing structural has changed, don't proceed. + +**3. Earn credibility through early evidence.** Don't ask for trust upfront. Instead, demonstrate competence through small, visible actions before asking for broader commitment. A successful pilot carries more weight than a compelling presentation when trust is low. + +**4. Engage prior skeptics early.** The people who said "this won't work" last time were right. Invite them in. Their credibility with the broader population is high precisely because they were vindicated. Converting them (or at least neutralizing their opposition) is higher-leverage than rallying existing supporters. + +**5. Build in explicit "we'll stop if" criteria.** Give people an off-ramp. Publish clear criteria under which the program would be paused or modified. This counteracts the "they'll push this through regardless" cynicism that poisons second attempts. + +## Communication Strategy + +### Communication Planning + +Design communications by audience, not by channel. Each audience needs a core message, appropriate channel, defined frequency, and clear ownership. 
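A plan structured by audience, as described above, can be sketched as one record per audience. The field names and example rows are illustrative assumptions, not a prescribed template:

```python
from dataclasses import dataclass

@dataclass
class AudiencePlan:
    audience: str
    core_message: str   # connect to "what's in it for me"
    channel: str        # managers are the most trusted channel for their teams
    frequency: str
    owner: str          # every message needs a named owner

plan = [
    AudiencePlan("Affected finance team",
                 "What month-end close looks like after go-live",
                 "Team meetings via line managers", "Weekly",
                 "Finance change champion"),
    AudiencePlan("All staff",
                 "Why the change is happening and the vision",
                 "Town hall + intranet", "Monthly",
                 "Communications lead"),
]
```

The discipline this enforces is the point: if a row has no owner or no frequency, that audience is not actually being communicated with.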
+ +**Communication timeline phases**: +- Announce: the "why" and the vision (all staff, town hall format) +- Educate: the details and what it means for each group (affected teams, training format) +- Reinforce: progress updates and course corrections (all staff, ongoing) + +**Communication principles**: +- Over-communicate. If you think you've communicated enough, you haven't +- Two-way channels matter more than broadcast +- Managers are the most trusted communication channel for their teams +- Connect every communication to "what's in it for me" + +### AI and Digital Change Communication + +For AI and digital transformations, communication must explicitly address: +- Job security concerns (proactively, not reactively) +- How AI augments rather than replaces human work +- Investment in learning and development +- How employee input shapes the AI strategy +- Transparency about AI use and its limitations +- Human judgment remaining central to decisions + +Common resistance patterns specific to digital/AI transformations: +- "The system will make mistakes and I'll be blamed" (address accountability) +- "I'll become less valuable if anyone can do my job with a tool" (reframe as skill evolution, not replacement) +- Data migration anxiety: people fear losing historical information or context (validate these fears, they're often justified) +- Tool fatigue from prior digital rollouts that were abandoned (treat as second-attempt change) + +## Training and Enablement + +### Training Design + +Assess current vs. target proficiency for each skill area and each affected group. Prioritize training by gap size and business impact. 
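The prioritization rule above (gap size and business impact) can be sketched as a simple ranking. The 1-5 scales and the gap-times-impact product are assumptions for illustration, not a standard formula:

```python
# Sketch: rank training needs by proficiency gap x business impact,
# per the guidance above. Scales and weighting are assumptions.

def prioritize_training(needs: list[dict]) -> list[dict]:
    """Each need: {'skill': str, 'current': 1-5, 'target': 1-5, 'impact': 1-5}."""
    for n in needs:
        n["gap"] = max(0, n["target"] - n["current"])
        n["priority"] = n["gap"] * n["impact"]
    return sorted(needs, key=lambda n: n["priority"], reverse=True)

needs = prioritize_training([
    {"skill": "S/4HANA month-end close", "current": 1, "target": 4, "impact": 5},
    {"skill": "New PO matching workflow", "current": 3, "target": 4, "impact": 3},
])
# Highest priority first: gap 3 x impact 5 = 15 beats gap 1 x impact 3 = 3.
```

Score per role, not per skill catalog: the same skill can be a gap of 3 for one group and 0 for another.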
+ +**Training approaches**: +- Personalized learning paths based on role and current skill level +- Microlearning modules for just-in-time application +- Hands-on practice in sandbox environments (especially for technology changes) +- Peer learning networks and communities of practice +- Coaching for leaders who must role-model new behaviors + +### Training Timing + +Don't train too early (people forget) or too late (people feel abandoned). The sweet spot is close enough to go-live that skills transfer to daily work, with refresher sessions in the weeks that follow. + +## Measuring ROI of the Change Workstream + +Clients (and CFOs) routinely ask whether the change management workstream itself is worth the investment. Have a measurement approach ready. + +**Change workstream ROI formula**: Compare the cost of the change program against the delta in adoption outcomes. The counterfactual is projects of similar scale and complexity without dedicated change management. + +| Metric | How to Measure | What It Shows | +|--------|---------------|---------------| +| Adoption speed | Time to reach target adoption rate vs. comparable projects without CM | Faster adoption = earlier benefit realization | +| Adoption ceiling | Peak adoption rate achieved | Programs without CM typically plateau 15-25 points lower | +| Productivity dip duration | Weeks of reduced productivity during transition | Dedicated CM shortens the dip by 30-50% in benchmarked programs | +| Rework and support cost | Help desk tickets, workarounds, errors in first 90 days | Lower post-go-live support cost offsets CM investment | +| Employee retention | Voluntary turnover during and after the change | Poorly managed change drives attrition; replacement costs dwarf CM budgets | + +**Practical approach**: Baseline the expected adoption curve without dedicated CM (use the organization's own history or industry benchmarks). Track the actual curve with CM. 
The area between the two curves, multiplied by the per-day value of full adoption, gives the incremental benefit of the change workstream; divide that benefit by the cost of the CM program to get ROI. Present this to the steering committee quarterly.

If the client has no prior benchmarks, use published data: Prosci's research consistently shows projects with excellent change management are 6x more likely to meet objectives than those with poor CM. Frame the CM budget as insurance on the project investment.

## Adoption Measurement

### Leading Indicators (predict future success)

- Percentage of employees aware of the change
- Percentage of managers prepared to lead the change
- Training completion rate
- Change saturation score (see Change Saturation Assessment section)

### Lagging Indicators (confirm success)

- Adoption rate (actual usage of new tools, processes, or behaviors)
- Proficiency level (quality of performance in the new way)
- Business outcome achieved (the reason you did this in the first place)
- Stakeholder satisfaction

### Pulse Survey Questions

Use short, frequent surveys to take the temperature:
1. I understand why this change is happening
2. I have the tools and training I need to be successful
3. My manager supports me through this change
4. I can see the benefits of the new way of working
5. I know where to go for help

### Reinforcement Mechanisms

| Mechanism | Timing |
|-----------|--------|
| Quick wins celebration | Weeks 2-4 |
| Public recognition of adopters | Ongoing |
| KPI alignment to new behaviors | Month 2+ |
| Coaching for struggling teams | Months 2-6 |
| Integration into standard operating procedures | Month 3+ |

## Cross-Cultural Change Design

Before designing a cross-cultural change program, assess these dimensions for each operating context:

| Dimension | Questions to Ask | What the Answers Tell You |
|-----------|-----------------|--------------------------|
| Decision-making | How are major decisions made here? Who needs to be in the room? 
Is silence agreement, processing, or disagreement? | Determines whether you need pre-alignment sessions, formal consensus processes, or directive announcements | +| Communication | Do people raise concerns publicly or privately? Is direct feedback welcomed or seen as confrontational? | Shapes your communication channels and feedback mechanisms | +| Authority and hierarchy | How many levels does a message travel before reaching frontline workers? Can middle managers make decisions, or do they escalate everything? | Determines your change champion structure and escalation paths | +| Pace and urgency | Does the organization value speed or thoroughness? Is "good enough and fast" preferred over "perfect and slow"? | Sets realistic timelines and helps avoid imposing one culture's pace on another | +| Labor relations | Are there works councils, unions, or consultation requirements? What are the legal obligations? | Hard constraints that must be built into the plan from Day 1 | + +**Practical guidance**: Don't run one global playbook. Design the core change architecture centrally (vision, milestones, measurement), but localize the engagement model, communication channels, and training formats based on what the diagnostic questions reveal. Assign regional change leads who understand local norms. The biggest failure mode in global programs is assuming that what works at HQ will work everywhere. + +## Principles + +- Change management is a discipline, not an afterthought. Budget for it, staff it, and track it +- Every change is a people change first, then a process or technology change +- Resistance is natural. Plan for it, don't be surprised by it +- Executive sponsorship must be visible and consistent. A sponsor who delegates everything isn't sponsoring +- Communication is necessary but not sufficient. Training is necessary but not sufficient. Both together are still not enough without reinforcement +- You can't change people. 
You can only create conditions for them to change themselves
- Start early. Change takes longer than the project plan suggests
- Celebrate wins, but don't declare victory prematurely

diff --git a/management-consulting/skills/due-diligence/SKILL.md b/management-consulting/skills/due-diligence/SKILL.md
new file mode 100644
index 00000000..d4e82d39
--- /dev/null
+++ b/management-consulting/skills/due-diligence/SKILL.md
@@ -0,0 +1,610 @@
---
name: due-diligence
description: Conduct commercial, operational, and strategic due diligence for M&A, investment, or partnership decisions. Use when assessing acquisition targets, investment opportunities, vendor evaluations, or any situation requiring rigorous business assessment and risk identification.
---

# Due Diligence

Assess business opportunities through rigorous analytical frameworks. This covers commercial, operational, financial, strategic, and technology due diligence, from scoping the engagement through risk synthesis and investment recommendation.

## Before You Begin

If the user hasn't provided key inputs, ask for them rather than fabricating deal details:
- What is the target company, its industry, and approximate size (revenue, headcount)?
- What is the transaction type (acquisition, PE investment, partnership, vendor assessment)?
- What financial data is available (actuals, management accounts, data room access)?
- Present assessment thresholds (e.g., customer concentration flags, cash conversion benchmarks) as general guidelines, not absolute rules. When using numbers the user didn't provide, flag them: "I'm assuming X based on [typical range for this deal type]. Please confirm or provide actuals."

## DD Types and When to Use Them

| DD Type | Core Question | Focus Areas |
|---------|---------------|-------------|
| Commercial | Can we win? | Market position, customers, growth, competitive dynamics |
| Operational | Can we run it? 
| Processes, systems, people, efficiency, scalability | +| Financial | Is it real? | Revenue quality, working capital, cash flow, projections | +| Strategic | Should we do it? | Strategic fit, synergies, integration, cultural compatibility | +| Technology & IP | Is it viable? | Architecture, technical debt, IP ownership, security | +| Legal & Regulatory | Is it clean? | Litigation, compliance, contracts, data privacy | + +Most transactions require at least commercial, operational, and financial DD. The mix depends on the deal. + +## Phase 1: Scope Definition + +Define the boundaries before doing any analysis. Unfocused DD wastes time and misses what matters. + +### Transaction Context + +Establish: +- **Transaction type**: Acquisition, PE investment, strategic partnership, vendor assessment, internal assessment +- **Target**: Company name, industry, size +- **Deal value**: Estimated range +- **Timeline**: How much time for DD +- **Access**: Data room contents, management availability, ability to speak with customers/suppliers +- **Team**: Who's doing the work, what expertise is available + +### Focus Area Prioritization + +| Area | Priority | Key Questions | Data Available? | +|------|----------|---------------|-----------------| +| Market | High/Med/Low | What must we understand about the market? | Y/N | +| Customers | High/Med/Low | What must we understand about the customer base? | Y/N | +| Operations | High/Med/Low | What must we understand about how the business runs? | Y/N | +| Financials | High/Med/Low | What must we validate about the numbers? | Y/N | +| Technology | High/Med/Low | What must we understand about the tech stack? | Y/N | +| Legal/Regulatory | High/Med/Low | What risks need legal review? | Y/N | + +Prioritize ruthlessly. Focus on what could kill the deal or materially change the price. 
+ +## Phase 2: Information Gathering + +### Standard Information Request List + +**Corporate:** +- Articles of incorporation +- Board minutes (last 2 years) +- Organizational charts +- Shareholder agreements +- Material contracts and amendments + +**Financial:** +- Audited financials (3-5 years) +- Monthly management accounts (last 24 months) +- Revenue by segment, product, geography, customer +- Cash flow statements +- Debt schedules and covenant compliance +- Budget vs. actual analysis (last 2 years) +- Tax returns and outstanding tax positions + +**Commercial:** +- Customer list with revenue by customer (last 3 years) +- Contract templates and key customer contracts +- Pricing history and discount schedules +- Sales pipeline and win/loss data +- Customer churn data and reasons +- NPS or customer satisfaction data + +**Operational:** +- Process documentation for key workflows +- Technology systems inventory +- Key vendor list with spend and contract terms +- Headcount by function, level, tenure +- Capacity utilization data +- Quality metrics and incident history + +**Technology:** +- Architecture diagrams +- Technical debt assessment (if available) +- Security audit results +- IP portfolio (patents, trademarks, trade secrets) +- Open-source dependency audit +- Development team metrics (deploy frequency, incident response) + +**Legal:** +- Pending or threatened litigation +- Regulatory filings and compliance status +- Material contract summary +- Insurance policies +- Data privacy compliance documentation +- FCPA/anti-bribery compliance program documentation (see FCPA/Anti-Corruption section) +- ESG reports, sustainability commitments, and environmental liabilities (see ESG section) + +### Minimum Viable DD Request (Time-Constrained) + +When time is short (PE secondary, small bolt-on, compressed timeline), request these first. They cover 80% of deal-critical information in 20% of the volume. 
+ +**Tier 1 (request immediately, review first)**: +- Last 2 years audited financials + trailing 12 months management accounts +- Revenue by customer (top 20 customers, 3 years) +- Customer churn/retention data +- Org chart + headcount by function +- Material contracts summary (top 10 by value) +- Pending litigation summary + +**Tier 2 (request immediately, review after Tier 1)**: +- Monthly revenue and gross margin detail (24 months) +- Sales pipeline and bookings data +- Key vendor contracts +- Technology architecture overview (1-pager) +- Cap table and shareholder agreements + +**Tier 3 (request if time permits or red flags emerge)**: +- Full data room contents per standard list above + +The goal is to identify deal-killers and major valuation issues within 5-7 business days. Anything that survives Tier 1 and Tier 2 review without red flags is likely worth the full DD investment. + +## Phase 3: Analysis + +### Commercial Due Diligence + +#### Market Assessment + +| Metric | Finding | Source | Confidence | +|--------|---------|--------|------------| +| Total addressable market (TAM) | $ | Industry reports, bottom-up analysis | H/M/L | +| Target's market share | % | Company data vs. market estimates | H/M/L | +| Market growth rate (CAGR) | % | Historical trend, analyst consensus | H/M/L | +| Market position | #X of Y competitors | Competitive analysis | H/M/L | + +Key questions: Is the market growing or shrinking? Is growth structural or cyclical? What disruption risks exist? How defensible is the target's position? + +#### Customer Analysis + +| Metric | Finding | Risk Level | Trend | +|--------|---------|------------|-------| +| Top 10 customer concentration | % of revenue | H/M/L | Improving/Stable/Worsening | +| Average contract value | $ | | Direction | +| Net revenue retention (NRR) | % | Above/Below 100% | Direction | +| Gross churn rate | % | vs. 
industry benchmark | Direction | +| Logo churn rate | % | Segment comparison | Direction | +| Average contract duration | months | vs. industry | Direction | + +Customer concentration above 20% in the top 3 customers is a yellow flag; above 40% is a red flag. Treat these as general rules of thumb: significance depends on contract duration, switching costs, and industry norms. NRR below 100% means the installed base is shrinking: the business must sell faster than it leaks. + +#### Revenue Quality + +| Metric | Finding | Assessment | +|--------|---------|------------| +| Recurring vs. one-time revenue | % recurring | Strong (>80%) / Moderate (50-80%) / Weak (<50%) (illustrative bands; vary significantly by industry. SaaS businesses typically target >90% recurring; project-based businesses may have structurally lower recurring percentages.) | +| Revenue recognition risks | Assessment | H/M/L | +| Backlog / committed revenue | $ | Coverage ratio vs. plan | +| Pricing power | Assessment | Expanding / Stable / Eroding | +| Cross-sell / upsell as % of new ACV | % | Growing or declining | + +#### Competitive Position + +| Factor | Target | Comp A | Comp B | Assessment | +|--------|--------|--------|--------|------------| +| Market share | % | % | % | Position and trajectory | +| Pricing | $ | $ | $ | Premium / Par / Discount | +| Differentiation | Claim | Claim | Claim | Sustainable? | +| Win rate vs. competitors | % | — | — | Strong / Weak | + +#### Unit Economics (SaaS and Subscription Businesses) + +For recurring revenue businesses, unit economics are the most revealing lens on business quality. A growing SaaS company can look healthy on a P&L while burning cash on unprofitable customer acquisition.
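The unit-economics arithmetic in this section is simple enough to sanity-check directly during DD. A minimal sketch in Python, using purely illustrative figures (not benchmarks from any specific deal):

```python
# Illustrative unit-economics arithmetic for a subscription business.
# All input figures below are hypothetical examples, not benchmarks.

arpa = 12_000            # average revenue per account, annual ($)
gross_margin = 0.75      # 75% gross margin
annual_churn = 0.10      # 10% logo churn (conservative basis)
cac = 15_000             # fully loaded customer acquisition cost ($)

# LTV = (ARPA * gross margin) / annual churn  -> $90,000 with these inputs
ltv = (arpa * gross_margin) / annual_churn
ltv_to_cac = ltv / cac                               # 6.0

# CAC payback in months, on a gross-margin basis
monthly_gm_per_account = arpa * gross_margin / 12
cac_payback_months = cac / monthly_gm_per_account    # 20 months

# Rule of 40: revenue growth % + EBITDA margin %
rule_of_40 = 0.35 + 0.05                             # 40% -> borderline

print(f"LTV ${ltv:,.0f}, LTV:CAC {ltv_to_cac:.1f}:1, "
      f"CAC payback {cac_payback_months:.0f} months, Rule of 40 {rule_of_40:.0%}")
```

Run the same arithmetic by segment and by channel, not just blended; a blended figure can hide a segment that is quietly unprofitable.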
+ +| Metric | Finding | Benchmark | Assessment | +|--------|---------|-----------|------------| +| Customer acquisition cost (CAC) | $ | Varies by segment | Blended and by channel | +| CAC payback period | months | <18 months (good), <12 months (strong) | Including gross margin | +| Lifetime value (LTV) | $ | Based on gross margin and churn | By segment if possible | +| LTV:CAC ratio | X:1 | >3:1 (healthy), >5:1 (strong or underinvesting) | By segment | +| Gross margin | % | >70% (SaaS), >50% (managed services) | By revenue type | +| Rule of 40 | Revenue growth % + EBITDA margin % | >40% (strong) | Trajectory matters more than snapshot | +| Burn multiple | Net burn / net new ARR | <1.5x (efficient), >2x (concern) | For pre-profit companies | +| Magic number | Net new ARR / prior quarter S&M spend | >0.75 (efficient), <0.5 (inefficient) | Sales efficiency indicator | + +**CAC calculation notes**: Fully loaded CAC includes sales and marketing salaries, commissions, marketing spend, sales tools, and allocated overhead. Many companies understate CAC by excluding components. Always ask for the build-up. + +**LTV calculation**: LTV = (Average revenue per account * Gross margin %) / Annual churn rate. Use logo churn for a conservative estimate, revenue churn for an optimistic one. For businesses with strong expansion revenue, net revenue retention can substitute for the churn component: LTV = (ARPA * Gross margin %) / (1 - NRR). Note that this substitution is only meaningful when NRR is below 100%; at or above 100%, the denominator goes to zero or negative, and a multi-year discounted model is needed instead. + +**LTV:CAC by segment**: Blended LTV:CAC can mask problems. The enterprise segment might be 5:1 while SMB is 1.5:1. If the growth strategy depends on SMB expansion, the blended number is misleading. + +#### Cohort Analysis (SaaS and Subscription Businesses) + +Cohort analysis is the single most revealing tool for subscription businesses. It shows the true behavior of customer groups over time, cutting through the aggregation that makes topline metrics look better than reality. + +**Revenue cohort analysis**: Group customers by signup quarter.
For each cohort, track cumulative revenue retention at 3, 6, 12, 18, 24 months. + +| Cohort | Month 0 (ARR) | Month 6 | Month 12 | Month 18 | Month 24 | +|--------|--------------|---------|----------|----------|----------| +| Q1 2023 | $1.0M | 95% | 88% | 82% | 78% | +| Q2 2023 | $1.2M | 93% | 85% | 79% | — | +| Q3 2023 | $1.4M | 90% | 81% | — | — | +| Q4 2023 | $1.6M | 87% | — | — | — | + +What to look for: +- **Improving cohorts over time**: Later cohorts retaining better = product-market fit improving, or better customer targeting. Bullish. +- **Deteriorating cohorts**: Later cohorts retaining worse = possible growth at the expense of quality, or market saturation forcing the company downmarket. Bearish. +- **Revenue retention above 100%**: Expansion revenue exceeding churn within cohorts. Strong signal for pricing power and product stickiness. +- **Cliff patterns**: Sharp drop at a specific month (e.g., Month 12) often indicates annual contract non-renewals. Check contract terms. +- **Cohort shape divergence**: If early cohorts show a different retention curve than recent ones, something structural changed. Investigate. + +**Logo cohort analysis**: Same structure but tracking customer count instead of revenue. Divergence between logo and revenue cohorts reveals whether you're losing small customers (less concerning) or large ones (very concerning). + +### Operational Due Diligence + +#### Process and Efficiency + +| Area | Finding | Risk | Improvement Potential | +|------|---------|------|---------------------| +| Capacity utilization | % | H/M/L | Assessment | +| Key process bottlenecks | Findings | H/M/L | Assessment | +| Automation level | % | H/M/L | Assessment | +| Quality metrics | Findings | H/M/L | Assessment | + +#### Technology Assessment + +| Area | Finding | Risk | Detail | +|------|---------|------|--------| +| Architecture scalability | Assessment | H/M/L | Can it support 3-5x growth? 
| +| Technical debt | Quantified estimate | H/M/L | Remediation cost and timeline | +| IP ownership and protection | Status | H/M/L | Patents, trade secrets, licenses | +| Security posture | Assessment | H/M/L | Last audit, certifications, incidents | +| Data architecture | Findings | H/M/L | Quality, governance, portability | +| Open-source dependencies | Audit status | H/M/L | License compliance, security | +| Development velocity | Metrics | H/M/L | Deploy frequency, lead time, MTTR | +| Cloud infrastructure | Status | H/M/L | Provider, costs, lock-in risk | + +#### Management and Team Assessment + +| Dimension | Finding | Risk | Detail | +|-----------|---------|------|--------| +| Leadership depth | Assessment | H/M/L | Bench strength below C-suite | +| Key person dependencies | Names/roles | H/M/L | Single points of failure | +| Succession planning | Status | H/M/L | Documented plans, readiness | +| Track record | Performance history | H/M/L | Delivery on past commitments | +| Cultural assessment | Findings | H/M/L | Values, decision-making, adaptability | +| Retention risk | Assessment | H/M/L | Turnover trends, engagement, comp benchmarking | +| Organizational structure | Assessment | H/M/L | Efficiency, spans of control, layers | + +Management assessment often predicts post-deal success better than financial analysis. A mediocre business with a strong team outperforms a strong business with a mediocre team. + +#### Management Interview Guide + +Management interviews are where you test the narrative against reality. The data room tells you what happened; interviews reveal why, and whether leadership understands their own business. + +**Core questions (ask every management team)**: +1. Walk me through how you win a new customer, from first contact to signed contract. What's your typical sales cycle? (Tests: process maturity, self-awareness about go-to-market) +2. Which customers have you lost in the last 12 months, and why? 
(Tests: honesty, customer understanding. Red flag: "We don't really lose customers" when churn data says otherwise) +3. What are the 2-3 things that keep you up at night about this business? (Tests: self-awareness, strategic thinking. Red flag: "Nothing, business is great") +4. If you had an extra $5M to invest in the business, where would you put it? (Tests: growth understanding, capital allocation thinking) +5. Tell me about a time something went significantly wrong operationally. What happened and what did you change? (Tests: learning orientation, operational resilience) +6. Who are your key people, and what happens if any of them leave? (Tests: talent awareness, succession planning, key person risk) +7. How do you set prices, and when was the last time you raised them? (Tests: pricing sophistication, competitive position) + +**SaaS-specific questions**: +8. Walk me through your cohort economics. How has CAC payback evolved over the last 2 years? (Tests: unit economics understanding) +9. What percentage of your revenue comes from customers who've been with you more than 2 years? (Tests: retention quality) +10. What's your product roadmap, and how much of it is customer-requested vs. market-driven? (Tests: product strategy, customer dependency) + +**Triangulation technique**: Ask the same factual question of multiple team members separately. Compare answers. Consistent answers build confidence. Divergent answers on factual matters (market size, competitive position, churn reasons) indicate either poor internal communication or deliberate narrative management. 
+ +**Red flags in management responses**: +- Inability to discuss unit economics or cohort metrics (for a data-driven business) +- Blaming external factors for all negative trends +- Defensive response to straightforward questions about customer losses or operational failures +- Answers that contradict data room documents +- Excessive preparation and scripting (often indicates PE coaching that may mask genuine understanding) +- "Trust me" or "You'll see when you talk to customers" in response to data requests + +### FCPA / Anti-Corruption Due Diligence + +Critical for any cross-border deal, and increasingly scrutinized in domestic transactions involving government-adjacent customers. + +| Area | What to Assess | Red Flags | +|------|---------------|-----------| +| Compliance program | Written anti-bribery policy, training records, reporting hotline, designated compliance officer | No written policy, no training records, compliance function reports to legal (conflict of interest) | +| Third-party intermediaries | Agents, distributors, consultants in high-risk jurisdictions; commission structures | Unusually high commissions (>15-20%), agents with no clear business rationale, payments to shell companies or personal accounts | +| Government touchpoints | Government customers, permits/licenses, state-owned enterprise relationships | Revenue concentration in government contracts without documented procurement processes | +| Payment patterns | Cash payments, payments to jurisdictions that don't match contract geography, split invoicing | Payments routed through intermediary jurisdictions, round-number payments with no supporting documentation | +| Historical issues | Prior investigations, self-disclosures, settlements | Any DOJ/SEC history is a major risk factor; successor liability can attach to acquirers | + +**Key principle**: Under the FCPA and UK Bribery Act, acquiring companies can inherit liability for the target's past conduct. 
DD is not optional for cross-border deals. If the target operates in Transparency International's bottom-third countries, FCPA DD should be Tier 1 priority. + +### ESG Due Diligence + +ESG DD has moved from "nice to have" to required for most institutional investors, many strategic acquirers, and any deal involving European targets (CSRD, EU Taxonomy). + +| Dimension | Key Questions | Valuation Impact | +|-----------|--------------|-----------------| +| Environmental | Emissions profile and reduction commitments? Environmental liabilities (contamination, remediation)? Carbon-intensive assets at risk of stranding? Regulatory exposure (carbon pricing, emissions caps)? | Remediation liabilities are direct price adjustments. Carbon-intensive assets may face accelerated depreciation. | +| Social | Labor practices and supply chain labor risk? DEI metrics and litigation history? Community relations and social license to operate? Health and safety record? | Labor violations create regulatory and reputational risk. Poor H&S records signal operational issues. | +| Governance | Board composition and independence? Executive compensation alignment with long-term value? Related-party transactions? Whistleblower mechanisms? | Governance gaps are often symptoms of deeper operational problems. | + +**Practical approach**: Don't boil the ocean. Focus ESG DD on (1) material financial exposures (environmental liabilities, pending regulation), (2) reputational risks that could affect customer retention or talent acquisition, and (3) alignment with the acquirer's own ESG commitments and reporting obligations. If the acquirer has net-zero commitments, the target's emissions profile directly affects the consolidated position. 
+ +### Financial Due Diligence + +#### Quality of Earnings + +| Item | Reported | Adjusted | Adjustment Reason | +|------|---------|---------|-------------------| +| Revenue | $ | $ | Non-recurring items, timing differences | +| EBITDA | $ | $ | One-time costs, owner compensation, related-party transactions | +| Net income | $ | $ | Normalizing adjustments | + +The gap between reported and adjusted EBITDA tells you how much the seller is dressing up the numbers. Adjustments exceeding 20% of reported EBITDA warrant extra scrutiny, as a general guideline, though significance depends on the nature of adjustments and business type (owner-operated businesses routinely require larger adjustments). + +#### Organic Growth Isolation (Roll-Ups and Serial Acquirers) + +For businesses that have grown by acquisition, reported revenue growth is misleading. You need to separate organic growth from acquired growth. + +**Organic growth = (current-year same-store revenue - prior-year same-store revenue) / prior-year same-store revenue.** "Same-store" means entities owned for the full comparable period in both years. + +| Metric | Calculation | What It Reveals | +|--------|------------|-----------------| +| Reported revenue growth | (Current year total revenue - Prior year total revenue) / Prior year total revenue | Total growth including acquisitions. Often impressive but uninformative. | +| Organic revenue growth | (Current year same-store revenue - Prior year same-store revenue) / Prior year same-store revenue | The underlying growth engine. This is what survives when acquisitions stop. | +| Acquisition contribution | Reported growth - Organic growth | How dependent the growth story is on M&A. | + +**If management can't provide the same-store split, flag it as a data quality concern.** This is a basic metric for any serial acquirer. Inability to produce it suggests either poor integration of financial systems or reluctance to show the organic number. Either is a problem.
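The same-store split reduces to a few lines of arithmetic. A minimal sketch, assuming a hypothetical entity-level revenue breakdown (all figures illustrative):

```python
# Separate organic from acquired growth for a serial acquirer.
# "Same-store" = entities owned for the full comparable period in both years.
# Entity names and revenue figures ($M) are hypothetical.

prior_year = {"Platform": 50.0, "BoltOn_A": 10.0}
current_year = {"Platform": 53.0, "BoltOn_A": 11.0, "BoltOn_B": 8.0}  # BoltOn_B acquired mid-year

same_store = set(prior_year) & set(current_year)
prior_ss = sum(prior_year[e] for e in same_store)      # 60.0
current_ss = sum(current_year[e] for e in same_store)  # 64.0

reported_growth = (sum(current_year.values()) - sum(prior_year.values())) / sum(prior_year.values())
organic_growth = (current_ss - prior_ss) / prior_ss
acquired_contribution = reported_growth - organic_growth

print(f"Reported growth: {reported_growth:.1%}")   # 20.0%
print(f"Organic growth:  {organic_growth:.1%}")    # 6.7%
print(f"From M&A:        {acquired_contribution:.1%}")
```

The gap between the first two numbers is the size of the story management is buying rather than building.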
+ +**Red flags in roll-up DD**: organic growth below industry average (the platform is buying growth, not creating it), declining organic growth over successive periods (integration is consuming management attention), and no margin improvement in acquired entities post-integration (no operational value-add from the platform). + +#### Working Capital + +| Component | Current | Trend | Seasonal Pattern | Cash Impact | +|-----------|---------|-------|-----------------|-------------| +| Accounts receivable | $ (X days) | Direction | Pattern | $ | +| Accounts payable | $ (X days) | Direction | Pattern | $ | +| Inventory | $ (X days) | Direction | Pattern | $ | +| Net working capital | $ | Direction | Pattern | Funding need | + +Working capital is where deals get renegotiated. Establish a normalized working capital figure and tie the purchase price to it. Seasonal businesses require month-by-month analysis. + +**Capital-intensive service businesses** (staffing, consulting, facilities management, outsourced services): The cash conversion cycle creates material risk that doesn't show up in standard working capital ratios. The core dynamic is weekly payroll out, 45-60 day receivables in. A fast-growing staffing firm can be profitable on paper while running out of cash. Model the payroll-to-collection gap explicitly, stress-test it against revenue growth scenarios, and assess whether existing credit facilities can fund the gap at scale. + +#### Capital Expenditure + +| Category | Historical (3-year avg) | Forecast | Maintenance vs. Growth | +|----------|------------------------|----------|----------------------| +| Category 1 | $/yr | $/yr | Split | +| Category 2 | $/yr | $/yr | Split | + +Distinguish maintenance capex (required to keep the business running) from growth capex (investment in expansion). Underinvestment in maintenance capex flatters short-term earnings but creates a liability. 
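The payroll-to-collection gap flagged under working capital above can be modeled with a crude but useful approximation: cash tied up is roughly weekly payroll times the collection lag in weeks. A sketch with hypothetical figures:

```python
# Approximate the payroll-to-collection funding gap for a staffing-style business.
# Simplification: billed amounts roughly equal payroll (ignores margin timing).
# All figures are hypothetical.

def funding_gap(weekly_payroll, dso_days):
    """Cash tied up funding payroll before matching receivables are collected."""
    return weekly_payroll * (dso_days / 7)

# Base case: $500k weekly payroll, 55-day DSO -> roughly $3.9M permanently tied up
base = funding_gap(weekly_payroll=500_000, dso_days=55)

# Stress test: the gap scales with revenue, so fast growth consumes cash
for growth in (0.0, 0.25, 0.50):
    gap = funding_gap(500_000 * (1 + growth), 55)
    print(f"Revenue growth {growth:.0%}: funding gap ~ ${gap / 1e6:.1f}M")
```

Compare each stressed gap against available credit facilities; if growth outruns the facility, profitable revenue becomes a liquidity problem.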
+ +#### Cash Flow + +| Metric | Year -2 | Year -1 | Current | Trend | +|--------|---------|---------|---------|-------| +| Operating cash flow | $ | $ | $ | Direction | +| Free cash flow | $ | $ | $ | Direction | +| Cash conversion (FCF/EBITDA) | % | % | % | Direction | + +For established, asset-light businesses, cash conversion below 70% needs explanation. Capital-intensive industries may have structurally lower conversion. Common culprits: growing working capital, high capex, or earnings quality issues. + +## Phase 4: Risk Assessment + +### Risk Categorization + +**Critical risks (deal killers)** — Issues that could make the deal unviable: + +| Risk | Likelihood | Impact | Mitigation | +|------|------------|--------|------------| +| Risk description | H/M/L | H/M/L | What can be done | + +Examples: undisclosed litigation, regulatory non-compliance, fraud indicators, irreplaceable key person with no retention plan, market in structural decline. + +**Major risks (deal adjustments)** — Issues that materially affect valuation or deal terms: + +| Risk | Likelihood | Impact | Mitigation | +|------|------------|--------|------------| +| Risk description | H/M/L | H/M/L | What can be done | + +Examples: customer concentration, technical debt requiring significant remediation, management gaps, integration complexity. + +**Minor risks (price adjustments)** — Issues that affect value but are manageable: + +| Risk | Likelihood | Impact | Mitigation | +|------|------------|--------|------------| +| Risk description | H/M/L | H/M/L | What can be done | + +Examples: operational inefficiencies (often upside opportunities), minor compliance gaps, below-market compensation structures.
+ +### Red Flag Indicators + +Watch for these; any one of them warrants deeper investigation: + +- Revenue acceleration in the run-up to sale (pulling revenue forward) +- Unusual changes in accounting policies or estimates +- Customer concentration increasing while being presented as "diversified" +- Key employees departing in the months before the transaction +- Capital expenditure declining while revenue grows (underinvestment) +- Working capital trends diverging from revenue trends +- Related-party transactions at non-market terms +- Gaps or inconsistencies between management presentations and data room documents +- Reluctance to provide access to customers or key employees + +Red flags are not necessarily deal killers. They're signals to investigate further. Sometimes the explanation is benign. Sometimes it changes the deal. + +## Phase 5: Synthesis and Recommendation + +### Investment Thesis + +Frame the deal in terms of: +1. **What makes this attractive** — the strategic rationale and value creation opportunity +2. **What could go wrong** — the key risks and their mitigations +3. **What the deal is worth** — implied valuation given the findings + +### Bridging DD Findings to Valuation + +DD findings should directly inform valuation methodology selection and multiple adjustments. Every material finding translates to a valuation impact. + +**How DD findings affect valuation approach**: + +| DD Finding | Valuation Impact | Mechanism | +|------------|-----------------|-----------| +| High revenue quality (>80% recurring, strong NRR) | Higher multiple | Supports premium to peer multiples. Revenue predictability reduces risk premium. | +| Customer concentration (>30% in top 3) | Lower multiple or earn-out structure | Discount to peers, or structure portion of consideration as earn-out tied to customer retention. | +| Strong unit economics (LTV:CAC >4:1) | Higher multiple | Validates growth investment. Each dollar of S&M spend generates predictable returns.
| +| Deteriorating cohorts | Lower multiple | Recent customers are less valuable than historical averages suggest. Adjust revenue projections downward. | +| Technical debt requiring >$2M remediation | Direct price adjustment | Deduct estimated remediation cost from enterprise value. | +| Key person dependency (no succession plan) | Retention package + price adjustment | Cost of retention packages deducted from price. Remaining risk discounts the multiple. | +| Operational inefficiency (below-benchmark margins) | Can increase or decrease value | If acquirer can fix: upside (value creation). If structural: margin risk (discount). | +| Regulatory risk | Escrow or indemnity | Don't discount the price; structure protection through escrow, indemnity, or reps & warranties. | + +**Multiple adjustment framework**: Start with comparable company median multiple. Apply adjustments: + +| Factor | Adjustment | Rationale | +|--------|------------|-----------| +| Growth premium/discount | +/- 1-3x | Revenue growth vs. peer median | +| Margin premium/discount | +/- 0.5-1.5x | EBITDA margin vs. peer median | +| Revenue quality | +/- 0.5-2x | Recurring %, NRR, contract duration | +| Market position | +/- 0.5-1x | #1-2 vs. #4-5 in category | +| Customer risk | - 0.5-2x | Concentration, churn, cohort trends | +| Management quality | +/- 0.5-1x | Track record, depth, PE readiness | +| Net adjustment | Sum | Implied target multiple vs. peer median | + +### Recommendation Format + +``` +## Due Diligence Summary: [Target] + +### Investment Thesis +[One paragraph: why this deal makes sense or doesn't] + +### Key Strengths +1. [Strength with evidence] +2. [Strength with evidence] + +### Key Concerns +1. [Concern with evidence and mitigation] +2. 
[Concern with evidence and mitigation] + +### Risk Assessment +| Category | Risk Level | Key Risks | +|----------|------------|-----------| +| Commercial | H/M/L | [Risks] | +| Operational | H/M/L | [Risks] | +| Financial | H/M/L | [Risks] | +| Strategic | H/M/L | [Risks] | +| Technology | H/M/L | [Risks] | + +### Valuation Implications +| Factor | Adjustment | +|--------|------------| +| Revenue quality adjustments | +/-$ or % | +| Customer risk discount | -$ or % | +| Operational improvement upside | +$ or % | +| Integration costs | -$ | +| Net adjustment | $ or % | + +### Recommendation +[PROCEED / PROCEED WITH CONDITIONS / DO NOT PROCEED] + +### Conditions Precedent (if proceeding) +1. [Condition — rationale] +2. [Condition — rationale] + +### Next Steps +1. [Action — owner — timeline] +2. [Action — owner — timeline] +``` + +## Integration Assessment (M&A Context) + +When DD is for an acquisition, integration planning starts during DD, not after close. + +### Integration Complexity + +| Area | Complexity | Timeline | Key Dependencies | Cost Estimate | +|------|-----------|----------|-----------------|---------------| +| Systems integration | H/M/L | Months | Dependencies | $ | +| Organization integration | H/M/L | Months | Dependencies | $ | +| Customer migration | H/M/L | Months | Dependencies | $ | +| Process harmonization | H/M/L | Months | Dependencies | $ | +| Culture integration | H/M/L | Months | Dependencies | $ | + +### Synergy Quantification + +| Synergy | Type | Year 1 | Year 2 | Year 3 | Confidence | Risk | +|---------|------|--------|--------|--------|------------|------| +| Revenue synergy | Revenue | $ | $ | $ | H/M/L | Timing risk | +| Cost synergy 1 | Cost | $ | $ | $ | H/M/L | Execution risk | +| Cost synergy 2 | Cost | $ | $ | $ | H/M/L | Execution risk | + +Cost synergies are generally more reliable than revenue synergies. Revenue synergies take longer to materialize and depend on customer behavior you can't fully control. 
Revenue synergies typically warrant significant haircuts. The appropriate discount depends on synergy type, execution confidence, and acquirer track record. + +### Day 1 Readiness + +- Communication plan for employees, customers, vendors +- Interim operating model defined +- Key talent retention packages in place +- Regulatory approvals obtained +- IT systems access and continuity plan +- Customer-facing team briefed and scripted + +## PE Investment Context + +When conducting DD for a PE investment (buyout, growth equity, or add-on acquisition), the frame shifts from "should we buy this?" to "can we create value and exit profitably?" + +### 100-Day Plan Structure + +The 100-day plan bridges DD findings to value creation execution. Draft it during DD, refine it pre-close, execute from Day 1. + +**Days 1-30: Stabilize and Assess** +- Confirm DD findings with full data access (post-close you get everything) +- Retain key talent (sign retention agreements, clarify roles, address uncertainty) +- Establish reporting cadence and KPI dashboards +- Identify and address any "Day 1 surprises" (issues that weren't visible in DD) +- Quick wins: implement 2-3 low-effort improvements identified during DD + +**Days 31-60: Design and Prioritize** +- Finalize value creation plan with prioritized initiatives +- Size each initiative: revenue impact, cost impact, investment required, timeline +- Assign initiative owners and establish accountability structure +- Begin recruiting for identified management gaps +- Launch strategic pricing review (often the fastest path to margin improvement) + +**Days 61-100: Launch and Execute** +- Kick off top 3-5 value creation initiatives +- Establish board reporting format and governance rhythm +- Complete first monthly operating review with new KPI framework +- Validate or adjust year 1 financial plan based on first 60 days of actuals +- Finalize technology roadmap and investment timeline + +### Exit Scenario Analysis + +Model three exit scenarios 
at entry to establish return expectations and inform hold period strategy. + +| Scenario | Revenue at Exit | EBITDA at Exit | Exit Multiple | Enterprise Value | Equity Value | IRR | MOIC | +|----------|----------------|----------------|---------------|-----------------|-------------|-----|------| +| Bull | $ | $ | X.Xx | $ | $ | % | X.Xx | +| Base | $ | $ | X.Xx | $ | $ | % | X.Xx | +| Bear | $ | $ | X.Xx | $ | $ | % | X.Xx | + +**Key assumptions to stress-test**: +- Revenue growth rate: What if growth is 50% of plan? +- Margin expansion: What if operational improvements deliver half the expected margin gain? +- Exit multiple: What if multiples compress by 2-3 turns from entry? +- Hold period: What if exit takes 6-7 years instead of 4-5? + +**PE-specific DD questions**: +- Management incentive alignment: Is the team properly incentivized for a PE hold period? Do equity arrangements align with value creation and exit timeline? +- Add-on acquisition pipeline: Are there bolt-on targets that could accelerate growth or expand capabilities? +- PE readiness of the management team: Can they operate with PE-level reporting, governance, and performance expectations? +- Prior PE ownership: If the business has been PE-owned before, what was done and what's left to do? 
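The return math behind the scenario table can be reproduced in a few lines. A sketch with hypothetical entry and exit figures (illustrative only, not guidance):

```python
# Illustrative PE exit-scenario math: MOIC and IRR on equity.
# All deal figures are hypothetical.

def moic(equity_out, equity_in):
    """Multiple on invested capital."""
    return equity_out / equity_in

def irr(equity_out, equity_in, years):
    """Annualized return implied by a single entry and a single exit cash flow."""
    return (equity_out / equity_in) ** (1 / years) - 1

scenarios = {
    # name: (EBITDA at exit $M, exit multiple, net debt at exit $M)
    "Bull": (30.0, 12.0, 40.0),
    "Base": (24.0, 10.0, 50.0),
    "Bear": (18.0, 8.0, 60.0),
}
equity_in, hold_years = 100.0, 5

for name, (ebitda, multiple, net_debt) in scenarios.items():
    enterprise_value = ebitda * multiple
    equity_out = enterprise_value - net_debt
    print(f"{name}: EV ${enterprise_value:.0f}M, equity ${equity_out:.0f}M, "
          f"MOIC {moic(equity_out, equity_in):.1f}x, "
          f"IRR {irr(equity_out, equity_in, hold_years):.0%}")
```

Single-period IRR like this ignores interim distributions; for deals with dividend recaps or earn-outs, compute IRR over the full cash-flow schedule (e.g., with `numpy-financial`'s `irr`) instead.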
+ +## Context Adaptation + +Adapt the DD approach based on the deal context: + +| Context | Emphasis | +|---------|----------| +| **M&A** | Synergy assessment, integration complexity, valuation adjustments, Day 1 readiness | +| **PE Investment** | Value creation levers, exit scenarios, management incentive alignment, 100-day plan | +| **Strategic Partnership** | Capability complementarity, cultural fit, governance model, IP sharing terms | +| **Vendor Assessment** | Operational reliability, financial stability, contractual protections, business continuity | +| **Internal Assessment** | Capability gaps, improvement priorities, investment needs (drop M&A terminology) | + +## Working Principles + +- **Focus on materiality.** Prioritize issues that could kill the deal or change the price by more than 5%. Don't spend equal time on everything. +- **Triangulate everything.** Management tells one story. The data room tells another. Customers and suppliers tell a third. The truth is somewhere in the overlap. +- **Red flags are negotiation tools, not always walk-away signals.** A customer concentration risk discovered in DD becomes a price adjustment or an earn-out structure. +- **Document all assumptions and limitations.** What you couldn't verify is as important as what you confirmed. Future you (or the lawyer) will need to know. +- **Connect findings to valuation.** Every DD finding should translate to "and that means the deal is worth more/less/the same because..." (See Bridging DD Findings to Valuation section.) +- **Start integration planning during DD.** The information you gather during DD is the foundation for the integration plan. Don't throw it over the wall and start fresh. +- **Operational DD reveals upside.** Financial DD finds problems. Operational DD often finds improvement opportunities, inefficiencies the acquirer can fix, capabilities the acquirer can scale. 
+- **Talk to customers and suppliers when possible.** Management representations are necessary but insufficient. External validation changes the picture more often than you'd expect.
diff --git a/management-consulting/skills/implementation-planning/SKILL.md b/management-consulting/skills/implementation-planning/SKILL.md
new file mode 100644
index 00000000..822a0f4e
--- /dev/null
+++ b/management-consulting/skills/implementation-planning/SKILL.md
@@ -0,0 +1,644 @@
+---
+name: implementation-planning
+description: Bridge strategy to execution for consulting engagements through option evaluation, business case construction, roadmap design, and implementation planning. Use when translating strategic recommendations into investable, executable plans with workstreams, milestones, resource requirements, and governance. Covers option generation and scoring, cost-benefit analysis, phasing, dependencies, and risk-adjusted timelines.
+---
+
+# Implementation Planning
+
+Translate strategic recommendations into concrete, funded, governed plans that organizations can actually execute. This covers four connected stages: generating and evaluating options, building the business case, designing the roadmap, and developing the implementation plan.
+
+## Before You Begin
+
+A plan built on assumptions instead of facts is one nobody can execute. Ask for:
+- What is the strategic recommendation or decision that this plan serves?
+- What budget and resource constraints exist?
+- What is the organizational capacity for change (other initiatives underway, team bandwidth)?
+- When using cost estimates, benefit projections, or timeline assumptions the user hasn't confirmed, label them as illustrative: "I'm assuming X based on [comparable engagements / industry benchmarks]. Please confirm or provide your actual figure."
+
+---
+
+## The Strategy-to-Execution Arc
+
+Strategy creates no value until it is executed, and most initiatives falter in the gap between recommendation and execution. This skill covers that gap systematically.
+ +The four stages flow naturally but don't always run sequentially. Sometimes you start with options because the path isn't clear. Sometimes the recommendation is already made and you need to jump straight to business case and planning. Meet the work where it is. + +### When to Start Where + +| Situation | Start At | Skip | +|-----------|----------|------| +| Decision not yet made, multiple paths possible | Stage 1 (Options) | Nothing | +| Recommendation exists, needs funding approval | Stage 2 (Business Case) | Stage 1 | +| Approved and funded, needs execution plan | Stage 3 (Roadmap) | Stages 1-2 | +| Roadmap exists, needs operational detail | Stage 4 (Implementation) | Stages 1-3 | +| Existing plan needs restructuring | Diagnose first, then appropriate stage | Depends | + +### Depth Calibration + +Match output depth to the request: + +- **Decision memo** (500-1,000 words): Executive summary, option comparison table, recommendation. Use when time-pressed or early-stage. +- **Working analysis** (2,000-4,000 words): Full option evaluation, financial summary, phased roadmap. Standard deliverable. +- **Deep dive** (4,000-8,000 words): All four stages with detailed financials, workstreams, governance. Full implementation plan. + +When not specified, default to working analysis depth. + +--- + +## Stage 1: Strategic Option Development + +Before committing to a path, develop genuinely different options. Not three flavors of the same thing... structurally different approaches that represent real strategic choices. + +### Define the Decision + +Clarify what's being decided before generating options. + +- **The Question**: What exactly needs to be decided? One sentence. +- **Current State**: Where we are today, with data. +- **Desired State**: Where we want to be. +- **Constraints**: Budget limits, timeline requirements, organizational capacity, regulatory boundaries. 
+- **Success Criteria**: What matters most in evaluating options, and how much each criterion matters relative to the others. + +Define weighted evaluation criteria upfront: + +| Criterion | Weight | Definition of Success | +|-----------|--------|----------------------| +| [Criterion 1] | X% | [What "good" looks like] | +| [Criterion 2] | X% | [What "good" looks like] | +| [Criterion 3] | X% | [What "good" looks like] | +| Total | 100% | | + +### Generate Options + +Create 3-5 genuinely differentiated options. Each should represent a distinct strategic posture, not just incremental variations. Always include "do nothing" as a baseline. + +For each option: + +- **Name and description**: What this option entails in plain language +- **Approach**: The key moves and sequencing +- **Pros and cons**: Honest assessment, not a sales pitch +- **Resource requirements**: Investment, timeline, capabilities needed +- **Risk profile**: What could go wrong, and how badly + +Good option sets include a range: a conservative option, an aggressive option, and something in between. If all your options look similar, you haven't explored the space. + +**The "Do Nothing" Baseline**: This is not a throwaway. Model it with the same rigor as other options: +- Projected costs over 3-5 years (including cost escalation) +- Competitive deterioration (what happens as competitors move) +- Opportunity cost (revenue/margin you forgo) +- Risk accumulation (regulatory, technical debt, talent attrition) + +The "do nothing" baseline is the denominator against which all investment returns are measured. A weak baseline makes every option look good. A rigorous one forces honest comparison. 
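The cost-of-inaction projection is simple enough to model directly. A sketch with illustrative placeholders (the run-rate cost, escalation rate, and forgone margin are assumptions to replace with client data):

```python
def do_nothing_baseline(run_cost, escalation, opportunity_cost, years=5):
    """Project the annual and cumulative cost of inaction.

    run_cost escalates by `escalation` each year (compounding);
    opportunity_cost is the forgone revenue/margin, held flat here.
    """
    by_year = [run_cost * (1 + escalation) ** y + opportunity_cost
               for y in range(years)]
    return by_year, sum(by_year)

# Illustrative: $2.0M run cost escalating 8%/yr, $1.5M/yr forgone margin
by_year, total = do_nothing_baseline(run_cost=2.0, escalation=0.08,
                                     opportunity_cost=1.5, years=5)
print([round(c, 2) for c in by_year], f"cumulative: ${total:.1f}M")
# -> [3.5, 3.66, 3.83, 4.02, 4.22] cumulative: $19.2M
```

Extending the model with competitive deterioration or risk accumulation is a matter of adding terms per year; the point is that the baseline is modeled, not asserted.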
+ +### Evaluate Options + +Score each option against the weighted criteria: + +| Criterion | Weight | Option A | Option B | Option C | Do Nothing | +|-----------|--------|----------|----------|----------|------------| +| [Criterion 1] | X% | [1-5] | [1-5] | [1-5] | [1-5] | +| [Criterion 2] | X% | [1-5] | [1-5] | [1-5] | [1-5] | +| [Criterion 3] | X% | [1-5] | [1-5] | [1-5] | [1-5] | +| **Weighted Score** | **100%** | **X.X** | **X.X** | **X.X** | **X.X** | + +Scoring guide: +- 5 = Fully meets criterion +- 4 = Substantially meets criterion +- 3 = Partially meets criterion +- 2 = Minimally meets criterion +- 1 = Does not meet criterion + +Beyond the quantitative scoring, assess each option qualitatively: +- **Strategic fit**: How well does it align with broader organizational direction? +- **Feasibility**: Can this organization actually pull this off? +- **Stakeholder support**: Will the people who matter get behind it? +- **Reversibility**: How hard is it to course-correct if this doesn't work? + +### Scenario-Test the Recommendation + +Stress-test the leading option(s) under different conditions. Define scenarios with specific assumptions, not just "things go well/badly." + +| Scenario | Key Assumptions | Option A NPV | Option B NPV | Do Nothing Cost | +|----------|----------------|-------------|-------------|-----------------| +| Optimistic | [e.g., 20% faster adoption, 10% lower costs] | $XX | $XX | $XX | +| Base | [e.g., plan assumptions hold] | $XX | $XX | $XX | +| Pessimistic | [e.g., 6-month delay, 25% cost overrun] | $XX | $XX | $XX | + +Run sensitivity analysis on the key variables. Identify break-even thresholds: at what point does the recommendation flip? + +- Which criteria weights, if shifted, would change the recommendation? +- Which cost or benefit assumptions have the biggest swing on NPV? +- What's the break-even adoption rate / timeline / cost level? + +If small changes flip the answer, the decision is closer than it looks. Say so. 
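The weighted scoring and the weight-sensitivity check can be made mechanical. A sketch where the criteria, weights, and scores are all illustrative:

```python
def weighted_score(scores, weights):
    """Weighted sum of 1-5 criterion scores; weights must total 100%."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"
    return sum(scores[c] * w for c, w in weights.items())

# Illustrative criteria and 1-5 scores (replace with the engagement's own)
weights = {"strategic_fit": 0.40, "cost_to_implement": 0.35, "speed_to_value": 0.25}
options = {
    "Option A":   {"strategic_fit": 4, "cost_to_implement": 3, "speed_to_value": 2},
    "Option B":   {"strategic_fit": 3, "cost_to_implement": 4, "speed_to_value": 4},
    "Do nothing": {"strategic_fit": 1, "cost_to_implement": 4, "speed_to_value": 5},
}
ranked = sorted(options, key=lambda o: weighted_score(options[o], weights), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(options[name], weights):.2f}")

# Weight sensitivity: does shifting any single weight by 10 points flip the leader?
leader = ranked[0]
for crit in weights:
    for delta in (-0.10, 0.10):
        shifted = dict(weights)
        shifted[crit] = max(0.0, shifted[crit] + delta)
        norm = sum(shifted.values())
        shifted = {c: v / norm for c, v in shifted.items()}  # renormalize to 100%
        new_leader = max(options, key=lambda o: weighted_score(options[o], shifted))
        if new_leader != leader:
            print(f"Leader flips to {new_leader} if {crit} shifts by {delta:+.0%}")
```

If the flip check prints anything, the decision is weight-sensitive and the recommendation should say so explicitly.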
+ +### Make the Recommendation + +State it clearly. A recommendation has: + +1. **The choice**: Which option, stated without hedging +2. **The rationale**: Why this option, grounded in the evaluation +3. **The trade-offs**: What you're giving up, and why that's acceptable +4. **The fallback**: If this option becomes infeasible, what's the next-best alternative +5. **Immediate next steps**: What happens right now to move forward + +--- + +## Stage 2: Business Case + +The business case answers one question: is this worth doing? It must convince decision-makers to fund the initiative and provide the financial framework for tracking value delivery. + +### Structure the Case + +A business case that works has these sections, roughly in this order: + +1. **Executive Summary** (stands alone, one page max) +2. **Problem Statement and Cost of Inaction** +3. **Current State with Baseline Metrics** +4. **Proposed Solution and Future State** +5. **Financial Analysis** (with confidence levels) +6. **Implementation Overview** +7. **Risks and Mitigations** +8. **Recommendation and "So What"** + +### Executive Summary + +This is the decision brief. Many stakeholders will read nothing else. It must contain the problem, the solution, the investment required, the expected return, and a clear recommendation. + +- **The Challenge**: 1-2 sentences on the problem +- **The Opportunity**: What we can achieve +- **The Investment**: Total capital and operating cost +- **The Return**: NPV, IRR, payback period, ROI +- **The Recommendation**: Proceed / Do not proceed / Proceed with conditions + +### Problem Statement and Cost of Inaction + +Quantify the problem. "We're losing money" is not a problem statement. "We're losing $4.2M annually in customer churn driven by post-sale service failures" is. 
+ +| Pain Point | Annual Impact | Frequency | Trend | +|------------|--------------|-----------|-------| +| [Pain 1] | $[Amount] | [How often] | [Getting better/worse] | +| [Pain 2] | $[Amount] | [How often] | [Getting better/worse] | + +Calculate the cost of inaction explicitly. What happens over 3-5 years if we do nothing? This is the baseline against which the investment is compared. + +| Cost of Inaction | Year 1 | Year 2 | Year 3 | Cumulative | +|------------------|--------|--------|--------|------------| +| [Cost driver 1] | $X | $X | $X | $X | +| [Cost driver 2] | $X | $X | $X | $X | +| **Total** | **$X** | **$X** | **$X** | **$X** | + +### Current State Baseline + +Establish measurable baselines. You can't show improvement without a starting point. + +| Metric | Current Value | Industry Benchmark | Gap | +|--------|--------------|-------------------|-----| +| [Metric 1] | [Value] | [Benchmark] | [Gap] | +| [Metric 2] | [Value] | [Benchmark] | [Gap] | + +### Financial Analysis + +This is the core of the business case. Build it in layers, and tag each line with a confidence level. 
+ +**Confidence levels**: Mark each projection line: +- **High**: Based on actual data, contracts, or historical performance +- **Medium**: Based on reasonable estimates, benchmarks, or analogous cases +- **Low**: Dependent on assumptions that haven't been validated (negotiation outcomes, market conditions, adoption rates) + +**Investment Required** (what we're spending): + +| Category | Confidence | Year 0 | Year 1 | Year 2 | Year 3 | Total | +|----------|-----------|--------|--------|--------|--------|-------| +| Capital expenditure | [H/M/L] | $X | $X | $X | $X | $X | +| Implementation costs | [H/M/L] | $X | $X | | | $X | +| Change management | [H/M/L] | $X | $X | | | $X | +| Training | [H/M/L] | $X | $X | | | $X | +| Contingency (10-15%) | | $X | $X | $X | $X | $X | +| **Total Investment** | | **$X** | **$X** | **$X** | **$X** | **$X** | + +**Benefits Realization** (what we're getting back): + +| Benefit | Type | Confidence | Year 1 | Year 2 | Year 3 | Total | +|---------|------|-----------|--------|--------|--------|-------| +| [Revenue benefit] | Top-line | [H/M/L] | $X | $X | $X | $X | +| [Cost reduction] | Bottom-line | [H/M/L] | $X | $X | $X | $X | +| [Risk avoidance] | Quantified risk | [H/M/L] | $X | $X | $X | $X | +| [Productivity gain] | Efficiency | [H/M/L] | $X | $X | $X | $X | +| **Total Benefits** | | | **$X** | **$X** | **$X** | **$X** | + +**Return Metrics**: + +| Metric | Value | Hurdle Rate | Assessment | +|--------|-------|-------------|------------| +| NPV | $XX | > $0 | [Pass/Fail] | +| IRR | XX% | [Org hurdle] | [Pass/Fail] | +| Payback Period | X years | [Org threshold] | [Pass/Fail] | +| ROI | XX% | [Org threshold] | [Pass/Fail] | + +**Sensitivity Analysis** (how robust are these numbers): + +Test the variables with the biggest swing on NPV: + +| Variable | -20% | Base Case | +20% | NPV Swing | +|----------|------|-----------|------|-----------| +| Benefits realization | $XX | $XX | $XX | $XX | +| Cost overrun | $XX | $XX | $XX | $XX | +| 
Timeline delay | $XX | $XX | $XX | $XX | +| Adoption rate | $XX | $XX | $XX | $XX | + +Order by NPV swing (largest first). The top 2-3 variables are where management attention should focus. State explicitly: "The business case remains positive even at [X]% benefit realization, which is the break-even threshold." + +**Total Cost of Ownership** (the real long-term cost): + +| Component | Years 1-3 | Years 4-5 | Years 6-10 | Total | +|-----------|-----------|-----------|------------|-------| +| Initial investment | $X | | | $X | +| Ongoing licensing/ops | $X | $X | $X | $X | +| Maintenance | $X | $X | $X | $X | +| Training & support | $X | $X | $X | $X | +| **TCO** | **$X** | **$X** | **$X** | **$X** | + +### Risk Assessment + +| Risk | Likelihood | Impact | Mitigation | Residual Risk | +|------|------------|--------|------------|---------------| +| [Risk 1] | [H/M/L] | [H/M/L] | [What we'll do] | [After mitigation] | +| [Risk 2] | [H/M/L] | [H/M/L] | [What we'll do] | [After mitigation] | + +### Business Case "So What" + +Close the business case with a synthesis that distills the entire analysis into three things: + +1. **The single most important number**: The one figure that justifies the investment (e.g., "$14.2M NPV over 5 years at a 72% confidence-weighted basis") +2. **The single biggest risk**: The one thing most likely to derail value delivery, and what you're doing about it +3. **The one thing that must go right**: The critical success factor without which the business case breaks (e.g., "achieving 80%+ adoption by month 6") + +This is not a summary. It's a synthesis: what does all this analysis mean for the decision? + +--- + +## Stage 3: Roadmap Design + +The roadmap answers: what happens when, and in what order? It translates the approved business case into a sequenced plan with phases, milestones, and dependencies. 
+ +### Define the Planning Horizon + +- **Time horizon**: 1 year, 3 years, 5 years (match to initiative scope) +- **Number of phases**: Typically 3-4 (more phases means less clarity per phase) +- **Phase logic**: What drives the sequencing? Dependencies, risk reduction, value delivery, organizational readiness? + +### Design Phase Structure + +Each phase needs a clear objective, a reason it comes in this order, and criteria for moving to the next phase. + +``` +## Phase 1: [Name] ([Duration]) + +### Objective +[What this phase achieves and why it comes first] + +### Key Initiatives +| Initiative | Description | Priority | Dependencies | +|------------|-------------|----------|--------------| +| [Initiative 1] | [What it involves] | [High/Med] | [What must come first] | +| [Initiative 2] | [What it involves] | [High/Med] | [What must come first] | + +### Deliverables +- [Concrete deliverable 1] +- [Concrete deliverable 2] + +### Success Criteria +| Criterion | Target | How Measured | +|-----------|--------|-------------| +| [Criterion 1] | [Specific target] | [Measurement method] | +| [Criterion 2] | [Specific target] | [Measurement method] | +``` + +Repeat for each phase. Common phase patterns: + +- **Foundation / Build / Scale**: When you need infrastructure before you can build, and proof before you can scale +- **Quick Wins / Core Transformation / Optimization**: When early momentum matters for stakeholder buy-in +- **Design / Pilot / Rollout**: When uncertainty is high and you need to test before committing + +### Map Dependencies + +Dependencies drive the critical path. 
Map them explicitly: + +**Critical Dependencies** (delay here delays everything): + +| Initiative | Depends On | Impact if Delayed | Mitigation | +|------------|------------|-------------------|------------| +| [Initiative] | [Predecessor] | [What breaks] | [What we'll do] | + +**Resource Dependencies** (shared resources across workstreams): +- [Which teams/people are shared across initiatives] + +**External Dependencies** (things outside your control): +- [Vendor deliveries, regulatory approvals, market conditions] + +### Milestone Planning + +Define the milestones that mark real progress (not just calendar dates): + +| Milestone | Target Date | Phase | Success Criteria | Dependencies | +|-----------|-------------|-------|------------------|--------------| +| M1 | [Date] | Phase 1 | [How we know we're there] | [What must be done] | +| M2 | [Date] | Phase 1 | [How we know we're there] | [M1] | +| M3 | [Date] | Phase 2 | [How we know we're there] | [M2] | + +Good milestones are: +- Observable (you can tell whether they happened) +- Meaningful (they represent real progress, not just elapsed time) +- Decision-relevant (they inform go/no-go choices) + +### Resource Requirements by Phase + +| Resource Category | Phase 1 | Phase 2 | Phase 3 | Total | +|-------------------|---------|---------|---------|-------| +| FTEs | X | X | X | X | +| Capital | $X | $X | $X | $X | +| Operating | $X | $X | $X | $X | + +--- + +## Stage 4: Implementation Plan + +The implementation plan is the most granular level. It translates the roadmap into workstreams with named owners, specific deliverables, governance mechanisms, and change management touchpoints. + +### Mobilization Plan + +What happens in the first days after approval. Don't leave a gap between "approved" and "started." 
+ +| Timeframe | Actions | Owner | +|-----------|---------|-------| +| Day 1 | Confirm sponsorship, announce program, establish program office | Sponsor / Program Lead | +| Week 1 | Staff core team, set up governance cadence, issue data requests, schedule kickoff | Program Lead | +| Month 1 | Complete current-state assessment, finalize detailed plan, launch first workstreams, deliver first steering committee update | Program Lead / WS Leads | + +### Define Workstreams + +Break the implementation into logical workstreams. Each workstream should be: +- Independently manageable (one lead, clear scope) +- Small enough to track (2-4 week deliverable cycles) +- Large enough to be meaningful (not task lists) + +| Workstream | Description | Lead | Key Deliverables | Dependencies | +|------------|-------------|------|-----------------|--------------| +| [WS 1] | [What it covers] | [Name] | [Deliverables] | [Other WS] | +| [WS 2] | [What it covers] | [Name] | [Deliverables] | [Other WS] | +| [WS 3] | [What it covers] | [Name] | [Deliverables] | [Other WS] | + +### Stakeholder Impact and Alignment + +Who is affected, what they care about, and what needs to happen to bring them along. + +| Stakeholder Group | Impact | What They Care About | Current Stance | Actions to Align | +|-------------------|--------|---------------------|----------------|------------------| +| [Group 1] | [H/M/L] | [Key concerns] | [Supportive/Neutral/Resistant] | [Specific actions] | +| [Group 2] | [H/M/L] | [Key concerns] | [Supportive/Neutral/Resistant] | [Specific actions] | + +### Change Management Touchpoints + +Map where change management activities plug into the implementation timeline. (For deep change management planning, see the change-management skill.) 
+ +| Phase | Change Activity | Timing | Owner | +|-------|----------------|--------|-------| +| Phase 1 | Stakeholder impact assessment | Week 2-3 | Change Lead | +| Phase 1 | Leadership alignment sessions | Week 3-4 | Sponsor + Change Lead | +| Phase 1 | Communication plan launch | Week 4 | Change Lead | +| Phase 2 | Training needs analysis | Start of phase | Change Lead + WS Leads | +| Phase 2 | Role transition planning | Mid-phase | HR + Change Lead | +| Phase 2 | Pilot feedback loops | Ongoing | WS Leads | +| Phase 3 | Full training rollout | Start of phase | Change Lead | +| Phase 3 | Adoption monitoring and intervention | Ongoing | Change Lead | +| Phase 3 | Sustainment handover | End of phase | Change Lead + Operations | + +### Develop Detailed Timeline + +Build the timeline phase by phase with milestones and owners: + +``` +## Phase 1: [Name] ([Duration]) +Objective: [What we achieve] + +| Milestone | Target | Dependencies | Owner | +|-----------|--------|--------------|-------| +| [M1] | [Date] | [None] | [Name] | +| [M2] | [Date] | [M1] | [Name] | + +## Phase 2: [Name] ([Duration]) +Objective: [What we achieve] + +| Milestone | Target | Dependencies | Owner | +|-----------|--------|--------------|-------| +| [M3] | [Date] | [M2] | [Name] | +| [M4] | [Date] | [M3] | [Name] | +``` + +### Critical Path + +Identify the longest dependency chain that drives the overall timeline: + +- [Activity A] -> [Activity B] -> [Activity C] -> [Final milestone] +- Any delay on the critical path directly delays the project end date +- Non-critical activities have float. Quantify it for each so you know where you have slack + +### RACI Matrix + +Define who does what. One accountable owner per deliverable. Shared accountability means no accountability. 
+ +| Activity / Deliverable | Sponsor | Program Lead | WS Lead | Team | Client SMEs | +|------------------------|---------|--------------|---------|------|-------------| +| [Deliverable 1] | A | R | C | I | C | +| [Deliverable 2] | I | A | R | R | C | +| [Key decision] | A | R | C | I | I | + +- **R** = Responsible (does the work) +- **A** = Accountable (one per activity, owns the outcome) +- **C** = Consulted (input before the decision) +- **I** = Informed (told after the decision) + +Rules: One "A" per activity. At least one "R". Minimize "C" to avoid bottlenecks. "A" and "I" should not be the same person for the same activity. + +### Resource Allocation + +| Role | Workstream | FTE | Duration | Skills Required | +|------|------------|-----|----------|----------------| +| [Role 1] | [WS 1] | X.X | [Time] | [Skills] | +| [Role 2] | [WS 2] | X.X | [Time] | [Skills] | + +Budget by workstream: + +| Workstream | Labor | External | Other | Total | +|------------|-------|----------|-------|-------| +| [WS 1] | $X | $X | $X | $X | +| [WS 2] | $X | $X | $X | $X | +| Contingency (10-15%) | | | | $X | +| **Total** | **$X** | **$X** | **$X** | **$X** | + +### Benefits Realization Tracking + +Define who owns measurement after the business case is approved. Benefits don't track themselves. + +| Benefit | Metric | Baseline | Target | Measurement Frequency | Owner | Data Source | +|---------|--------|----------|--------|-----------------------|-------|-------------| +| [Benefit 1] | [KPI] | [Current] | [Target] | [Monthly/Quarterly] | [Name] | [System/report] | +| [Benefit 2] | [KPI] | [Current] | [Target] | [Monthly/Quarterly] | [Name] | [System/report] | + +Review benefits realization at every steering committee. If benefits are tracking below plan, escalate early with a recovery plan or revised forecast. 
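The tracking rule above ("escalate early when benefits fall below plan") can be made mechanical so the steering committee review never skips it. A sketch with illustrative benefits and an assumed 85% escalation threshold:

```python
def benefits_status(actual, target, escalate_below=0.85):
    """Compare a benefit KPI to plan and flag whether to escalate.

    Returns (realization_ratio, status): 'on track' at >= 100% of plan,
    'watch' between the escalation threshold and plan, 'escalate' below it.
    """
    realization = actual / target
    if realization >= 1.0:
        status = "on track"
    elif realization >= escalate_below:
        status = "watch"
    else:
        status = "escalate"
    return realization, status

# Illustrative monthly review: benefit -> (actual to date, plan to date), $K
tracked = {
    "Churn reduction savings": (420, 400),
    "Procurement savings":     (310, 380),
    "Productivity gain":       (150, 250),
}
for benefit, (actual, plan) in tracked.items():
    pct, status = benefits_status(actual, plan)
    print(f"{benefit}: {pct:.0%} of plan -> {status}")
```

The threshold is a governance choice, not a constant; set it per benefit based on how much slippage the business case can absorb before NPV goes negative.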
+ +### Governance Structure + +**Meeting Cadence**: + +| Forum | Frequency | Attendees | Purpose | Duration | +|-------|-----------|-----------|---------|----------| +| Steering Committee | Bi-weekly | Sponsor, Program Lead, WS Leads | Decisions, escalations, benefits tracking | 60 min | +| Program Review | Weekly | Program Lead, WS Leads | Progress, risks, dependencies | 45 min | +| Workstream Standup | 2-3x/week | WS Lead, Team | Task coordination, blockers | 15 min | + +**Phase Gates** (prevent premature transitions): + +| Gate | Transition | Entry Criteria | Exit Criteria | Approver | +|------|-----------|----------------|---------------|----------| +| G1 | Foundation -> Build | [Requirements defined, team staffed] | [Assessment complete, design approved] | [Sponsor] | +| G2 | Build -> Deploy | [Solutions developed, tested] | [Pilot successful, rollout plan approved] | [Sponsor] | +| G3 | Deploy -> Operate | [Full rollout complete] | [Adoption targets met, handover done] | [Sponsor] | + +**Escalation Path**: + +- **Level 1**: WS Lead resolves within 24 hours +- **Level 2**: Program Lead resolves within 48 hours +- **Level 3**: Steering Committee resolves at next meeting (or emergency session) + +**Change Control**: + +| Change Type | Approval Required | Process | +|-------------|-------------------|---------| +| Minor scope change | Program Lead | Document, assess impact, approve/reject | +| Major scope change | Steering Committee | Formal change request, impact analysis, Steering Committee decision | +| Timeline shift (< 2 weeks) | Program Lead | Update plan, notify stakeholders | +| Timeline shift (> 2 weeks) | Steering Committee | Root cause analysis, recovery plan, Steering Committee approval | +| Budget variance (< 10%) | Program Lead | Document, adjust within contingency | +| Budget variance (> 10%) | Steering Committee | Business case for additional funding | + +### Risk and Contingency + +| Risk | Impact | Probability | Mitigation | Owner | 
+|------|--------|-------------|------------|-------| +| [Risk 1] | [H/M/L] | [H/M/L] | [Specific action] | [Name] | +| [Risk 2] | [H/M/L] | [H/M/L] | [Specific action] | [Name] | + +Contingency plans for high-impact risks: +- If [trigger condition], then [specific response] +- If [trigger condition], then [specific response] + +--- + +## Program Recovery + +Sometimes the plan isn't delayed... it's fundamentally off track. The timeline is fiction, the assumptions were wrong, the workstreams are producing the wrong things, or the organization has changed around the plan. This requires recovery, not just replanning. + +### Recognizing "Off Track" vs. "Delayed" + +A delayed program is executing the right plan slowly. A program that needs recovery is executing the wrong plan, or the right plan in the wrong conditions. + +| Signal | Delayed (Replan) | Off Track (Recovery) | +|--------|-----------------|---------------------| +| Timeline | Milestones slipping 2-4 weeks | Milestones slipping months, or milestones no longer meaningful | +| Scope | Original scope is right, execution is slow | Scope was wrong, or the problem has changed since scoping | +| Stakeholders | Same people, lower energy | Sponsor has changed, key stakeholders have disengaged, or new stakeholders have emerged who weren't in the plan | +| Team | Team is struggling but on-mission | Team doesn't believe in the plan, or critical capabilities are missing | +| Assumptions | Assumptions mostly held, some variance | Multiple foundational assumptions were wrong | +| Deliverables | Right outputs, behind schedule | Deliverables are being produced but nobody is using them, or they don't solve the problem | + +### When to Revise the Plan vs. Escalate That Assumptions Were Wrong + +This is a judgment call, but there are clear thresholds. 
+ +**Revise the plan when:** +- The variance is recoverable within existing authority and budget (e.g., reprioritize workstreams, adjust sequencing, add two weeks to a phase) +- The core hypothesis is still right; execution just needs adjustment +- The sponsor and steering committee would agree with the adjustment if informed +- You can articulate the revised plan and its rationale in a steering committee update + +**Escalate when:** +- The business case no longer holds under current conditions (e.g., the cost has doubled, the benefits have halved, or the timeline pushes past the window of relevance) +- A foundational assumption was wrong and the plan was built on it (e.g., "we assumed the legacy system could be migrated in 6 months; it cannot be migrated at all") +- The sponsor needs to make a decision that's outside the program team's authority: kill it, restructure it, or significantly increase the investment +- You've already revised the plan once for the same issue and it's recurring + +**How to escalate well:** +- State the problem factually. "Three of our five foundational assumptions have not held" is better than "things aren't going well" +- Quantify the impact. What does this mean for timeline, cost, and benefits? +- Present options, not just the problem. "We can (a) restructure around the new reality at +$X and +Y months, (b) reduce scope to what's achievable within original constraints, or (c) pause and reassess whether the initiative is still warranted" +- Recommend one option and say why +- Don't wait. The worst escalations are the ones that come 3 months late because the team was hoping things would improve + +### Running a Program Recovery + +If the program needs genuine recovery (not just a revised timeline), treat it as a distinct exercise. + +**Step 1: Stop and assess (1-2 weeks)** + +Pause forward progress on workstreams that aren't delivering value. Don't pause workstreams that are working... recovery doesn't mean grinding everything to a halt. 
+ +Conduct an honest current-state assessment: +- What has been delivered and is it usable? +- Which assumptions held and which didn't? +- What has changed in the business environment since the plan was made? +- Where is the team's energy and capability? +- What does the sponsor actually need now (which may be different from what was scoped)? + +**Step 2: Redefine success (1 week)** + +With the sponsor, answer: given what we now know, what does a successful outcome look like? This may be dramatically different from the original business case. A program scoped to transform five business units may need to become a program that successfully transforms two. + +**Step 3: Rebuild the plan (1-2 weeks)** + +Using the same framework from Stages 3-4 of this skill, but with key differences: +- Start from what's been delivered, not from zero. Sunk costs are sunk, but completed work may still have value +- Be ruthless about scope. Cut anything that isn't essential to the redefined success criteria +- Build in more contingency than the original plan (the original plan already proved too optimistic) +- Shorten the planning horizon. A recovered program should plan in 90-day increments with explicit go/no-go gates + +**Step 4: Restabilize governance and team** + +- Reconfirm (or change) the sponsor and steering committee. If the sponsor has lost confidence, the program needs a new one +- Assess whether the team has the right capabilities for the revised plan. Recovery often requires different skills than the original effort +- Reset cadences and escalation paths. The old ones clearly weren't catching problems early enough + +**Common recovery traps:** +- Adding resources to a plan that's conceptually wrong (throwing people at the wrong problem) +- "Recovery theater" where you replan on paper but change nothing about how work is done +- Blaming the team when the plan was the problem +- Treating recovery as shameful rather than as responsible program management. 
Good programs recover; bad ones pretend everything is fine until they fail + +--- + +## Connecting the Stages + +Each stage feeds the next: + +- **Options -> Business Case**: The recommended option becomes the "proposed solution" in the business case. Option evaluation criteria inform risk assessment. +- **Business Case -> Roadmap**: Approved investment envelope constrains the roadmap. Benefits realization timeline shapes phasing. +- **Roadmap -> Implementation Plan**: Phase structure becomes the implementation timeline. Milestones become governance checkpoints. + +--- + +## Key Principles + +- **Quantify everything.** "Significant savings" convinces no one. "$4.2M annually" does. +- **Be honest about trade-offs.** Every option has downsides. Hiding them destroys credibility. +- **Tag your confidence.** Distinguish "high confidence, based on actuals" from "estimate, +/- 30%." Decision-makers need to know what's solid and what's a bet. +- **One accountable owner per deliverable.** Shared accountability is no accountability. +- **The executive summary must stand alone.** It's often the only thing that gets read. +- **Build in contingency.** 10-15% on both timeline and budget as a starting point; adjust based on project complexity, uncertainty, and organizational risk appetite. Things will go wrong. +- **Include quick wins early.** They build momentum and stakeholder confidence. +- **The critical path drives everything.** Know it, protect it, track it. +- **Phase gates prevent premature transitions.** Enforce them even when there's pressure to move faster. +- **Match ambition to organizational capacity.** An aggressive plan that an organization can't absorb is worse than a modest plan it can execute. +- **Always include "do nothing" as a baseline.** Model it with the same rigor as active options. +- **Connect every initiative to a strategic objective.** If you can't, question why it's in the plan. 
+- **End with synthesis, not summary.** State the single most important implication, not a recap of what you covered. diff --git a/management-consulting/skills/org-design/SKILL.md b/management-consulting/skills/org-design/SKILL.md new file mode 100644 index 00000000..98e1f185 --- /dev/null +++ b/management-consulting/skills/org-design/SKILL.md @@ -0,0 +1,411 @@ +--- +name: org-design +description: Design organizational structures, operating models, and role frameworks. Use when restructuring organizations, defining reporting relationships, designing job architectures, aligning org design with strategy, planning spans of control, or building transition roadmaps for new organizational models. +--- + +# Organizational Design + +Structure organizations to execute their strategy through operating models, reporting structures, role frameworks, and transition plans. Org design is a people-affecting discipline that requires connecting structure to strategy and careful change management. + +## Before You Begin + +Org design has direct people impact, so working from accurate information matters. Ask for: +- What is the strategic driver for the redesign (growth, cost reduction, M&A integration, capability gap)? +- What is the current structure (headcount, layers, key functions)? +- What has been tried before, and what constraints exist (union agreements, regulatory requirements, cultural norms)? +- Don't generate specific headcount numbers, reporting lines, or span-of-control data. Present benchmarks as typical ranges and ask the user to confirm how their organization compares. + +--- + +## The Org Design Process + +### Step 1: Assess Strategic Requirements + +Before drawing boxes and lines, define what the organization must do to execute its strategy. Structure follows strategy, not the other way around. + +**Strategic alignment analysis:** + +- **Strategic priorities**: What 3-5 things must the organization do well to win? 
+- **Capability requirements**: What capabilities does each priority demand? +- **Critical success factors**: What must go right for each capability to work? + +**Operating model choices** (the big decisions that shape everything downstream): + +| Dimension | Question to Answer | +|---|---| +| Vertical integration | What do we do ourselves vs. outsource? | +| Centralization | What decisions sit at the center vs. the edge? | +| Geographic structure | How do we organize across locations? | +| Product/service alignment | Do we organize around what we sell or who we sell to? | +| Customer segmentation | Do customer segments warrant separate structures? | +| Agility model | Traditional hierarchy, agile pods, or hybrid? | + +**Customer segmentation as a design input:** + +Don't assume segments are pre-defined. The segmentation model is itself a structural decision. Common approaches: + +| Basis | When It Works | Structural Implication | +|---|---|---| +| Size (enterprise / mid-market / SMB) | Segments have fundamentally different buying processes and service needs | Separate sales motions, possibly separate delivery teams | +| Industry vertical | Deep domain knowledge drives win rates or delivery quality | Vertical P&Ls, industry practice leads | +| Needs-based (complex / standard / self-serve) | Product/service complexity varies more than industry | Tiered service model, different cost-to-serve structures | +| Geography | Regulatory, language, or cultural differences dominate | Regional or country-level autonomy | +| Value / profitability | 80/20 dynamics are extreme | Key account structures, differentiated service levels | + +Test the segmentation: if two proposed segments would be served by the same team, using the same processes, with the same economics, they're not separate segments structurally. Merge them. + +**Business model analysis:** + +- How does the organization create value? +- What are the major cost drivers? Which are structural vs. operational? 
+- What key partnerships or ecosystem relationships constrain or enable structural choices? + +**Capability gap assessment:** + +For each required capability, assess current maturity (1-5) against target maturity. The biggest gaps become the structural priorities. A capability gap that sits at 2 today but needs to be at 5 is a structural problem, not a training problem. + +### Step 2: Analyze Current State + +Document and diagnose the existing organization honestly. Most redesigns fail because they don't understand what's actually happening (vs. what the org chart says). + +**Structural baseline:** + +- Structure type: Functional, matrix, divisional, network, hybrid +- Number of layers from CEO to front line +- Average span of control by level +- Total headcount by function/division + +**7S assessment** (McKinsey's framework, useful because it forces you beyond just structure): + +| Element | What to Assess | +|---|---| +| Strategy | Is the strategy clear and understood? | +| Structure | Does the reporting structure support strategy execution? | +| Systems | Do processes, tools, and IT support the work? | +| Shared values | Is there alignment on culture and purpose? | +| Style | How do leaders actually lead? | +| Staff | Do we have the right people in the right roles? | +| Skills | Do we have the capabilities we need? | + +Rate each 1-5 for alignment. The misaligned elements are your design targets. + +**Pain point diagnosis:** + +Identify the actual problems people experience, not just the structural aesthetics you'd prefer. Common symptoms of structural misalignment: + +- Slow decision-making (too many layers, unclear authority) +- Duplication of effort (overlapping mandates) +- Silos preventing collaboration (wrong integration mechanisms) +- Key work falling between the cracks (structural white space) +- Talent bottlenecks (wrong spans, missing career paths) + +**Process overlay:** + +Map how work actually flows across the current structure. 
Where are the handoffs? Where do things slow down? Where do workarounds exist? The informal organization often matters more than the formal one. + +### Step 3: Design the Future State + +Start with design principles, then evaluate structural options, then detail the selected design. + +**Design principles** (5-7 rules that guide every structural decision): + +Good design principles are specific enough to resolve trade-offs. "Customer-centric" is too vague. "Customer segment leaders have P&L authority and direct control over product, sales, and service for their segment" resolves actual decisions. + +Examples of principles that actually do work: + +- Decisions are made at the lowest level with adequate information +- No more than 6 layers from CEO to front line +- Every role has a clear single point of accountability +- Shared services consolidate where scale matters; embed where speed matters +- Digital and technology capability is built into value streams, not bolted on + +**Structural options to evaluate:** + +| Structure Type | Best When | Watch Out For | +|---|---|---| +| Functional | Single product/service, scale matters, expertise depth needed | Silos, slow cross-functional work | +| Divisional (product) | Multiple distinct products, end-to-end accountability needed | Duplication, sub-scale functions | +| Divisional (geographic) | Local market differences matter, regulatory variation | Inconsistency, duplication | +| Divisional (customer) | Distinct customer segments with different needs | Complexity if segments overlap | +| Matrix | Two dimensions equally important (e.g., product AND geography) | Dual reporting confusion, slow decisions, conflict | +| Network/agile | Fast-moving markets, innovation priority, knowledge work | Coordination cost, governance gaps, career path ambiguity | +| Platform + value streams | Digital businesses, shared infrastructure with diverse products | Platform team becomes bottleneck | + +For each viable option, assess: +- How well 
it supports each strategic priority +- Implementation complexity and transition risk +- Cultural fit (how far is this from how people work today?) + +**Making matrix structures work:** + +Matrix is the most common structural choice and the most commonly botched. If two dimensions are genuinely equally important and you can't avoid it, here's how to make it survivable: + +| Design Element | What to Get Right | +|---|---| +| Primary vs. secondary axis | One axis owns the P&L and performance rating. The other influences. Pretending both are equal guarantees neither works. | +| Decision rights | Map the 20-30 most common decisions explicitly. For each: who decides, who inputs, who's informed. If this feels tedious, it's because it is. Do it anyway. | +| Conflict escalation | Pre-agree escalation paths and timelines. "Unresolved conflicts escalate to the COO within 48 hours" beats "work it out." | +| Relationship, not just structure | Matrix only works when the people at the intersections actively build relationships. Budget time for cross-axis team-building, joint planning, co-located offsites. | +| Performance management | Both axes provide input to performance reviews. Weight the primary axis at 60-70%, secondary at 30-40%. The individual's manager (primary axis) owns the final rating. | +| Resource allocation | Create a resource allocation forum that meets monthly, not ad hoc. Matrix fights are usually resource fights. Give them a structured venue. | + +Signs the matrix is failing: decisions take twice as long as before, people attend twice as many meetings, the same issue gets escalated repeatedly, managers spend more time in coordination meetings than with their teams. If you see these, simplify. + +**Cultural context in org design:** + +Org design that ignores cultural norms produces structures that look right on paper and fail in practice. 
This matters most when designing across geographies, post-merger, or when the client's culture differs from the consulting team's assumptions. + +Before designing across cultural contexts, assess these dimensions for each operating environment: + +| Dimension | Questions to Ask | What the Answers Tell You | +|-----------|-----------------|--------------------------| +| Decision-making | How are major decisions actually made? Is authority held by individuals or shared through consultation? Do decisions stick once made, or do they get revisited? | Whether to design for single decision-makers or consensus mechanisms. Slower consensus-based decisions often produce faster adoption. | +| Communication and feedback | Do people raise concerns publicly or privately? Is direct feedback welcomed or seen as confrontational? How much context is assumed vs. stated explicitly? | Whether decision rights documents and RACI charts will function as written, or whether informal relationship networks carry more weight. Shapes governance forum design. | +| Hierarchy and status | How many levels does information travel through? Is skip-level access normal or a serious breach? Do titles carry social meaning beyond the org chart? | Whether flat structures and wide spans will be accepted or destabilizing. Removing layers in a hierarchy-sensitive context can undermine the structure you're trying to build. | +| Accountability | Are people measured and rewarded as individuals or as teams? Is tenure and loyalty valued alongside performance? | Whether individual KPIs and incentive structures will drive the intended behavior, or whether team-based metrics are more culturally appropriate. | +| Conflict resolution | Is open disagreement acceptable in meetings? How is "saving face" managed? Are escalation paths used or avoided? | Whether governance forums can include open debate, or whether mediation mechanisms and private channels are needed to surface real disagreements. 
| + +**Practical implications:** +- Don't assume one structural model works across all operating contexts. What looks like dysfunction from one cultural perspective may be effective practice in another. +- Post-merger designs across cultures need explicit bridging mechanisms: joint planning sessions, cultural onboarding, and patient timeline expectations +- When in doubt, observe how the current organization actually makes its best decisions and design to strengthen that pattern, not replace it with a textbook model + +**Handling the predetermined answer:** + +The most common org design engagement: the CEO (or senior leader) has already decided on the structure and has hired you to validate it. Sometimes they'll say so directly. More often, they'll frame it as "we want your objective assessment" while making their preferred answer obvious. + +This is not inherently a problem. Leaders often have good instincts about structure, and their buy-in is essential for implementation. The problem is when the predetermined answer is wrong, or when the process of validating it skips the work needed to make any answer succeed. + +*How to recognize it:* +- The CEO presents "the options" but one option is clearly developed in more detail than the others +- Certain structural choices are described as "already decided" before the analysis +- You're told the answer but asked to "build the case" or "stress-test it" +- Stakeholder interviews reveal that the organization already knows the answer because the CEO has socialized it + +*How to handle it well:* + +1. **Acknowledge the hypothesis, don't fight it.** "You've clearly thought about this. Let us pressure-test it against the strategic requirements and implementation realities, and either confirm it's the right answer or identify where it needs refinement." This frames your role as adding rigor, not overriding judgment. + +2. 
**Do the work anyway.** Even if the answer is predetermined, the analysis that supports it is essential for implementation. Stakeholder mapping, capability gaps, transition planning, incentive alignment... all of this is needed whether the structure was your idea or theirs. The CEO needs you for the execution blueprint, not the answer. + +3. **If the predetermined answer is right (or close enough):** Confirm it with evidence, identify the 2-3 refinements that would make it stronger, and focus your energy on the transition plan. This is the most common outcome and a perfectly good engagement. + +4. **If the predetermined answer has a significant flaw:** Present the flaw as a risk to successful implementation, not as a criticism of their judgment. "This structure achieves X and Y, which are your top priorities. The risk is Z, which will create [specific problem]. Here are three ways to mitigate that risk within your preferred structure." Give them a way to adjust without losing face. + +5. **If the predetermined answer is fundamentally wrong:** This is rare but happens. Have the conversation privately with the CEO first, never in a steering committee. Use data and concrete examples: "In three comparable restructurings we've seen, this approach led to [specific outcome]. Here's what happened and why." Then propose an alternative that achieves what they were trying to achieve with their preferred structure. They chose that structure for a reason; understand the reason and solve for it differently. + +*What NOT to do:* +- Don't pretend you don't see the predetermined answer and run a "neutral" process that arrives at it by coincidence. Everyone sees through this and it wastes time. +- Don't capitulate and rubber-stamp an answer you believe is wrong. Your obligation is to give honest advice; that's what they're paying for. +- Don't present your alternative as the "right" answer and theirs as "wrong." Frame it as trade-offs and risks. 
+ +**Detailing the selected design:** + +Once a structure is selected, define: + +- **Top-level architecture**: Major units, their mandates, and reporting lines +- **Structural dimensions**: Target layers, spans of control, authority distribution +- **Integration mechanisms**: How units coordinate (cross-functional teams, shared processes, liaison roles, communities of practice) +- **Governance**: Decision rights, escalation paths, committee structures +- **Work model**: Hybrid, remote-first, or co-located, and how this interacts with structure + +### Step 4: Define Roles and Job Architecture + +Structure without clear roles is just boxes on paper. This is where design becomes operational. + +**Key role definitions** (for every role that matters structurally): + +- **Purpose**: Why this role exists (one sentence) +- **Key accountabilities**: The 5-7 things this role is responsible for delivering +- **Decision authority**: What decisions this role makes independently vs. escalates +- **Relationships**: Reports to, peers with, manages, key stakeholders +- **Span of control**: Direct reports (target range) + +**Span of control guidance:** + +These ranges reflect common org design guidance and vary by organizational culture, technology, and management practices. + +| Work Type | Typical Span | Rationale | +|---|---|---| +| Routine, standardized work | 10-15 direct reports | Work is predictable, less supervision needed | +| Knowledge work, moderate complexity | 6-10 direct reports | Balance of coaching and autonomy | +| Complex, varied, senior work | 4-7 direct reports | High interaction needed | +| Highly creative, R&D, transformation | 3-5 direct reports | Intensive collaboration required | + +Spans outside these ranges usually signal a problem: too narrow means unnecessary layers; too wide means insufficient oversight or development. 
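The span-layer trade-off is mechanical, and making the arithmetic explicit helps when debating targets. A minimal sketch, assuming (unrealistically) a uniform average span at every level; real organizations vary spans by level and function, so treat the output as a rough bound, not a design:

```python
import math

def implied_layers(headcount: int, avg_span: float) -> int:
    """Rough number of reporting levels below the top role for a given
    headcount, assuming every manager carries the same average span.
    Illustrative only: real spans are uneven across the organization."""
    if headcount <= 1:
        return 0
    # Smallest L such that avg_span ** L >= headcount
    return math.ceil(math.log(headcount) / math.log(avg_span))

# The same 5,000-person organization under different average spans
for span in (5, 8, 12):
    print(f"avg span {span}: ~{implied_layers(5000, span)} layers")
```

Widening the average span from 5 to 12 in a 5,000-person organization removes roughly two reporting levels, which is why span targets and layer targets have to be set together, not independently.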
+ +**Job family framework:** + +Group roles into job families with consistent leveling: + +- **Level definitions**: What scope, complexity, and autonomy look like at each level +- **Competency expectations**: What skills and behaviors are required at each level +- **Career pathways**: Vertical progression, lateral moves, diagonal (cross-functional) moves + +Career pathways matter for retention. If the new structure eliminates career paths people were counting on, you'll lose people you didn't intend to. + +**Compensation and incentive realignment:** + +Structural change without incentive realignment is an invitation for people to optimize the old structure while sitting in the new one. This is where most redesigns quietly fail. + +| Structural Change | Incentive Implication | What to Redesign | +|---|---|---| +| Product P&Ls to segment P&Ls | Sales comp tied to product revenue no longer aligns | Rebase sales targets to segment revenue; transition comp over 1-2 cycles with hold-harmless provisions | +| Centralize shared services | Functional leaders lose headcount, gain efficiency mandate | Tie incentives to service levels, cost-per-transaction, and internal NPS rather than headcount | +| Create cross-functional teams | Individual function metrics create perverse incentives | Add team-level metrics (30-40% of variable comp) alongside individual metrics | +| Flatten layers | Removed management roles leave people without clear next step | Create expert/principal tracks with compensation parity to management | +| Move to matrix | Two bosses, one comp review creates conflict | Primary axis owns comp decision; secondary axis provides structured input with defined weight | + +Principles for redesigning incentives during restructuring: + +- Change incentives at the same time as structure, not later. 
"We'll sort out comp in Phase 2" means Phase 1 won't work +- Use hold-harmless or transition guarantees (typically 6-12 months) to reduce resistance +- If the new structure creates genuinely lower-earning roles, acknowledge it and manage it; don't pretend it's not happening +- Test the new incentive model against 3-5 scenarios of actual behavior to check for gaming + +### Step 5: Plan the Transition + +The best org design fails if the transition is botched. People experience restructuring as personal, not organizational. + +**Phased implementation:** + +**Phase 1: Foundation (Weeks 1-4)** +- Finalize design and get leadership sign-off +- Develop detailed role profiles +- Assess current incumbents against new roles +- Identify people implications (moves, redundancies, new hires) + +**Phase 2: Communication (Weeks 5-8)** +- Align leadership team first (they must be advocates, not passengers) +- Train managers on the rationale, the "why," and how to have conversations +- Communicate to all employees with clarity and honesty +- Run Q&A sessions; don't hide from hard questions + +**Phase 3: Implementation (Weeks 9-16)** +- Match people to roles (selection processes where needed) +- Execute transitions in coordinated waves +- Launch new teams with clear charters +- Update processes, systems, and governance + +**Phase 4: Stabilization (Weeks 17-24)** +- Monitor for emerging issues +- Provide development support for people in stretched roles +- Fine-tune based on reality vs. design intent +- Measure against success metrics + +**Workforce transition mechanics:** + +This is the hardest part of any restructure and the part most often handled badly. Be specific. + +*Role matching and selection:* + +| Scenario | Approach | +|---|---| +| Role exists in new structure, incumbent is a clear fit | Direct slot ("you're confirmed in this role"). Do this first and fast to reduce anxiety for the majority. 
| +| Role exists, multiple credible candidates | Competitive selection process: defined criteria, structured interviews, transparent timeline. Criteria must be published before the process starts. | +| Role is eliminated, no equivalent exists | Redundancy. Be honest about it. Don't dress up a redundancy as "we're exploring options" for weeks. | +| New role, no internal candidate | External hire. Consider whether any displaced employees could be developed into it (redeployment). | + +*Selection criteria that actually work:* + +Don't just assess "who's good." Define 4-6 criteria specific to the new role's requirements, weighted by importance. Typical criteria: strategic capability for the new mandate (30%), leadership track record (25%), stakeholder relationships critical to the new role (20%), adaptability and learning agility (15%), team composition needs / diversity (10%). + +*Redeployment before redundancy:* + +Before making anyone redundant, run a redeployment process: +1. Skills inventory of all displaced employees +2. Match against all open roles in the new structure (not just their current function) +3. Assess with a "could succeed with 3-6 months of development" threshold, not "ready now" +4. Offer retraining or temporary assignments where the gap is closable +5. For those with no viable match, provide outplacement support, extended notice, and fair severance + +*Transition timeline for individuals:* + +Everyone should know within 2 weeks of the announcement: (1) whether their role exists in the new structure, (2) if it doesn't, what the process and timeline are, (3) when they'll have a final answer. Leaving people in limbo is the single most damaging thing you can do. 
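The weighted criteria above translate directly into a scoring sheet, which keeps competitive selection consistent across candidates and panels. A minimal sketch; the criteria and weights are the illustrative ones from this section, and the candidate's 1-5 ratings are invented for the example:

```python
# Illustrative scoring sheet for a competitive selection process.
# Criteria and weights mirror the example above; candidate ratings
# are hypothetical. Publish criteria before scoring anyone.
CRITERIA = {
    "strategic capability": 0.30,
    "leadership track record": 0.25,
    "stakeholder relationships": 0.20,
    "adaptability": 0.15,
    "team composition needs": 0.10,
}

def weighted_score(ratings: dict) -> float:
    """Weighted average of 1-5 ratings across the published criteria."""
    assert set(ratings) == set(CRITERIA), "rate every published criterion"
    return round(sum(CRITERIA[c] * r for c, r in ratings.items()), 2)

candidate_a = {
    "strategic capability": 4,
    "leadership track record": 5,
    "stakeholder relationships": 3,
    "adaptability": 4,
    "team composition needs": 3,
}
print(weighted_score(candidate_a))
```

The mechanics matter less than the discipline: every panelist rates against the same published criteria, and the weights are fixed before the first interview, not adjusted afterward to fit a preferred outcome.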
+ +**Risk mitigation:** + +Common transition risks and how to handle them: + +| Risk | Mitigation | +|---|---| +| Key talent leaving during uncertainty | Early, honest communication; retention arrangements for critical roles | +| Productivity dip during transition | Phase the change; don't reorganize everything simultaneously | +| Manager resistance to reduced scope | Involve them in design; provide alternative career paths | +| Cultural clash in merged teams | Invest in team-building; don't assume culture just happens | +| Loss of institutional knowledge | Document critical processes; use overlap periods for knowledge transfer | + +**Success metrics:** + +Measure whether the redesign is working: + +- Role clarity scores (survey-based, within 3 months) +- Decision speed (time from request to decision, before and after) +- Employee engagement (within 6 months) +- Cross-functional collaboration effectiveness +- Process efficiency improvements +- Leadership effectiveness ratings + +### Step 6: Systems and Process Redesign + +Structure without process and system changes is just new names on old ways of working. Identify every process and system that assumes the old structure. 
+ +**Critical process changes:** + +| Process Area | What Breaks in the Old Model | What to Redesign | +|---|---|---| +| Financial reporting / P&L | Revenue and cost allocations don't map to new units | Rebuild P&L structure, reallocate shared costs, update transfer pricing | +| CRM and sales processes | Territories, account ownership, opportunity routing all assume old structure | Reassign accounts, update routing rules, retrain on new coverage model | +| Performance management | Goals, review cycles, and calibration groups reflect old reporting | Reset goals to new mandate within 30 days; recalibrate at 6 months | +| Budgeting and planning | Budget owners, approval authorities, and cost centers are wrong | Recut budgets to new structure; this often takes 2-3 months and needs finance partnership | +| HR systems | Job codes, org hierarchy, compensation bands need updating | Batch update; coordinate with payroll and benefits | +| Communication channels | Distribution lists, Slack channels, SharePoint sites, meeting invites | Update within first week; stale channels perpetuate old structure | + +**Sequencing process changes:** + +Not everything can change on Day 1. Prioritize: +1. **Week 1**: Communication channels, reporting lines, meeting cadences +2. **Weeks 2-4**: CRM and account ownership, performance goals +3. **Months 1-3**: Financial reporting, budgets, HR systems +4. **Months 3-6**: Full P&L restructuring, incentive plan changes + +The gap between "new structure announced" and "systems actually reflect new structure" is where most pain lives. Minimize it. + +### Step 7: Establish Governance + +The new organization needs governance to operate, not just an org chart. 
+ +**Decision rights framework:** + +For key decision categories, define who: +- **Decides**: Has final authority +- **Approves**: Must sign off before execution +- **Recommends**: Provides input that shapes the decision +- **Informs**: Needs to know the outcome +- **Executes**: Carries out the decision + +(This is RAPID or RACI by another name. The framework matters less than actually clarifying who does what.) + +**Review and adaptation:** + +Org design is not a one-time event. Build in structural review checkpoints: +- 90-day post-implementation review +- 6-month effectiveness assessment +- Annual strategic alignment check + +--- + +## Key Principles + +- Structure follows strategy. If the strategy changes, the structure probably needs to change too. +- Every structural choice is a trade-off. Centralizing gains efficiency but loses responsiveness. Decentralizing gains speed but risks inconsistency. Name the trade-off explicitly. +- Org design is about work, not people. Design the structure for the work that needs to happen, then fit people to roles. +- Spans of control should be driven by the nature of work, not by status or seniority. +- The informal organization is as real as the formal one. Design with awareness of how work actually flows. +- Org design affects people deeply. Communicate honestly, treat people with respect, and don't pretend difficult changes are painless. +- Simple structures outperform complex ones. If you need a 20-page document to explain how the matrix works, the matrix doesn't work. +- Design for the next 3-5 years, not for today's problems. But don't design for a future that may never arrive. +- Incentives drive behavior more reliably than structure does. If the incentives contradict the structure, the incentives win. +- Build in mechanisms for adaptation. The organization that can restructure quickly beats the one with the perfect structure. +- Culture eats structure for breakfast. 
A technically optimal design that violates how people actually work and decide will be routed around, not adopted. +- When the leader has already decided, your value is in stress-testing, refining, and planning the transition... not in producing a predetermined answer or fighting a losing battle. diff --git a/management-consulting/skills/process-excellence/SKILL.md b/management-consulting/skills/process-excellence/SKILL.md new file mode 100644 index 00000000..554531e1 --- /dev/null +++ b/management-consulting/skills/process-excellence/SKILL.md @@ -0,0 +1,436 @@ +--- +name: process-excellence +description: Apply Lean Six Sigma methodology for process improvement and operational excellence. Covers the full DMAIC cycle (Define, Measure, Analyze, Improve, Control), value stream mapping, process mining, waste identification, root cause analysis, and sustained performance control. Use when analyzing business processes, identifying inefficiencies, or designing improved workflows. +--- + +# Process Excellence + +Apply Lean Six Sigma methodology to analyze, improve, and control business processes. Lean eliminates waste and improves flow. Six Sigma reduces variation and defects. Together they drive both efficiency and effectiveness. + +## Before You Begin + +Process improvement requires real process data. Ask for it rather than building on assumptions: +- What process is in scope, and what are its boundaries (start point, end point)? +- What metrics exist today (cycle time, error rates, volumes, cost per transaction)? +- What has been tried before to improve this process? +- Present industry benchmarks as reference points for comparison, not as targets for the user's specific process. When using numbers the user hasn't provided, flag them: "I'm using the median industry benchmark of X days for [process type]. How does your actual cycle time compare?" + +## Lean vs. Six Sigma: When to Use Which + +The two methodologies solve different problems. 
Choosing the wrong lens wastes time. + +| Dimension | Lean Focus | Six Sigma Focus | +|-----------|-----------|----------------| +| Core problem | Waste and flow | Variation and defects | +| Primary question | "Why does this take so long?" | "Why is the output inconsistent?" | +| Typical symptoms | Long cycle times, excess WIP, many handoffs, waiting | High defect rates, rework, unpredictable output quality | +| Key tools | Value stream mapping, 5S, kanban, pull systems | Control charts, hypothesis testing, DPMO, Cp/Cpk | +| Best for | Service processes, order fulfillment, onboarding, approval chains | Manufacturing quality, transaction accuracy, compliance processes | + +**Use Lean when** the process takes too long, costs too much, or has too many steps. The problem is waste and flow. + +**Use Six Sigma when** the process produces inconsistent results, high defect rates, or unpredictable output. The problem is variation. + +**Use both when** (most common in practice) you have flow problems AND quality problems. Start with Lean to simplify, then apply Six Sigma to stabilize what remains. + +## DMAIC Methodology + +DMAIC (Define, Measure, Analyze, Improve, Control) is the structured approach for improving existing processes. Each phase has specific deliverables and gate criteria before moving to the next. + +### Define + +Clearly articulate the improvement opportunity before jumping to solutions. + +**Problem statement**: Quantify the current problem in terms of cycle time, cost, quality, or customer satisfaction. A vague problem statement produces vague solutions. + +**Scope**: Define what's in scope and what's not. Process improvement projects that try to fix everything fix nothing. + +**Success criteria**: Establish measurable targets for each key metric. Include current state, target state, and the percentage improvement expected. 
+ +**Project charter elements**: +- Problem statement with quantified impact +- Process boundaries (start point, end point) +- Success metrics with baselines and targets +- Timeline for each DMAIC phase +- Team: process owner, sponsor, project lead, team members + +**Project charter template**: + +``` +PROJECT CHARTER: [Process Name] Improvement + +Problem Statement: +[Specific, quantified description of the problem] +Example: "Order-to-cash cycle time averages 45 days vs. industry benchmark +of 15 days, trapping ~$20M in working capital and generating $2.1M/year in +excess financing costs." + +Scope: + In scope: [Start point] to [End point] + Out of scope: [Explicitly excluded elements] + +Success Metrics: +| Metric | Baseline | Target | Improvement | +|------------------|----------|----------|-------------| +| Cycle time | 45 days | 20 days | -56% | +| DSO | 62 days | 38 days | -39% | +| First pass yield | 72% | 95% | +32% | + +Timeline: + Define: Weeks 1-2 Measure: Weeks 3-4 Analyze: Weeks 5-6 + Improve: Weeks 7-10 Control: Weeks 11-12 + +Team: + Sponsor: [Name, Title] + Process Owner: [Name, Title] + Project Lead: [Name, Title] + Team Members: [Names] +``` + +**Digital context**: Assess automation potential, current level of process digitization, data availability, and RPA opportunity. This shapes the analysis approach and solution space. + +### Measure + +Collect baseline data and map the process as it actually operates (not as documentation says it should). + +**Process mapping**: Create a SIPOC diagram (Suppliers, Inputs, Process, Outputs, Customers) to establish boundaries, then map the detailed process flow with decision points and handoffs. + +**Baseline metrics**: For each key metric, document the measurement method, sample size, current baseline, and sigma level. + +**Sigma level calculation**: Sigma level translates defect rates into a universal quality metric. 
Here's how to calculate it: + +``` +Step 1: Define what constitutes a "defect" (any output not meeting specification) +Step 2: Count defects and opportunities + + DPMO = (Number of Defects / Total Opportunities) x 1,000,000 + +Step 3: Convert DPMO to sigma level: + + | DPMO | Sigma Level | Yield | Plain English | + |-----------|-------------|---------|---------------------------| + | 691,462 | 1.0 sigma | 30.9% | Barely functioning | + | 308,538 | 2.0 sigma | 69.1% | Poor | + | 66,807 | 3.0 sigma | 93.3% | Average | + | 6,210 | 4.0 sigma | 99.4% | Good | + | 233 | 5.0 sigma | 99.98% | Very good | + | 3.4 | 6.0 sigma | 99.9997%| World class | + +Example: Invoice processing + - 10,000 invoices processed per month + - 450 contain errors (wrong amount, wrong address, missing PO) + - Each invoice has 4 opportunities for error + - DPMO = (450 / 40,000) x 1,000,000 = 11,250 + - Sigma level: ~3.8 (between 3.0 and 4.0) + - Target: 4.0 sigma (6,210 DPMO = ~248 errors/month) +``` + +**Data collection plan**: Specify each data point, its operational definition, collection method, frequency, and responsible person. Ambiguous definitions produce unreliable data. + +**Process capability analysis**: +- Cp/Cpk indices (process capability relative to specifications) +- Process stability assessment (in control vs. out of control) +- Common cause vs. special cause variation + +### Analyze + +Identify and validate root causes of process variation and waste. This is where discipline matters most. Don't skip to solutions. 
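The sigma-level conversion above is easy to fumble in a spreadsheet, and the Analyze phase keeps referring back to the Measure-phase baseline. A minimal Python sketch of the DPMO-to-sigma arithmetic, using the stdlib `statistics.NormalDist` and the conventional 1.5-sigma shift (the invoice figures are the worked example from the text, not real data):

```python
from statistics import NormalDist  # stdlib, Python 3.8+

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Short-term sigma level using the conventional 1.5-sigma shift
    (the convention behind '3.4 DPMO = 6 sigma')."""
    defect_rate = dpmo_value / 1_000_000
    return NormalDist().inv_cdf(1 - defect_rate) + 1.5

# Worked invoice example: 450 defective invoices out of 10,000,
# with 4 error opportunities per invoice
d = dpmo(defects=450, units=10_000, opportunities_per_unit=4)
print(round(d))                  # 11250
print(round(sigma_level(d), 1))  # 3.8
```

Spot-check it against the conversion table: feeding in 6,210 DPMO returns 4.0 sigma, and 66,807 returns 3.0.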
+ +**Waste identification (TIMWOODS)**: + +| Waste Type | What to Look For | +|------------|-----------------| +| Transportation | Unnecessary movement of materials or information between steps | +| Inventory | Work piling up between process steps, excess WIP | +| Motion | Unnecessary movement of people (extra clicks, walking, searching) | +| Waiting | People or work idle, waiting for approvals, inputs, or capacity | +| Overproduction | Producing more, sooner, or faster than the next step requires | +| Overprocessing | Doing more work than the customer requires or values | +| Defects | Errors requiring rework, correction, or scrapping | +| Skills (underutilized) | People doing work below their capability, untapped expertise | + +**Service industry TIMWOODS examples** (most waste literature skews manufacturing; here's what each looks like in services): + +| Waste Type | Service Industry Example | +|------------|------------------------| +| Transportation | Customer data re-entered across 3 systems because they don't integrate; loan application forwarded through 5 departments | +| Inventory | 2,000 unprocessed insurance claims in a queue; 500 unreviewed job applications backlogging HR | +| Motion | Agent toggling between 8 browser tabs to resolve one support ticket; nurse walking to a different floor for supplies | +| Waiting | Customer on hold 12 minutes for a transfer; contract sitting 9 days in legal review queue | +| Overproduction | Generating monthly reports nobody reads; pre-populating 50 onboarding forms when only 20 are needed | +| Overprocessing | Four levels of approval for a $200 purchase; manually formatting data that will be reformatted downstream | +| Defects | Incorrect invoice sent to client requiring credit note; wrong patient information on a lab order | +| Skills | Senior analyst doing data entry; licensed clinician handling scheduling tasks | + +**Root cause tools**: + +5 Whys: Start with the problem statement and ask "why" iteratively until you 
reach a root cause you can act on. Typically 3-5 iterations. The root cause should be something the team can influence. + +Fishbone (Ishikawa) diagram: Organize potential causes into categories (People, Process, Machine, Material, Environment, Measurement). Useful for brainstorming with the team and ensuring you haven't missed a category. + +**Hypothesis testing**: For each suspected root cause, define a hypothesis, test method, and acceptance criteria. Validate with data, not intuition. + +**Pareto analysis**: Rank root causes by their contribution to the problem. Typically 20% of causes drive 80% of the impact. Address the vital few, not the trivial many. + +### Improve + +Design and deploy optimized process solutions. + +**Solution evaluation matrix**: Score each potential solution on impact, effort, cost, and risk. Separate high-impact/low-effort wins from longer-term structural changes. Impact estimates require baseline process data from the Measure phase. If the matrix is built before Measure data is available, leave impact scores blank and flag them as pending. After populating with actuals, re-score the matrix and use it to drive the pilot selection conversation with the process owner. + +**Pilot before rollout**: Test improvements in a controlled setting before full implementation. Measure pilot results against baseline and target. A solution that works in theory but fails in practice isn't a solution. + +**Robotic Process Automation (RPA)**: For high-volume, rule-based, repetitive tasks identified during analysis, RPA is a solution pattern worth evaluating alongside the process redesign. 
+ +RPA is a good fit when: +- The task is rule-based with clear decision logic (no judgment calls) +- Volume is high enough to justify bot development and maintenance +- The underlying systems lack APIs or integration options +- The process is stable (frequent process changes break bots) + +RPA is a poor fit when: +- The process itself is broken (automating waste just produces waste faster) +- Inputs are unstructured or highly variable +- The process requires human judgment or exception handling for most cases +- A system integration or API would solve the problem more durably + +**Common RPA candidates from process analysis**: invoice data extraction, employee onboarding form population, report generation and distribution, order status checking across systems, reconciliation between two data sources. + +**Rule of thumb**: Fix the process first (eliminate waste, reduce variation), then automate what remains. RPA on top of a bad process locks in the bad process. + +**Implementation planning**: Phase the rollout with clear activities, owners, timelines, and dependencies. Include training and communication alongside the technical changes. + +**Full rollout plan**: +- Training for all affected staff +- Communication to stakeholders +- Phased schedule with go/no-go checkpoints +- Support model during transition + +### Control + +Ensure improvements are sustained. Without control mechanisms, processes revert to their pre-improvement state within months. + +**Control plan**: For each critical process output, define the measurement, control method (control chart, checklist, automated alert), monitoring frequency, and response plan for out-of-control conditions. 
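The "automated alert" control method can be scripted directly from the chart math. A sketch for an individuals (I-MR) chart, estimating sigma from the average moving range with the standard d2 = 1.128 constant for moving ranges of two (the cycle-time data is illustrative):

```python
def imr_limits(values: list[float]) -> tuple[float, float, float]:
    """Center line, LCL, UCL for an individuals (I) chart.
    Sigma is estimated as (average moving range) / d2, with d2 = 1.128
    for moving ranges of size two."""
    center = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma_est = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return center, center - 3 * sigma_est, center + 3 * sigma_est

def out_of_control(values: list[float]) -> list[int]:
    """Indices of points beyond the 3-sigma limits -- the trigger
    for the documented response plan."""
    center, lcl, ucl = imr_limits(values)
    return [i for i, v in enumerate(values) if v < lcl or v > ucl]

# Daily cycle times in hours for one process step (illustrative)
cycle_times = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 7.5, 4.0, 4.2]
print(out_of_control(cycle_times))  # [7] -- the 7.5-hour day breaches the UCL
```

In practice this check runs on a schedule against the process data source, and a breach notifies the named responder from the control plan.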
+ +**Control charts**: Select the appropriate chart type based on data characteristics: +- X-bar R / X-bar S: variable data, subgroups +- I-MR: variable data, individual measurements +- P-chart: proportion defective +- C-chart: count of defects + +Define Upper Control Limit (UCL), Lower Control Limit (LCL), and Center Line (CL). + +**Standard work documentation**: Update process flows, work instructions, SOPs, and training materials to reflect the improved process. If it's not documented, it will drift. + +**Response protocol**: Define specific triggers and their required responses. When metric X exceeds threshold Y, person Z takes action A within timeframe T. + +**Handover checklist**: +- Control charts deployed and understood +- Response plan documented and tested +- Process owner trained on monitoring +- SOPs updated and accessible +- Training completed for all operators +- Dashboard live and accurate +- Lessons learned documented + +## Value Stream Mapping + +### Building a Value Stream Map + +Value stream mapping visualizes the end-to-end flow of materials and information required to deliver a product or service. + +**Steps to map current state**: +1. Walk the process from customer back to supplier +2. Record cycle time, changeover time, and WIP at each step +3. Separate value-added from non-value-added time +4. Identify bottlenecks (highest cycle time relative to takt time) +5. Mark inventory accumulation points +6. Calculate total lead time vs. value-added time ratio + +**Key metrics per step**: Cycle time (C/T), work in progress (WIP), changeover time, uptime, batch size. + +**Summary metrics**: Total lead time, total value-added time, percentage value-added. Value-added time is typically a small fraction of total lead time in service processes. The exact ratio varies, but it's often surprisingly low. The gap is the improvement opportunity. 
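The summary roll-up is mechanical once per-step cycle times and between-step queue times are captured. A sketch with illustrative hours (note it ignores rework loops, which stretch real lead times well beyond this simple sum):

```python
# Per-step value-added cycle time in hours (illustrative figures)
steps = [("Order Entry", 0.25), ("Credit Check", 48.0), ("Order Fulfill", 72.0),
         ("Shipping", 48.0), ("Invoice & Collect", 0.08)]
# Queue time (hours) sitting between consecutive steps -- pure non-value-added
waits = [0.0, 120.0, 192.0, 0.0]

value_added = sum(hours for _, hours in steps)
total_lead = value_added + sum(waits)
pct_va = 100 * value_added / total_lead

print(f"Total lead time:  {total_lead / 24:.1f} days")   # 20.0 days
print(f"Value-added time: {value_added / 24:.1f} days")  # 7.0 days
print(f"% value-added:    {pct_va:.1f}%")                # 35.0%
```

Even with generous cycle times, queue time dominates the lead time here; adding rework loops drives the value-added percentage lower still.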
+ +### Text-Based Value Stream Map Example + +``` +ORDER-TO-CASH VALUE STREAM MAP (Current State) + +Customer Customer + Order Payment + │ ▲ + ▼ │ +┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ +│ Order │ │ Credit │ │ Order │ │ Shipping │ │ Invoice │ +│ Entry │───▶│ Check │───▶│ Fulfill │───▶│ │───▶│ & Collect│ +│ │ │ │ │ │ │ │ │ │ +│ C/T: 15m │ │ C/T: 2d │ │ C/T: 3d │ │ C/T: 2d │ │ C/T: 5m │ +│ WIP: 50 │ │ WIP: 200 │ │ WIP: 150 │ │ WIP: 75 │ │ WIP: 300 │ +│ FPY: 85% │ │ FPY: 70% │ │ FPY: 92% │ │ FPY: 95% │ │ FPY: 80% │ +└──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘ + │ ▲ │ ▲ │ ▲ │ │ ▲ + └──┘ └──┘ └──┘ │ └──┘ + Rework Rework Rework │ Rework + (15%) (30%) (8%) │ (20%) + │ + Wait: 5d between + fulfill & ship + +TIMELINE: +├─ 15m ─┤─── 2d ───┤─── 3d ───┤── 5d wait ──┤── 2d ──┤── 5m ──┤ + ├── 8d wait ──┤ + +Total Lead Time: 45 days (including rework loops and queue time) +Value-Added Time: 7 days, 20 minutes +% Value-Added: 15.6% + +BOTTLENECK: Credit Check (longest queue, lowest FPY, highest rework) +QUICK WINS: Automate credit check for existing customers (<$50K) + Combine order entry + invoice generation +``` + +**Future state design**: Design for continuous flow where possible. Eliminate steps that don't add value. Pull work through the system rather than pushing it. The future state should show reduced lead time, lower WIP, and higher value-added percentage. + +## Process Mining + +Process mining uses event log data from IT systems to discover, monitor, and improve actual processes. It is most valuable for high-volume transactional processes where the gap between "how we think it works" and "how it actually works" is large. + +### When to Use Process Mining + +- Large-scale processes with many variants and exceptions +- ERP or workflow system logs available for extraction +- Need to discover actual process behavior vs. 
documented process +- Conformance checking against regulatory or policy requirements +- Identifying automation candidates from high-volume, low-variation paths + +### Data Requirements + +Process mining requires event logs with three minimum fields: **Case ID** (unique identifier for each process instance, e.g., order number), **Activity** (what happened, e.g., "Create Purchase Order"), and **Timestamp** (when it happened). Additional fields like resource, cost, and department enrich the analysis. + +**Common ERP extraction sources:** +- SAP: TSTCT (transaction codes), CDHDR/CDPOS (change documents), BKPF/BSEG (accounting documents) +- Oracle: workflow tables, audit trail tables +- Salesforce: activity history, case history + +### Process Mining Analysis + +**Discovery metrics**: Number of process path variants discovered, average case duration, deviation points where the process diverges from the intended flow. + +**Conformance analysis**: Compliance rate, number and percentage of deviating cases, root causes of deviation (are deviations intentional workarounds or genuine errors?). + +**Process intelligence findings**: Identify bottlenecks (where cases spend the most time), rework loops (where cases go backwards), and excessive wait times (where cases sit idle). + +### Process Mining Tool Selection + +The tool landscape is maturing rapidly. Selection depends on your data sources, scale, and whether you need one-time analysis or continuous monitoring. 
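Whichever tool you choose, the extract must meet the minimum event-log shape above. A dependency-free sketch of the first discovery step, grouping events by case and counting path variants (the purchase-order rows are invented for illustration):

```python
from collections import Counter, defaultdict
from datetime import datetime

# Minimal event log rows: (case_id, activity, timestamp) -- invented data
events = [
    ("PO-1", "Create PO",     "2026-01-05 09:00"),
    ("PO-1", "Approve PO",    "2026-01-05 11:00"),
    ("PO-1", "Receive Goods", "2026-01-08 14:00"),
    ("PO-1", "Pay Invoice",   "2026-01-20 10:00"),
    ("PO-2", "Create PO",     "2026-01-06 10:00"),
    ("PO-2", "Receive Goods", "2026-01-09 16:00"),
    ("PO-2", "Approve PO",    "2026-01-10 09:00"),  # approval after receipt: a deviation
    ("PO-2", "Pay Invoice",   "2026-01-25 12:00"),
]

cases = defaultdict(list)
for case_id, activity, ts in events:
    cases[case_id].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), activity))

# A variant is the ordered sequence of activities a case actually followed
variants = Counter(tuple(a for _, a in sorted(evts)) for evts in cases.values())

for path, count in variants.most_common():
    print(f"{count} case(s): {' -> '.join(path)}")
```

Real engagements replace the inline list with the ERP extract; the variant counts then feed conformance checking and automation-candidate selection directly.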
+ +| Tool | Strength | Best For | +|------|----------|----------| +| Celonis | Deep SAP integration, enterprise-grade, real-time process intelligence | Large enterprises with SAP/Oracle; continuous process monitoring; organizations wanting execution management beyond just mining | +| UiPath Process Mining | Tight integration with UiPath RPA platform | Organizations already using UiPath for automation; projects where mining feeds directly into RPA bot development | +| Disco (Fluxicon) | Simple, fast, excellent visualization | One-time or periodic analysis; consulting engagements where speed matters; teams new to process mining | +| Microsoft Power Automate Process Mining | Integrated into Microsoft 365 ecosystem | Organizations heavily invested in Microsoft stack; lower-budget entry point | +| QPR ProcessAnalyzer | Strong conformance checking, compliance focus | Regulated industries; audit and compliance use cases | + +**Practical guidance**: For consulting engagements, start with Disco for quick discovery (hours to first insight). Recommend Celonis or UiPath for clients who need ongoing operational monitoring. The tool matters less than the quality of your event log extraction and your ability to translate process maps into actionable findings. + +## Industry Benchmarks + +Reference benchmarks for common business processes. Use these to size the improvement opportunity and set realistic targets. 
+ +### Order-to-Cash (O2C) + +| Metric | Bottom Quartile | Median | Top Quartile | World Class | +|--------|----------------|--------|--------------|-------------| +| Cycle time (order to payment) | >45 days | 25-35 days | 15-20 days | <10 days | +| DSO (Days Sales Outstanding) | >60 days | 40-50 days | 30-35 days | <25 days | +| Invoice accuracy | <85% | 90-95% | 97-99% | >99.5% | +| Cost per invoice | >$15 | $8-12 | $4-6 | <$3 | +| Touchless order rate | <20% | 40-55% | 65-80% | >85% | + +### Procure-to-Pay (P2P) + +| Metric | Bottom Quartile | Median | Top Quartile | World Class | +|--------|----------------|--------|--------------|-------------| +| PO cycle time | >5 days | 2-3 days | <1 day | Same day | +| Invoice processing time | >15 days | 7-10 days | 3-5 days | <2 days | +| % invoices matched automatically | <30% | 50-65% | 75-85% | >90% | +| Cost per PO | >$35 | $15-25 | $8-12 | <$5 | +| Maverick spend (off-contract) | >40% | 20-30% | 10-15% | <5% | + +### Record-to-Report (R2R) + +| Metric | Bottom Quartile | Median | Top Quartile | World Class | +|--------|----------------|--------|--------------|-------------| +| Days to close (monthly) | >10 days | 6-8 days | 4-5 days | <3 days | +| Days to close (quarterly) | >15 days | 10-12 days | 6-8 days | <5 days | +| Journal entry error rate | >5% | 2-3% | <1% | <0.5% | +| % automated reconciliations | <20% | 40-55% | 70-85% | >90% | +| Finance FTE per $B revenue | >50 | 35-45 | 25-30 | <20 | + +These benchmarks are directional. Actual performance depends on industry, complexity, geography, and ERP maturity. Use them to frame the conversation ("we're in the bottom quartile on X"), not as absolute targets. + +## SIPOC Analysis + +SIPOC establishes process boundaries before detailed mapping. It answers: Who supplies what, through which high-level steps, producing what outputs, for whom? + +| Element | Question | +|---------|----------| +| **S**uppliers | Who provides inputs to this process? 
| +| **I**nputs | What enters the process (materials, information, triggers)? | +| **P**rocess | What are the 5-7 high-level steps? | +| **O**utputs | What does the process produce? | +| **C**ustomers | Who receives the outputs? | + +Use SIPOC in the Define phase to align the team on process scope. It prevents scope creep and ensures you're mapping the right process. + +## Standard Work + +Standard work documents the current best method for performing a process. It is the baseline for improvement, not the ceiling. + +**Standard work elements**: +- Takt time: customer demand rate (available time / customer demand) +- Cycle time: time to complete one unit (must be less than or equal to takt time) +- WIP limit: maximum work in progress allowed +- Work sequence: steps in order with time and quality checks at each step + +**Principles**: +- Takt time sets the pace. If cycle time exceeds takt time at any station, that station is a bottleneck +- WIP limits prevent overburden and expose bottlenecks. If you limit WIP and work piles up, you've found the constraint +- Document the current best method, then improve from there. You can only improve what is standardized + +## Process Performance Dashboard + +Track three categories of metrics to monitor process health. + +**Efficiency metrics**: Cycle time, throughput, utilization. These tell you how fast and how productively the process operates. + +**Quality metrics**: Defect rate, first pass yield (FPY), customer complaints. These tell you whether the process produces acceptable output. + +**Cost metrics**: Cost per unit, scrap cost, rework cost. These translate process performance into financial impact. + +Use RAG status (Green = on target, Yellow = at risk, Red = off target) and track baseline, current, and target values to show direction of travel. + +## Financial Impact + +Every process improvement should be translated into financial terms. 
+ +| Category | How to Calculate | +|----------|-----------------| +| Cost savings (annual) | Reduction in labor, materials, rework, waste | +| Revenue impact | Increased throughput, reduced lead time enabling more sales | +| One-time implementation cost | Training, systems, consulting, pilot costs | +| ROI | (Annual savings - Implementation cost) / Implementation cost | +| Payback period | Implementation cost / Monthly savings | + +## Principles + +- Start with data. Never assume. Measure current state before proposing improvements +- Value is defined by the customer, not by internal convenience +- Eliminate waste first, then optimize what remains +- Standardize before improving. You can only improve what is standardized +- Changes in one part of a process affect other parts. Think systemically +- Engage the people doing the work. They know the process best and they'll be implementing the changes +- Pilot before rollout. Test improvements before full implementation +- Control to sustain. Improvements without control mechanisms revert within months +- If you're not measuring, you're not improving. And if your measurements are wrong, you're improving the wrong thing diff --git a/management-consulting/skills/project-governance/SKILL.md b/management-consulting/skills/project-governance/SKILL.md new file mode 100644 index 00000000..5207b3da --- /dev/null +++ b/management-consulting/skills/project-governance/SKILL.md @@ -0,0 +1,536 @@ +--- +name: project-governance +description: Establish and manage project governance for consulting engagements. Covers governance structure selection, RACI matrices, decision rights, steering committees, stage gates, status reporting (RAG), risk registers, issue management, and project closure. Use when setting up engagement governance, defining accountability, tracking project health, or managing risks. 
+--- + +# Project Governance + +Establish and operate governance structures that ensure clear decision-making, accountability, and stakeholder alignment throughout a consulting engagement. This covers the full project management lifecycle: from initial governance setup through ongoing status reporting, risk management, and project closure. + +## Before You Begin + +Governance should be right-sized to the engagement. Confirm the basics: +- What is the project size and complexity (team size, duration, number of workstreams)? +- What governance structures does the client already use? +- Who are the key decision-makers and what is the reporting culture? +- Don't fabricate stakeholder names, RACI assignments, or risk register entries. These require real project context. Ask what exists and build from there. + +## Governance Structure Selection + +Right-size governance to project complexity. A 3-person engagement doesn't need the same governance as a 50-person transformation. + +### Light Governance (small project, trusted relationship) +- Project sponsor + engagement manager (EM) +- Weekly check-ins +- Minimal formal documentation +- Suitable for: focused advisory work, small team, established client relationship + +### Standard Governance (most projects) +- Steering committee + working team +- Monthly steering committee meetings +- Bi-weekly working sessions +- Written status reports +- Suitable for: typical consulting engagements with clear scope and moderate complexity + +### Complex Governance (large transformation, multiple workstreams) +- Executive sponsor + program director +- Steering committee + project boards per workstream +- Weekly program review +- Multiple workstream leads with defined interfaces +- Project Management Office (PMO) for coordination, reporting, and standards +- Suitable for: enterprise transformations, multi-year programs, high-risk initiatives + +**Selection factors**: project size, number of stakeholders, risk level, organizational 
culture, regulatory requirements. + +## Project Charter + +The charter is the foundational document. Get sign-off before substantive work begins. + +**Charter elements**: +- Project name, sponsor, project director, engagement manager (EM) +- Start date and target end date +- Problem statement: what problem does this solve? +- Expected outcomes: what will this achieve? +- Strategic alignment: how does this support organizational strategy? +- Scope: what's in, what's out, what assumptions are we making, what constraints apply +- Success criteria: specific, measurable targets +- Key milestones with target dates and dependencies +- Budget by category with tracking columns (budget, spent, remaining) +- Sign-off section for sponsor and project director + +## Stakeholder Mapping + +Map stakeholders early. The power/interest grid determines your engagement strategy for each person. + +### Power/Interest Grid + +``` + HIGH POWER + │ + ┌───────────────────┼───────────────────┐ + │ │ │ + │ KEEP SATISFIED │ MANAGE CLOSELY │ + │ │ │ + │ Regular updates │ Active engagement│ + │ Address concerns │ Joint decisions │ + │ Don't surprise │ Regular 1:1s │ + │ │ │ +────┼───────────────────┼───────────────────┼──── + │ │ │ HIGH +LOW │ │ │ INTEREST +INTEREST │ │ + │ MONITOR │ KEEP INFORMED │ + │ │ │ + │ Minimal effort │ Regular comms │ + │ Watch for shifts │ Invite input │ + │ in interest │ Build advocates │ + │ │ │ + └───────────────────┼───────────────────┘ + │ + LOW POWER +``` + +### Stakeholder Register + +For each stakeholder, document: + +| Field | Description | +|-------|-------------| +| Name and role | Title and organizational position | +| Grid position | Power/interest quadrant | +| Stance | Champion / Supporter / Neutral / Skeptic / Opponent | +| Key concerns | What they care about, what keeps them up at night | +| Influence on | Which decisions or workstreams they affect | +| Engagement approach | How you'll manage the relationship | +| Owner | Who on the team owns this 
relationship | + +Review the stakeholder map at each phase gate. Power and interest shift as projects progress. + +## RACI Matrix + +RACI clarifies who does what. The single most important rule: one Accountable person per activity. Multiple "A"s means nobody is accountable. + +**RACI definitions**: +- **R** (Responsible): Does the work +- **A** (Accountable): Final decision authority, owns the outcome +- **C** (Consulted): Provides input before the work is done +- **I** (Informed): Kept updated after the work is done + +Build the RACI by project phase. A typical consulting engagement has phases like: + +**Discovery**: Conduct interviews (R: engagement manager, A: project lead), gather data, synthesize findings, review current state. + +**Analysis**: Apply analytical frameworks, build financial models, develop options. + +**Recommendations**: Develop strategy, build business case, prepare executive presentation. + +For each activity, assign exactly one A. The person doing the work (R) and the person accountable for the outcome (A) can be the same person on small teams but should be separated on larger engagements. + +## Decision Rights + +Clarify decision authority before issues arise. Verbal agreements fade. 
+ +| Decision Type | Decider | Input Required | Process | +|---------------|---------|----------------|---------| +| Scope changes | Sponsor | EM, client lead | Change request | +| Methodology | Engagement manager | Team | Team decision | +| Deliverable content | Engagement manager | Client lead | Review and approve | +| Timeline adjustments | Engagement manager | Sponsor | Notification | +| Budget reallocation | Sponsor | EM | Approval required | +| Resource changes | Engagement manager | HR/PMO | Coordination | +| Go/no-go on recommendations | Engagement manager | Team | Team consensus | + +### Escalation Path + +| Issue Type | First Escalation | Second Escalation | Timeline | +|------------|------------------|-------------------|----------| +| Technical | EM to client lead | Sponsor | 48 hours | +| Schedule | EM to sponsor | Steering committee | 24 hours | +| Budget | EM to sponsor | Finance | 24 hours | +| Strategic | Sponsor to steering committee | Board | Immediate | + +Escalate early, not late. Surprises destroy trust faster than bad news delivered promptly. + +## Communication Plan + +A communication plan goes beyond meeting cadence. It defines who gets what information, through what channel, at what frequency, and who owns the communication. 
+ +### Communication Matrix + +| Audience | Message Type | Channel | Frequency | Owner | Notes | +|----------|-------------|---------|-----------|-------|-------| +| Sponsor | Project health, decisions needed | 1:1 meeting + written brief | Weekly | EM | Keep to 15 min; pre-read sent day before | +| Steering committee | Strategic progress, major risks, gate decisions | Formal meeting + deck | Monthly | EM / Project Director | 2-page status report pre-circulated | +| Working team | Task coordination, blockers, priorities | Stand-up / team meeting | Weekly | Team lead | Action-oriented; 30 min max | +| Extended stakeholders | Progress summary, upcoming changes | Email update / newsletter | Bi-weekly | PMO / Comms lead | Consistent format; RAG dashboard | +| Affected business units | Impact on their operations, what's changing | Briefing / town hall | As needed (at milestones) | Change lead | Tailor message to audience concerns | +| Executive leadership (not on Steering Committee) | High-level progress, strategic alignment | Executive summary email | Monthly | Sponsor | 3 sentences max; sponsor sends from their name | + +### Communication Principles +- Match the channel to the message. Bad news goes in person (or live video), not email +- Every communication should answer: "What changed? What does it mean? What do you need to do?" +- Over-communicate during transitions and go-live periods +- Keep formats consistent. 
Stakeholders should know exactly where to look for what +- Test understanding: if key stakeholders can't articulate the project's purpose, your communication is failing + +## Meeting Cadence + +### Steering Committee +- **Frequency**: Monthly (or as governance tier dictates) +- **Duration**: 60-90 minutes +- **Attendees**: Sponsor, client executive, partner, engagement manager (EM) +- **Purpose**: Strategic direction, major decisions, risk review +- **Agenda structure**: Status overview (5 min), key decisions needed (15 min), deep dive topic (30 min), risks and issues (15 min), next steps (5 min) + +### Project Team +- **Frequency**: Weekly +- **Duration**: 60 minutes +- **Attendees**: Engagement manager, analysts, client lead +- **Purpose**: Work coordination, progress tracking, blocker resolution +- **Agenda structure**: Quick wins and blockers (10 min), workstream updates (30 min), decisions needed (10 min), next week planning (10 min) + +### Status Updates +- **Frequency**: Bi-weekly (written) +- **Distribution**: Extended stakeholders +- **Format**: Standardized report (see Status Reporting below) + +## Cross-Workstream Dependency Management + +For programs with multiple workstreams, unmanaged dependencies are the primary source of delay and rework. 
+ +### Dependency Matrix + +Map dependencies between workstreams explicitly: + +``` + PROVIDING WORKSTREAM + WS1 WS2 WS3 WS4 +CONSUMING WS1 - D01 - D02 +WORKSTREAM WS2 - - D03 - + WS3 D04 - - - + WS4 - D05 D06 - +``` + +For each dependency (D01, D02, etc.), document: + +| Field | Description | +|-------|-------------| +| ID | Unique identifier | +| Description | What is being provided/consumed | +| Provider | Workstream and person responsible for delivering | +| Consumer | Workstream and person who needs it | +| Due date | When the dependency must be delivered | +| Status | On track / At risk / Blocked / Delivered | +| Impact if late | What happens to the consuming workstream | + +### Integration Management Practices +- **Integration lead**: Assign one person (not a workstream lead) to own cross-workstream coordination +- **Dependency review**: Weekly 30-minute session where workstream leads review the dependency matrix. Focus on upcoming 2-week window +- **Shared resources**: When workstreams share resources (people, environments, data), document the allocation and create a booking mechanism. "Shared" without governance means "contested" +- **Interface agreements**: For each major integration point, document: what is handed over, in what format, quality criteria, and the handshake process + +## Stage Gate Framework + +Stage gates provide formal checkpoints where the project must demonstrate readiness before proceeding. + +### Typical Consulting Stage Gates + +Gate timing shown below is indicative and should be calibrated to the actual program plan. Adjust based on program complexity, organizational decision-making speed, and external dependencies. A 6-week diagnostic and a 6-month transformation will have very different gate cadences. + +**Gate 1: Plan Approval** (end of planning phase) + +Required evidence: approved charter, completed RACI, detailed work plan, team assigned, approved budget, initial risk register. 
+ +**Gate 2: Issue Review** (end of analysis phase) + +Required evidence: findings documented, options evaluated, draft recommendations clear, client aligned on direction. + +**Gate 3: Design Approval** (end of design phase) + +Required evidence: solution documented, business case validated with updated financials, implementation roadmap approved, change plan approved. + +**Gate 4: Go-Live Review** (end of implementation) + +Required evidence: deliverables accepted, benefits tracking in place, control plan operational, lessons learned captured. + +### Gate Decisions + +| Decision | Meaning | Action | +|----------|---------|--------| +| GO | Approved to proceed | Move to next stage | +| GO WITH CONDITIONS | Approved with specific modifications | Document conditions and track completion | +| REDO | Insufficient readiness | Address gaps and return to gate | +| STOP | Terminate project | Initiate closure procedures | + +## Status Reporting + +### RAG Methodology + +RAG (Red-Amber-Green) provides a standardized way to communicate project health. + +| Status | Definition | Action Required | +|--------|------------|-----------------| +| GREEN | On track, no significant issues | Continue normal monitoring | +| AMBER | Some concerns, mitigation in place | Monitor closely, escalate if worsens | +| RED | Critical issues, intervention needed | Immediate escalation, recovery plan | + +### Status Report Structure + +A status report should cover these sections, in this order. Keep it to 2 pages for steering committee consumption. + +**Executive summary**: 2-3 sentences. Overall status, key wins, key concerns. If a steering committee member reads only this section, they should know the state of the project. + +**Status dashboard**: RAG rating and trend (improving, stable, declining) for each dimension: schedule, budget, scope, quality, resources. + +**Progress this period**: Deliverables completed, key achievements, work in progress with completion percentages. 
+ +**Milestone status**: Each milestone with target date, forecast date, status, and variance. Use symbols (achieved, at risk, missed) for quick scanning. + +**Budget status**: Total budget, spent to date, percentage spent vs. percentage complete, forecast at completion, variance. A project that is 50% through its budget but 30% complete has a problem. + +**Burn rate analysis**: Planned vs. actual spend by period. Diverging trends signal trouble before it shows up in the overall numbers. + +**Risks and issues**: Top risks with probability, impact, and mitigation. Open issues with severity, owner, and due date. Resolved issues from this period. + +**Forward look**: Next period priorities, upcoming milestones, decisions required, dependency awareness. + +**Governance**: Steering committee meeting notes, escalations, change log. + +### Status Reporting Discipline + +- Be honest with RAG status. Don't greenwash problems +- Escalate RED issues immediately. Don't wait for the next status report +- Quantify progress wherever possible. "Good progress" is not a status +- Focus on what changed since the last report +- Keep the executive summary to 3 sentences maximum +- Send reports at the same time each period. Consistency builds trust +- Track action items from previous reports explicitly +- Report outcomes achieved, not just tasks completed + +## Risk Management + +### Risk Identification + +Systematically identify risks using established categories. 
+ +| Category | Scope | +|----------|-------| +| Strategic | Market changes, competitor actions, regulatory shifts | +| Financial | Cost overruns, currency fluctuation, funding uncertainty | +| Operational | Process failures, key person dependency, supply chain | +| Technical | Technology issues, integration problems, cybersecurity | +| Regulatory | Compliance requirements, legal exposure, data privacy | +| Schedule | Delays, dependencies, resource availability | +| Quality | Defects, scope creep, acceptance criteria disputes | + +**Identification techniques**: Team brainstorming, expert judgment, SWOT analysis, historical checklists from similar projects, root cause analysis working backwards from potential failures, horizon scanning for emerging risks. + +### Risk Assessment + +**Probability scale** (1-5): +- 5 (Very High): >80% likelihood +- 4 (High): 60-80% +- 3 (Medium): 40-60% +- 2 (Low): 20-40% +- 1 (Very Low): <20% + +**Impact scale** (1-5): +- 5 (Very High): >30% schedule delay, >20% cost overrun, major quality failures +- 4 (High): 15-30% delay, 10-20% overrun, significant quality issues +- 3 (Medium): 5-15% delay, 5-10% overrun, moderate quality issues +- 2 (Low): <5% delay, <5% overrun, minor quality issues +- 1 (Very Low): Minimal impact across all dimensions + +**Risk score**: Probability x Impact + +| Score Range | Classification | Required Action | +|-------------|---------------|-----------------| +| 19-25 | CRITICAL | Immediate action, steering committee visibility | +| 10-18 | HIGH | Priority mitigation, active management | +| 5-9 | MEDIUM | Active monitoring, mitigation plan in place | +| 1-4 | LOW | Accept and monitor | + +### Risk Response Strategies + +| Strategy | When to Use | +|----------|-------------| +| Avoid | Change the plan to eliminate the risk entirely. Use for high-impact, high-probability risks where avoidance is feasible | +| Mitigate | Reduce the probability or impact. Most common strategy. 
Define specific actions with owners and deadlines | +| Transfer | Shift risk to another party (insurance, outsourcing, contractual terms). Use when another party can manage the risk more effectively | +| Accept | Acknowledge and monitor. Use when cost of mitigation exceeds expected cost of the risk, or when probability is very low | + +For each risk above LOW, document: mitigation actions with owners and timelines, contingency plan if the risk materializes, cost of mitigation vs. cost of occurrence. + +### Risk Monitoring + +**Review cadence**: +- Weekly: project manager checks trigger indicators +- Monthly: full team review, identify new risks, update statuses +- Quarterly: steering committee deep dive on trends and strategic risks + +**Early warning indicators**: For each significant risk, define the signal that would indicate the risk is about to materialize, the monitoring method, and the monitoring frequency. + +**Risk trends**: Track total risks, high/critical risks, closed risks, and new risks over time. A rising count of high risks is itself a risk. + +## Issue Management + +Issues are risks that have materialized, or problems that need resolution. + +**Issue severity definitions**: +- **Critical**: Project cannot proceed without resolution. Requires immediate action +- **High**: Significant impact on project outcomes. Requires escalation +- **Medium**: Moderate impact. Needs attention within normal management cadence +- **Low**: Minor impact. Address in normal course of work + +Track each issue with: ID, description, severity, status (open/in progress/resolved), date created, owner, due date, and resolution. + +## Benefits Realization Tracking + +Benefits don't materialize at project close. They accrue over time and need active tracking. 
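Stepping back to the Risk Assessment section above: the scoring and banding there are mechanical (score = probability × impact, classified by the threshold table), which makes them easy to automate in whatever tracker holds the register. A minimal sketch, illustrative only:

```python
def risk_score(probability: int, impact: int) -> int:
    """Score = probability x impact, each rated 1-5 on the scales above."""
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be on the 1-5 scales")
    return probability * impact

def classify(score: int) -> str:
    """Map a score to its classification band from the score range table."""
    if score >= 19:
        return "CRITICAL"  # immediate action, steering committee visibility
    if score >= 10:
        return "HIGH"      # priority mitigation, active management
    if score >= 5:
        return "MEDIUM"    # active monitoring, mitigation plan in place
    return "LOW"           # accept and monitor

# High probability (4) x very high impact (5) = 20
print(classify(risk_score(4, 5)))  # CRITICAL
```

Keeping the banding mechanical keeps the register honest: the judgment belongs in the probability and impact ratings, not in hand-picking the classification.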
+ +### Benefits Register + +| Field | Description | +|-------|-------------| +| Benefit ID | Unique identifier | +| Description | What benefit is expected | +| Category | Cost saving / Revenue increase / Risk reduction / Capability / Compliance | +| Owner | Person accountable for realizing this benefit | +| Baseline | Current state measurement | +| Target | Expected improved state | +| Measurement method | How you'll measure (data source, calculation, frequency) | +| Realization timeline | When the benefit is expected to materialize (often months after project close) | +| Status | Not started / Partially realized / Fully realized / At risk | + +### Tracking Discipline +- Measure benefits from project inception, not just at closure +- Baseline every benefit BEFORE changes are implemented (you can't prove improvement without a starting point) +- Separate "delivered" (the project produced the capability) from "realized" (the organization captured the value) +- Report benefits alongside project status at steering committee meetings +- For financial benefits, reconcile with Finance. A benefit the CFO doesn't recognize isn't a benefit +- Plan for benefits that lag: cost savings may appear in the first quarter, but revenue growth from a new capability may take 12-18 months + +## Hybrid Delivery Framework + +Many consulting engagements blend agile and waterfall approaches. This is pragmatic, not fashionable. + +**When to use which**: +- Waterfall for workstreams with clear requirements and sequential dependencies +- Agile for workstreams with evolving requirements or iterative design +- Hybrid when different workstreams have different characteristics + +### How Sprints Relate to Stage Gates + +Sprints operate within project phases. Each sprint delivers incremental progress toward phase objectives. Phase gates still apply at phase boundaries. 
+ +``` +Phase: Analysis +├── Sprint 1: Data collection & initial analysis +├── Sprint 2: Deep-dive analysis & hypothesis testing +├── Sprint 3: Options development & evaluation +└── GATE 2: Issue Review ◄── Sprint outputs aggregated for gate evidence + +Phase: Design +├── Sprint 4: Solution design (core) +├── Sprint 5: Solution design (detail) + business case +├── Sprint 6: Implementation planning + change readiness +└── GATE 3: Design Approval ◄── Sprint outputs aggregated for gate evidence +``` + +**Aggregating agile metrics for waterfall reporting:** + +| Agile Metric | Waterfall Equivalent | How to Translate | +|-------------|---------------------|-----------------| +| Sprint velocity | % phase complete | Story points completed / total story points in phase | +| Sprint burndown | Milestone progress | Map sprint goals to phase milestones | +| Backlog size | Scope status | Total remaining effort vs. baseline estimate | +| Sprint retrospective findings | Risk/issue register items | Escalate systemic issues; log tactical ones | +| Definition of Done | Gate evidence | Sprint DoD items map to gate evidence requirements | + +### Hybrid Governance Practices +- Steering committee reviews at phase boundaries (waterfall cadence) +- Status reporting at sprint cadence (weekly) +- Scope management through backlog grooming (agile) with change control for phase-level scope changes (waterfall) +- Quality gates through Definition of Done (per sprint) feeding into stage gate evidence requirements + +## Project Closure + +Close properly. The last impression matters as much as the first. 
+ +### Governance Closure Checklist + +From a governance perspective, closure requires these minimum actions: + +| Action | Owner | Evidence | +|--------|-------|----------| +| Final status report delivered | EM | Approved report with final RAG status | +| All deliverables accepted | Client lead | Signed acceptance forms or email confirmation | +| Steering committee sign-off | Sponsor | Meeting minutes with formal closure decision | +| Decision rights handed back | EM + Sponsor | Written confirmation of who now owns what | +| Open risks transferred | EM | Updated risk register with new owners in client org | +| Open issues resolved or transferred | EM | Issue log with resolution or transfer status | +| Benefits baseline established | EM + Client lead | Benefits register with baseline measurements | +| Lessons learned captured | Team | Document covering what worked, what didn't, what to do differently | +| Financial reconciliation | EM + Finance | Final budget vs. actual, explanation of variance | +| Team released | EM | Confirmed return dates, performance feedback completed | + +### Post-Closure Benefits Tracking + +Governance doesn't end at project close. Establish a 6-12 month benefits tracking cadence: +- Monthly benefits measurement for the first quarter post-close +- Quarterly measurement thereafter +- Assign a client-side benefits owner (not the consulting team) +- Schedule a 90-day benefits review meeting at project close + +For the full closure methodology (deliverable handover, knowledge transfer, transition planning, detailed financial reconciliation), see the `project-closeout` skill. + +## Governance Recovery Playbooks + +### When the Steering Committee Stops Meeting + +Steering committee attrition is one of the most common mid-engagement governance failures. The committee doesn't formally dissolve; it just stops showing up. Meetings get rescheduled, then cancelled, then forgotten. 
+ +**Diagnosis**: This usually signals one of three things: (1) the project has lost executive priority (a bigger fire is burning), (2) the committee sees its role as complete (they approved the plan, now "you go execute"), or (3) the meetings aren't adding value (status reporting that could be an email). + +**Recovery steps**: + +1. **Don't keep scheduling and cancelling.** After two consecutive cancellations, treat it as a governance risk, not a scheduling problem. +2. **Diagnose the cause with the sponsor directly.** A 15-minute 1:1 conversation: "The steering committee hasn't met in 6 weeks. Is this a priority shift, or can we adjust the format?" Don't frame it as a complaint; frame it as a risk. +3. **If priority has shifted**: Get explicit confirmation of continued project sponsorship. If the project is still a go, negotiate a lighter governance model (sponsor-only monthly check-in, written decision memos instead of meetings). Document the change. +4. **If meetings aren't adding value**: Redesign the agenda. Drop status reporting (send it in advance). Make every meeting decision-focused: "We need your decision on X, Y, and Z." If there are no decisions needed, cancel the meeting intentionally rather than letting it atrophy. +5. **If the committee thinks its role is done**: Clarify the upcoming decisions that require their authority (budget reallocation, scope changes, go/no-go on implementation). Map these to the calendar so they can see when they'll be needed. +6. **Escalation**: If the sponsor confirms the project is still active but can't commit to any governance rhythm, document this as a formal risk in the status report. "Project proceeding without active steering committee oversight" gets attention. + +**The hard truth**: Sometimes a steering committee that stops meeting is telling you the project has been deprioritized. Listen to that signal. It's better to surface it early than to discover six months later that nobody was paying attention. 
+ +### Governance Adaptation When the Client Org Changes Mid-Engagement + +Reorganizations, leadership changes, and strategy pivots mid-engagement are common, especially on longer programs. The governance structure you designed for the organization that existed at kickoff may no longer fit. + +**Triggers that require governance adaptation**: +- Sponsor leaves, is reassigned, or is promoted out of the role +- Organizational restructuring changes reporting lines for key stakeholders +- Merger, acquisition, or divestiture changes the organizational context +- New leadership arrives with different priorities or working style +- Budget ownership shifts to a different function + +**Response framework**: + +1. **Immediate (within 48 hours of the change)**: Confirm continued project sponsorship. Identify the new decision-maker. Get a meeting with them. Don't wait for someone to tell you; go find out. +2. **Within one week**: Re-validate the project charter with the new stakeholder landscape. Does the scope still make sense? Are the objectives still relevant? Document any shifts. +3. **Within two weeks**: Update the RACI, stakeholder map, and communication plan. Reintroduce the project to new stakeholders (they didn't live through the kickoff; don't assume context). Adjust the governance structure if the change warrants it (e.g., a new sponsor who prefers weekly 30-minute check-ins over monthly steering committees). +4. **Ongoing**: Expect a productivity dip during the transition. Budget 2-4 weeks of reduced velocity as new relationships form and decisions get re-validated. + +**Key risk**: A leadership change can be an opportunity for project opponents to relitigate scope, budget, or approach. Prepare a concise brief (2 pages max) that a new executive can absorb quickly: what are we doing, why, where are we, and what decisions are upcoming. Get ahead of the narrative. + +## Principles + +- Governance enables, it doesn't restrict. 
If governance isn't adding value, it's adding overhead. Redesign it +- One accountable person per decision. Multiple "A"s in a RACI means nobody is accountable +- Document explicitly. Verbal agreements fade, especially across organizational boundaries +- Escalate early, not late. Surprises destroy trust faster than bad news delivered promptly +- Living documents over shelf-ware. A RACI that isn't updated when roles change creates false confidence +- Right-size to complexity. Light governance for light projects, heavy governance only when the risk warrants it +- Get sign-off on decision rights before the first disagreement, not during it +- Be honest with RAG status. Greenwashing problems delays resolution and erodes credibility +- Close properly. The last impression matters as much as the first From 1f42c77e948fa55e45c86a2f149801e3bd00d79e Mon Sep 17 00:00:00 2001 From: Anot Date: Fri, 20 Mar 2026 10:42:38 -0400 Subject: [PATCH 2/3] refactor: Add skill boundary clarity to execution skills implementation-planning: Add cross-reference to project-governance for ongoing oversight vs plan construction. project-governance: Add cross-reference to implementation-planning for building plans vs running oversight. Co-Authored-By: Claude Opus 4.6 (1M context) --- management-consulting/skills/implementation-planning/SKILL.md | 2 +- management-consulting/skills/project-governance/SKILL.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/management-consulting/skills/implementation-planning/SKILL.md b/management-consulting/skills/implementation-planning/SKILL.md index 822a0f4e..caaf9449 100644 --- a/management-consulting/skills/implementation-planning/SKILL.md +++ b/management-consulting/skills/implementation-planning/SKILL.md @@ -5,7 +5,7 @@ description: Bridge strategy to execution for consulting engagements through opt # Implementation Planning -Translate strategic recommendations into concrete, funded, governed plans that organizations can actually execute. 
This covers four connected stages: generating and evaluating options, building the business case, designing the roadmap, and developing the implementation plan.
+Translate strategic recommendations into concrete, funded, governed plans that organizations can actually execute. This covers four connected stages: generating and evaluating options, building the business case, designing the roadmap, and developing the implementation plan. For the ongoing oversight structure (steering committees, status reporting, risk management), see project-governance.
 
 ## Before You Begin
 
diff --git a/management-consulting/skills/project-governance/SKILL.md b/management-consulting/skills/project-governance/SKILL.md
index 5207b3da..b08452d4 100644
--- a/management-consulting/skills/project-governance/SKILL.md
+++ b/management-consulting/skills/project-governance/SKILL.md
@@ -5,7 +5,7 @@ description: Establish and manage project governance for consulting engagements.
 
 # Project Governance
 
-Establish and operate governance structures that ensure clear decision-making, accountability, and stakeholder alignment throughout a consulting engagement. This covers the full project management lifecycle: from initial governance setup through ongoing status reporting, risk management, and project closure.
+Establish and operate the oversight structure for consulting engagements, ensuring clear decision-making, accountability, and stakeholder alignment throughout. For building the implementation plan itself (workstreams, phasing, business cases), see implementation-planning. This covers the full project management lifecycle: from initial governance setup through ongoing status reporting, risk management, and project closure.
## Before You Begin From 6a95bcca730ea350c244c8bdc3d725964522252a Mon Sep 17 00:00:00 2001 From: Anot Date: Fri, 20 Mar 2026 14:16:23 -0400 Subject: [PATCH 3/3] fix: Align due-diligence description with actual content Skill covers 6 DD types (commercial, operational, financial, strategic, technology, legal) but description only listed 3. Added financial, technology, integration planning, quality of earnings, and synergy triggers. Co-Authored-By: Claude Opus 4.6 (1M context) --- management-consulting/skills/due-diligence/SKILL.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/management-consulting/skills/due-diligence/SKILL.md b/management-consulting/skills/due-diligence/SKILL.md index d4e82d39..67569963 100644 --- a/management-consulting/skills/due-diligence/SKILL.md +++ b/management-consulting/skills/due-diligence/SKILL.md @@ -1,6 +1,6 @@ --- name: due-diligence -description: Conduct commercial, operational, and strategic due diligence for M&A, investment, or partnership decisions. Use when assessing acquisition targets, investment opportunities, vendor evaluations, or any situation requiring rigorous business assessment and risk identification. +description: Conduct commercial, operational, financial, strategic, and technology due diligence for M&A, investment, partnership, or vendor decisions. Use when assessing acquisition targets, performing quality of earnings analysis, evaluating working capital, reviewing technology and IP, planning post-merger integration, quantifying synergies, or any situation requiring rigorous business assessment and risk identification. --- # Due Diligence