impact OS

An operating system for accelerators and incubators to understand and improve their impact.

What is the impact OS?

The impact OS is a set of software, processes and playbooks developed by the team at Volta to help accelerators and incubators understand and improve the impact they provide to startups. It is a continuous improvement system designed to help coaches and advisors stay focused on human connection while easing the administrative burden of record keeping and KPI tracking.

Introduction

The Challenge

Accelerators and incubators face a fundamental tension: the most valuable work (one-to-one human connection between experienced coaches and ambitious founders) gets crowded out by administrative burden. Meanwhile, demonstrating impact to funding stakeholders requires evidence that's scattered across meeting notes, spreadsheets, and coach memories. Current methods force organizations to choose between delivering support and measuring it.

The impact OS Approach

impact OS resolves this tension through a founder-centric system of record that automates administrative work while building comprehensive evidence of impact. The system is built on three core principles:

  1. Founder Journey Mapping: Organize everything around how founders actually build companies through team development, market traction, and technology evolution (the three dimensions).
  2. Being Less Wrong Over Time: Help founders systematically improve their decision-making through leading/lagging indicator tracking, not just hold them accountable to hitting goals.
  3. Foundational Data Enables Abstraction: Capture granular data (observations, commitments, metrics, interactions) so you can report at whatever level each audience needs: individual company details for coaches, portfolio patterns for leadership, aggregate outcomes for funders.

What Makes It Work

The system applies atomic research principles, breaking insights into reusable "nuggets" that can be tagged, filtered, and aggregated. When coaches meet with founders:

  • Fireflies.ai automatically captures transcripts
  • AI extracts observations categorized by dimension (Team/Traction/Technology)
  • What would be 30-60 minutes of documentation becomes 5-10 minutes of review
  • Coaches focus on conversation quality, not note-taking

Every two months, founders submit delta-based updates (pre-loaded with previous data, only changing what's different). Custom AI agents analyze accumulated data monthly and quarterly, synthesizing hours of manual analysis into minutes of review.

The Performance Framework

At Volta, the target is clear: $0 to $1M ARR in 18 months with 20% month-over-month growth. Head coaches balance this quantitative benchmark with qualitative observations about learning velocity and coachability, making informed exceptions for teams showing strong trajectory even if current metrics lag.

Sprint commitments create an objective framework: teams commit to controllable activities (leading indicators), then evaluate whether those activities moved key metrics (lagging indicators). Over multiple sprints, the data shows whether teams are improving their ability to make strategic bets, the real measure of coaching effectiveness.

Who This Is For

This approach is not for organizations satisfied with current impact reporting methods. It requires:

  • Leadership genuinely dissatisfied with their ability to demonstrate impact
  • Coaches who have lived the founder journey and thrive on accountability
  • Cultural commitment to continuous improvement and transparency
  • Willingness to invest in building capability over 12-18 months

If your organization values process compliance over outcomes, or prefers logistics-focused delivery over strategic accountability, this approach will struggle.

What Success Looks Like

  • 90 days: Foundational data flowing (Fireflies transcripts, bi-monthly updates, interaction records)
  • 6 months: Coaches spending 50% less time on documentation, able to answer key questions with data
  • 12-18 months: Portfolio-wide patterns visible, stakeholder reports evidence-based, organizational learning accelerating

Most importantly: You're measuring not just whether companies succeed, but whether they're learning faster, and whether your program is learning faster about what actually accelerates company growth.

An Offer to Collaborate

At Volta, we built the impact OS because we needed to know whether our support was actually helping startups succeed. It's been a long evolution involving cultural shifts, coach buy-in, and continuous refinement. The rest of this document outlines how we approached it, the practical steps we took, and what we learned along the way.

If understanding the direct impact of your support is a priority, and you want to leverage what AI makes possible, we hope this is useful. We're learning as we go and would welcome conversations with other organizations working on similar challenges.

If this framework resonates with your organization's direction and you're interested in collaborating on the continuous evolution of these methods and tools, we'd welcome the conversation.

Start the Conversation
* * *

1. impact OS Guiding Principles

impact OS is built on a fundamental belief: the most valuable work happens in one-to-one human connection between coaches and founders. The system is designed around the founder journey, organizing impact measurement across the dimensions that matter most to company building, while eliminating the administrative burden that prevents coaches from focusing on what they do best: supporting people.

Design Principles

Founder-Centric Journey Mapping

impact OS organizes everything around understanding and supporting the founder journey. Rather than imposing program structures or administrative frameworks, the system maps to how founders actually build companies through team development, market traction, and technology evolution. This founder-centric lens ensures that every piece of data captured, every observation recorded, and every metric tracked serves to better understand where founders are in their journey and how best to support their next steps.

Designed for Human Connection, Not Administration

We intentionally hire experienced coaches who have been in the shoes of the founders they're supporting. These coaches understand the nuanced challenges of building a company because they've lived them. impact OS exists to maximize the time coaches spend in meaningful one-to-one conversation and minimize the time spent on paperwork. The system captures their work, amplifies their insights, and handles the administrative burden, but never attempts to substitute for the human judgment and connection that drives real impact.

Continuous Intake Over Cohorts

While many accelerators operate on fixed cohort schedules, impact OS is designed for continuous intake programs (though it can accommodate cohort models). Companies enter residency at various points in their journey, and every team's progress is assessed the same way. This creates a more natural flow of support, where resources can be allocated based on company needs rather than arbitrary timelines. Each company progresses at its own pace through a common framework of market-driven milestones.

Market Milestones as the North Star

At the heart of impact OS is a progression framework based on tangible market validation, not subjective assessments. Companies advance through clearly defined stages:

  • Value Baseline: Validated problem-solution fit
  • First Paying Customer: Initial market validation
  • X Paying Customers: A statistically relevant number of paying customers
  • 10X Paying Customers: Sufficient traction to prove a market for an ideal customer profile (ICP)
  • $1M ARR: Proven business model
  • $10M ARR: Scale-stage company

These milestones provide an objective, universally understood measure of progress. Whether a coach is reviewing a team's journey or a funding organization is evaluating portfolio impact, everyone speaks the same language: market traction.
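To make the ordering concrete, here is a minimal sketch (in Python; the names and code are illustrative assumptions, not part of impact OS) of the milestones as an ordered scale, which is what keeps progress checks objective and comparable across a portfolio:

```python
from enum import IntEnum

class MarketMilestone(IntEnum):
    """Ordered market milestones; a higher value means further along the journey."""
    VALUE_BASELINE = 1          # validated problem-solution fit
    FIRST_PAYING_CUSTOMER = 2   # initial market validation
    X_PAYING_CUSTOMERS = 3      # statistically relevant number of paying customers
    TEN_X_PAYING_CUSTOMERS = 4  # traction proving a market for an ICP
    ONE_M_ARR = 5               # proven business model
    TEN_M_ARR = 6               # scale-stage company

def has_reached(current: MarketMilestone, target: MarketMilestone) -> bool:
    """Objective progress check: milestone order, not subjective assessment."""
    return current >= target

# Example: a company at 10x paying customers has passed initial market validation.
assert has_reached(MarketMilestone.TEN_X_PAYING_CUSTOMERS, MarketMilestone.FIRST_PAYING_CUSTOMER)
```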

Administrative Automation Through Intelligence

The system leverages AI not to make decisions, but to handle the tedious work that prevents coaches from being effective. When a coach meets with a company, simply inviting a Fireflies.ai note-taker to the meeting automatically:

  • Captures the full transcript
  • Generates detailed observations about company progress and challenges
  • Creates structured meeting summaries
  • Identifies potential support needs
  • Tracks commitment follow-through

This transforms what would have been 30 to 60 minutes of post-meeting documentation into an automated process that happens in the background. Coaches review and refine the AI-generated content, ensuring accuracy while reclaiming time for actual coaching.
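As a rough sketch of the shape of this pipeline (the helper names and prompt below are hypothetical placeholders, not the Fireflies.ai API or the actual impact OS implementation): a transcript goes in, structured draft observations come out, and the coach reviews last.

```python
import json

def extract_observations(transcript: str, llm_complete) -> list[dict]:
    """Draft observations from a meeting transcript, for coach review.

    `llm_complete` stands in for whatever LLM client the organization uses;
    it is assumed to take a prompt string and return a JSON string.
    """
    prompt = (
        "From this coaching meeting transcript, extract observations about the "
        "company's progress and challenges. Return a JSON list where each item "
        "has: name, description, evidence (a supporting quote), dimension "
        "(Team, Traction, or Technology), and topics (a list of tags).\n\n"
        f"Transcript:\n{transcript}"
    )
    drafts = json.loads(llm_complete(prompt))
    # Nothing is published automatically: each draft waits for coach review.
    return [draft | {"status": "pending_coach_review"} for draft in drafts]
```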

Bi-Monthly Pulse: Structured Progress Tracking

Every two months, companies receive automated reminders to submit progress updates through dynamic forms. These updates collect crucial metrics across multiple dimensions:

  • Financial Health: Monthly recurring revenue, runway, burn rate, cash position
  • Growth Metrics: Customer count, revenue trajectory, team size
  • Investment Activity: Funding raised, investor relationships, capital structure
  • Team Evolution: Full-time staff, part-time contributors, organizational changes

This regular cadence creates a longitudinal dataset that enables pattern recognition. Are companies typically hitting their first customer milestone within three months or six? Do teams with longer runway show better milestone progression? The bi-monthly updates provide the raw material for these insights.

Evidence-Based Impact Measurement

Traditional accelerator reporting often relies on anecdotal success stories or snapshot metrics. impact OS builds a comprehensive evidence trail by connecting:

  • Sprint commitments set by companies
  • Meeting interactions and coach observations
  • Market milestone progression
  • Quantitative metrics from bi-monthly updates
  • Support engagement outcomes tied to specific OKRs

This creates an auditable chain from support delivery to measurable outcomes. When a company reaches a new milestone, we can trace back through the observations, meetings, and support engagements that contributed to that progress.

Quality Through Peer Review

The head coach role provides a critical quality assurance layer. Rather than relying solely on individual coach judgment, the system supports a structured QA workflow. AI-powered analysis tools help head coaches review quarterly progress across their entire portfolio, identifying:

  • Companies making exceptional progress (and the interventions that helped)
  • Teams showing warning signs or stagnation
  • Coaches who might need additional support or training
  • Patterns in what types of support correlate with positive outcomes

This creates organizational learning. Insights from successful interventions can be documented and shared, while struggling companies can be identified early for additional support.

The System of Record Philosophy

impact OS doesn't try to be a CRM, a project management tool, a communication platform, and a reporting system all in one. It focuses on being the definitive system of record for:

  1. Who is in your portfolio (companies, founders, team members)
  2. What support you're providing (interactions, engagements, observations)
  3. Where they are in their journey (milestones, metrics, progress)
  4. Why your support matters (correlations between interventions and outcomes)

By maintaining this clear focus, impact OS integrates with (rather than replaces) the tools teams already use. Meeting notes come from Fireflies. Communication happens in Slack or email. But the structured record of what happened and what it means lives in impact OS.

This architectural philosophy means organizations adopting impact OS aren't forced into wholesale workflow changes. Instead, they gain a layer of intelligence and structure on top of their existing practices, capturing institutional knowledge that would otherwise live only in individual coaches' memories or scattered documents.

* * *

2. Core Entities & Relationships

impact OS is built around a set of interconnected entities that mirror the real-world structure of accelerator operations. Understanding these core building blocks and how they relate to each other is essential for grasping how the system supports the complete founder journey.

The Company: Center of Gravity

Companies are the central entity in impact OS. Each company represents a startup in your portfolio, whether they're active residents, alumni, or applicants. The system maintains:

  • Business Profile: Name, description, industry classification, location, founding date
  • Current Status: Operating status (active, graduated, inactive), residency stage, market milestone achieved
  • Performance Snapshot: Latest metrics from bi-monthly updates (revenue, customers, runway, team size)
  • Journey History: Complete timeline of observations, interactions, and progress

The system can also track other organizations in your ecosystem (service providers, partners, potential investors) while maintaining focus on your core portfolio companies.

People: The Human Network

Contacts represent every individual in your ecosystem: founders, team members, coaches, advisors, investors, and mentors. The system recognizes that individuals often wear multiple hats and their relationships evolve over time:

  • Personal Profile: Name, email, phone, LinkedIn profile, bio, photo
  • Company Relationships: Individuals can connect to multiple companies with different roles (founder, employee, advisor, investor)
  • Interaction History: Every meeting, conversation, and touchpoint
  • Observation Trail: Insights captured about their progress, challenges, and growth

People and companies have flexible relationships. A founder might start one company, pivot, and launch another. An advisor might support multiple portfolio companies. The system captures this real-world complexity naturally.

Coaches are designated to provide ongoing support and are assigned to companies, creating the primary coaching connection that drives the residency experience.

Programs & Cohorts: Organizational Structure

While impact OS is designed for continuous intake, it accommodates structured programs through two levels:

Programs represent your high-level offerings, perhaps a "Residency Program," a "Scale Program," or industry-specific tracks. Programs define:

  • Eligibility criteria
  • Duration and structure
  • Associated resources and support models
  • Update schedules (e.g., bi-monthly check-ins)

Cohorts group companies within programs, either for true cohort-based programming or for administrative organization (e.g., "Q1 2024 Intake"). This flexibility means you can operate continuous intake while still organizing companies into logical groups for reporting, events, or specific initiatives.

Interactions: Capturing Engagement

Interactions record every meaningful touchpoint between coaches and companies:

  • Meeting Type: One-on-one coaching sessions, group workshops, check-ins, advisor consultations
  • Participants: Which companies and people attended
  • Date & Duration: When the interaction occurred and how long it lasted
  • Integration Points: Connection to Fireflies transcripts for automatic capture
  • Observation Generation: Source material for AI-generated insights

Interactions create the paper trail of engagement. They answer: How often are we meeting with this company? Who's attending? What types of support are we providing? This quantitative layer sits beneath the qualitative insights captured as observations.

Observations: Qualitative Insights Through Atomic Research

Observations are the heart of impact measurement in impact OS. The system applies the principles of atomic research, a methodology that breaks down insights into their smallest, reusable components called "research nuggets." Rather than burying insights in lengthy reports that get filed away, each observation is a standalone, evidence-backed unit of knowledge that can be searched, filtered, and recombined to reveal patterns across your entire portfolio.

The Atomic Structure of an Observation:

Each observation follows a three-part atomic nugget structure:

  1. Observation (Name + Description): What was discovered, stated as a specific, actionable insight about company progress
  2. Evidence: The supporting proof: direct quotes from meetings, data points from updates, or specific examples
  3. Tags: Multiple layers of categorization for searchability and pattern recognition

Categorization Layers:

  • Type/Dimension: Team, Traction, or Technology (the three core dimensions of founder journey)
  • Topics: Thematic tags like "fundraising," "product-market fit," "team dynamics," "customer acquisition"
  • Date: When the observation was made or evidence gathered
  • Source: Link back to the interaction, company update, or external event that generated the insight
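To make the nugget structure concrete, here is a minimal data-model sketch (field names are illustrative assumptions, not the impact OS schema), showing how the three-part structure plus tags makes observations filterable and recombinable:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Observation:
    """An atomic research nugget: insight + evidence + tags."""
    name: str                      # the insight, stated as a headline
    description: str               # what was discovered, in a sentence or two
    evidence: str                  # supporting quote, data point, or example
    dimension: str                 # "Team", "Traction", or "Technology"
    topics: list[str] = field(default_factory=list)        # e.g. ["pricing", "sales"]
    observed_on: date = field(default_factory=date.today)  # when it was observed
    source_id: str | None = None   # link back to the interaction or update

obs = Observation(
    name="Pricing objections stalling late-stage deals",
    description="Three prospects paused at contract stage, all citing per-seat pricing.",
    evidence='"We love the product, but per-seat pricing does not fit our team."',
    dimension="Traction",
    topics=["pricing", "sales"],
)
```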

How Observations Are Created:

  1. AI-Generated: Automatically extracted from meeting transcripts or company update submissions
  2. Coach-Authored: Manually recorded insights from conversations or observations
  3. System-Generated: Triggered by milestone achievements or metric thresholds

The Power of Atomic Observations

This atomic approach transforms how impact is measured and understood:

  • Pattern Recognition: Tags enable you to see trends across multiple companies. Which challenges appear most frequently before companies hit their first paying customer? What team dynamics correlate with successful traction milestones?
  • Searchability: Instead of asking "Which report mentioned that insight about pricing?", you can search observations by topic, company, date range, or dimension to instantly surface relevant insights.
  • Context Preservation: While each observation stands alone, source links maintain the full context. You can trace from a single insight back to the complete meeting transcript or update submission.
  • Reduced Waste: Tangential insights that don't fit the current conversation aren't lost. An observation about co-founder dynamics from a traction-focused meeting remains accessible when reviewing team challenges months later.
  • Organizational Learning: Aggregating observations across your portfolio reveals which interventions work, which challenges are universal, and which support patterns correlate with success.

When a company hits a revenue milestone, the metrics show the number. Observations capture why it happened, what challenges they overcame, what support made the difference, and how their experience connects to patterns across your entire portfolio.

Support Engagements: Targeted Interventions

Beyond regular coaching, companies often need focused support on specific challenges. Support Requests formalize this:

  • Problem Definition: Clear articulation of what the company needs help with
  • Support Team: Assignment of coach, subject matter experts, or advisors
  • OKRs & Commitments: Specific, measurable objectives for the engagement
  • Progress Updates: Ongoing commentary and status tracking
  • Outcomes: Observations and results linked to the support provided

This creates accountability and traceability. When you bring in a specialized advisor to help with go-to-market strategy, the support engagement tracks: What was the original challenge? What did we commit to? What actually happened? What was the impact?

Commitments: Learning to Be Less Wrong Over Time

The system tracks two types of commitments, but they serve a deeper purpose than simple accountability. They create an objective framework for measuring team behavior and helping founders systematically improve their decision-making.

Residency Commitments: Leading and Lagging Indicators

Sprint-based commitments operate on the principle of leading versus lagging indicators:

Leading Indicators are the specific, controllable activities that companies commit to:

  • "+5 sales discovery calls"
  • "+5 demo calls"
  • "+5 customer interviews"
  • "Ship feature X to beta users"

These are behaviors the team can directly control and objectively measure. Did they complete 5 discovery calls or not? This binary measurement provides clear insight into team engagement and execution capability.

Lagging Indicators are the outcomes you hope those activities will influence:

  • "+5 paying customers"
  • "20% month-over-month revenue growth"
  • "Achieve product-market fit score of 40+"
  • "Reduce churn by 15%"

These are results that emerge from the leading activities, but aren't directly controllable day-to-day.

The Philosophy: Being Less Wrong Over Time

The goal is not to "get it right" on any given sprint. The goal is to systematically improve the founder's ability to make effective bets. Here's how it works:

  1. Make a Hypothesis: The team commits to specific activities (leading indicators) based on their hypothesis about what will move their metrics (lagging indicators).
  2. Execute and Measure: Over a 2-4 week sprint, the team executes those activities. The system objectively tracks: Did they do what they committed to?
  3. Evaluate the Bet: At sprint review, compare what actually happened to the lagging indicators. Did those 5 discovery calls lead to 2 paying customers as hoped? Or zero? Or 10?
  4. Adjust and Improve: The team's next sprint commitments reflect what they learned. They're not failing when the bet doesn't pay off. They're learning to make better bets.

Over multiple sprints, this creates a data-driven narrative of founder growth. Are they getting better at predicting which activities will move their metrics? Are they learning to set more realistic targets? Are they identifying leverage points in their business model? And, most importantly, are they able to monetize the value they are delivering to the market in an increasingly predictable way?
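A minimal sketch of what this record can look like (the structure, field names, and the expected-impact numbers are illustrative assumptions): each sprint captures the bet, the execution, and the result, so the review works from one objective record and the prediction error can be tracked sprint over sprint.

```python
from dataclasses import dataclass

@dataclass
class SprintCommitment:
    leading_activity: str    # e.g. "+5 sales discovery calls"
    committed: int           # how much the team committed to
    completed: int           # how much they actually did
    lagging_metric: str      # e.g. "new paying customers"
    expected_delta: int      # the team's prediction for the lagging metric
    actual_delta: int        # what actually moved

    def executed(self) -> bool:
        """Binary: did the team do what it said it would do?"""
        return self.completed >= self.committed

    def bet_error(self) -> int:
        """Distance between prediction and reality; shrinking error = less wrong."""
        return abs(self.expected_delta - self.actual_delta)

sprints = [
    SprintCommitment("+5 discovery calls", 5, 3, "paying customers", 2, 0),
    SprintCommitment("+10 demos", 10, 8, "paying customers", 2, 1),
    SprintCommitment("targeted persona outreach", 1, 1, "paying customers", 3, 3),
]
print([s.bet_error() for s in sprints])  # [2, 1, 0] -- the bets are getting better
```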

Support Commitments: Objectives specific to a support engagement, tied to the OKRs defined when targeted support is initiated. These follow the same leading/lagging framework but are scoped to a specific challenge or opportunity.

Why This Matters for Impact Measurement

When leveraged, this approach has the potential to change how your organization measures impact:

  • Objective Behavior Tracking: You're not relying on subjective assessments of "how hard they're working." You have binary data: they completed their commitments or they didn't.
  • Learning Velocity: The real measure of coaching effectiveness isn't whether companies hit their goals. It's whether they're improving their ability to make strategic bets and learn from results.
  • Pattern Recognition: Across your portfolio, you can identify which types of activities most reliably lead to which outcomes, creating institutional knowledge about what works.
  • Coaching Conversations: Sprint reviews become evidence-based discussions: "You committed to X, achieved it, but didn't see movement on Y. What does that tell us about your business model?"

The commitment model doesn't just track what companies are doing. It creates a structured framework for accelerating founder learning and measuring that acceleration over time.

How It All Connects

The power of impact OS comes from how these entities interconnect:

  1. Companies enter the Program and are optionally grouped into Cohorts
  2. People (founders, team members) connect to Companies with designated roles
  3. Coaches are assigned to Companies, creating the primary support relationship
  4. Interactions bring together Companies and People (including coaches) to record touchpoints
  5. Observations are generated from Interactions or Company Updates, categorized by dimension (Team/Traction/Technology)
  6. Support Engagements pair Companies with subject matter expert advisors for targeted skill development, with progress measured against OKRs
  7. Commitments create forward-looking goals that are validated through subsequent Observations and Updates

This interconnected model means you can answer complex questions:

  • Which companies are making progress on team building but struggling with market traction?
  • What interactions and observations preceded a company's breakthrough milestone?
  • How much advisor support has each company received, and from whom?
  • Which coaches are capturing the most observations, indicating deep engagement?
  • What topics appear most frequently in observations for companies that successfully reach $1M ARR?

This doesn't just store data. It creates a knowledge graph of your organization's impact, enabling pattern recognition and organizational learning that would be impossible with disconnected spreadsheets or document folders.
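As an illustration of the first question above (the data shapes here are hypothetical stand-ins, not the impact OS data model), the interconnected entities make this a simple aggregation rather than a manual file review:

```python
from collections import Counter

def team_strong_traction_weak(observations, min_team: int = 5, max_traction: int = 1) -> list[str]:
    """Companies with many recent Team observations but few Traction ones.

    `observations` is assumed to be an iterable of dicts with at least
    'company' and 'dimension' keys.
    """
    by_company: dict[str, Counter] = {}
    for obs in observations:
        by_company.setdefault(obs["company"], Counter())[obs["dimension"]] += 1
    return [
        company
        for company, counts in by_company.items()
        if counts["Team"] >= min_team and counts["Traction"] <= max_traction
    ]
```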

* * *

3. The Impact Cycle: Plan, Execute, Measure, Report

impact OS doesn't impose a single workflow. It orchestrates multiple rhythms happening simultaneously. Daily conversations generate observations. Sprints create learning cycles. Bi-monthly updates capture trends. Monthly reviews ensure quality. Quarterly evaluations inform decisions. These rhythms interlock like gears, each operating at its natural frequency while keeping the entire system aligned around founder progress and organizational learning.

Planning: Multiple Time Horizons, One System

Define Sprint Cycles

This is the fastest-moving gear. Coaches and founders sit together regularly to review what happened and set what's next. The system holds the context from previous sprints, making each conversation build on the last. What activities did we bet on? Did they move the metrics? What does that teach us about where to focus next? The residency commitment becomes the artifact of this conversation: a hypothesis recorded with a target date, waiting to be tested.

The Event-Driven Trigger: Targeted Support

Some planning happens when a specific need emerges. A company hits a wall with their go-to-market approach. The pricing model isn't working. Co-founder tension is affecting execution. These moments trigger support engagement planning, a more formal process where the coach identifies the right expert, defines clear objectives, and establishes a timeline. Unlike regular sprints, these have defined beginnings and endings, with success criteria that determine when the engagement closes.

The Bi-Monthly Pulse: Reflection Points

Every two months, an automated reminder prompts companies to step back and update their metrics. This isn't just data entry. It's a forcing function for reflection. The form is pre-loaded with their previous data, so they only update what's changed. Submitting the update refreshes the KPI dashboard, generates new observations from the qualitative responses, and flags companies that might need attention. This regular heartbeat creates longitudinal data while preventing "update fatigue" from too-frequent requests.

The Portfolio View: Accountability and Allocation

At Volta, we've established a clear performance benchmark: $0 to $1M in recurring revenue in 18 months or less, with a target 20% month-over-month growth rate. Head coaches use the portfolio view to hold coaches accountable for driving toward this goal. Who's tracking toward the growth rate? Who's falling behind? But the system also surfaces nuance. Is a team that's missing the numeric target showing significant improvement in their learning rate? Are they highly coachable and getting demonstrably better at making bets? The head coach balances the quantitative performance data with the qualitative observations to make informed exceptions, ensuring coaches focus resources on teams with genuine trajectory even if current metrics don't tell the full story.
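As an illustrative back-of-the-envelope check on how the two halves of this benchmark relate (not a claim about any particular company): 20% month-over-month growth compounds to roughly 26.6x over 18 months, so a team holding that pace the entire time would need to start from roughly $3K in MRR to reach the ~$83K in MRR that $1M ARR implies.

```python
months = 18
growth = 1.20                    # 20% month-over-month
target_mrr = 1_000_000 / 12      # $1M ARR expressed as monthly recurring revenue

compounding = growth ** months   # ~26.6x over 18 months
starting_mrr = target_mrr / compounding
print(round(compounding, 1), round(starting_mrr))  # 26.6 and roughly 3130
```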

Executing: Where the Real Work Happens

Execution is where coaches earn their value: having difficult conversations, asking probing questions, connecting dots founders can't see themselves. The system's job during execution is simple: stay out of the way, then capture what happened without anyone needing to stop and write it down.

The Daily Flow: Conversations Over Documentation

A coach has three meetings today. In the old model, that meant three hours of meetings plus two hours of note-taking, observation recording, and CRM updates. impact OS inverts this. The coach invites Fireflies to each meeting, then focuses entirely on the founders. The transcript flows into the system. AI reads it, extracts key moments, generates observations, categorizes them by dimension. The coach is still accountable for reviewing the AI-generated content for quality and accuracy, ensuring action items are properly identified, but they're reviewing and refining, not creating from scratch. What would have been 30 to 60 minutes of documentation per meeting becomes 5 to 10 minutes of review and refinement. That time saved goes back into meeting with more companies or thinking strategically about the ones they're supporting.

Focused Interventions: Time-Boxed Expertise

When a specialist comes in (a pricing expert, a sales coach, a technical architect), the engagement has structure: a start date, a timeline, defined objectives. These sessions are logged as they happen, with observations tagged directly to the engagement. The system tracks conversation threads throughout. At the end, surveys run in both directions: the advisor reviews the founders and the founders review the advisor. This creates a holistic reference point for the support engagement and adds context to the startup's larger timeline. The advisor doesn't need to know the whole company history; the system surfaces the relevant context. When the engagement completes, there's a clear record: problem identified, support delivered, outcomes achieved (or lessons learned). This makes specialists more effective because they're not starting from zero each time.

The Bi-Monthly Submission: Capturing What Changed

Founders receive a reminder with a link to the update form. When they open it, the form is pre-loaded with their previous responses: revenue numbers, team size, runway, all the data they submitted last time. Their job is simple: update only what changed. Revenue went from $10K to $15K? Change that number. Team size stayed the same? Leave it. Lost a key team member? Update that field. This delta-based approach means founders spend 5-10 minutes updating what's different, not 20 minutes re-entering everything from scratch.

If a founder doesn't respond to the reminder, the system treats it as "no change" and maintains continuity with their previous data. This ensures longitudinal tracking never breaks while minimizing founder burden. The form itself is customizable per organization, allowing each support organization to ask the questions that matter for their model, but the underlying principle remains: capture the delta efficiently, keep the data flowing.
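A minimal sketch of the delta logic (the field names and function are illustrative assumptions): the submitted changes are merged over the last known record, and a missing submission simply carries the previous record forward.

```python
def apply_update(previous: dict, submitted: dict | None) -> dict:
    """Merge a bi-monthly delta update over the company's last known record."""
    if submitted is None:
        return dict(previous)           # no response: treated as "no change"
    return {**previous, **submitted}    # changed fields win, the rest carries over

last = {"mrr": 10_000, "team_size": 4, "runway_months": 9}
print(apply_update(last, {"mrr": 15_000}))  # revenue updated, everything else preserved
print(apply_update(last, None))             # continuity maintained with previous data
```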

Measuring: The System That Watches Itself

Measurement doesn't happen on report-writing day. It happens every day, in the background, as work gets done. The system is always watching, always aggregating, always building the evidence base that will later become insights.

Continuous Observation Flow

Every interaction automatically generates observations. Every bi-monthly update creates new data points. Every milestone achievement triggers a record. These AI-generated observations are tagged and categorized, building a longitudinal dataset about each company's journey across the three dimensions.

Alongside the automated observations, coaches add their own notes and track emails that capture their perspective: the energy shift in a meeting, the body language that suggests co-founder tension, the confidence that wasn't there three months ago. These coach notes become part of the company's profile, providing the qualitative context and human judgment that complement the structured observations. It's the combination of automated extraction and coach perspective that creates the complete picture.

The Living Timeline

Open any company's profile, and you see its timeline: observations flowing in chronologically, color-coded by dimension (Team, Traction, Technology). Filter to see just one dimension or search across all of them. The visual clustering reveals patterns: lots of traction observations in Q2, heavy team activity in Q3, sparse technology observations suggesting a gap. The timeline doesn't show everything (sprint commitments and milestones live elsewhere), but it shows the accumulation of insights about what's happening with this company and where coaches are focusing attention.

The Learning Measurement

This is where the "less wrong over time" framework becomes visible. The system tracks not just what commitments were made, but what happened to the lagging indicators afterward. Sprint by sprint, the data accumulates:

  • Sprint 1: Committed to 5 discovery calls, completed 3, saw 0 customers added
  • Sprint 2: Committed to 10 demos, completed 8, saw 1 customer
  • Sprint 3: Committed to targeted outreach to specific persona, completed fully, saw 3 customers

The pattern emerges. The team is learning. Their hypothesis-testing is improving. That's measurable growth, captured automatically.

From Individual to Portfolio

While each company has its own timeline, the system aggregates across all of them. Which challenges show up repeatedly this quarter? What's the current distribution across market milestones? Which topics are trending in observations? These portfolio patterns inform where the program should focus energy: maybe there's a common sales challenge that warrants a workshop, or maybe companies hitting their first customer are consistently struggling with pricing six months later. The aggregate view reveals what individual timelines can't.
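A sketch of the aggregation behind these questions (again with assumed, illustrative data shapes): counting observation topics across every company in a quarter surfaces the recurring challenges that no single timeline would reveal.

```python
from collections import Counter
from datetime import date

def trending_topics(observations, start: date, end: date, top_n: int = 5):
    """Most frequent observation topics across the portfolio in a date range.

    Each observation is assumed to be a dict with 'topics' (list of tags)
    and 'observed_on' (date) keys.
    """
    counts = Counter()
    for obs in observations:
        if start <= obs["observed_on"] <= end:
            counts.update(obs["topics"])
    return counts.most_common(top_n)
```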

Reporting: From Data to Decisions

All that continuous measurement exists for one reason: to inform decisions. Different people need different views at different times. The system provides each stakeholder what they need, when they need it, without anyone having to compile reports manually.

The Monthly Pulse: AI-Assisted Analysis

Every month, custom AI agents analyze the accumulated data (both qualitative observations and quantitative metrics) to surface key trends and patterns. These agents help coaches perform analysis that would take hours to do manually, distilling it into a review that takes minutes. Which coaches are deeply engaged? Which companies have gone quiet? Are there patterns suggesting someone needs intervention? The agents also aggregate information for funding stakeholders into simple, easy-to-read reports. What would have been hours of manual report compilation becomes a quick review of AI-synthesized insights, freeing coaches to act on findings rather than spend time gathering them.

The Quarterly Decision: AI-Assisted Evaluation

When it's time to evaluate whether a company continues in the program, custom AI agents analyze the complete picture: progress across the three dimensions, commitment execution patterns, learning trajectory, and observation trends. These agents synthesize months of data into coherent narratives that would take hours for a coach to compile manually. The progress timeline visualizes the journey. The commitment tracking shows execution consistency. The observations provide qualitative context. AI agents distill all of this into evaluation reports that coaches can review in minutes, not hours. But the final judgment remains human. Has this company made sufficient progress? Are they executing and learning fast enough? The AI-assisted analysis informs, but experienced judgment decides.

The Stakeholder View: Reporting at the Right Level of Abstraction

Funding stakeholders need to understand how their investment is driving specific outcomes, but they don't need to see every coaching session or individual company detail. Because impact OS captures foundational data (observations, commitments, metrics, interactions) at the most granular level, it can aggregate and report at whatever level of abstraction each stakeholder needs. Portfolio-level revenue growth. Job creation across all companies. Milestone progression showing the funnel from validation to scale. Success stories backed by actual observation evidence, not anecdotes. The foundational data makes this possible: coaches work at the individual company level, the system aggregates to portfolio level, and stakeholders get proof of impact without drowning in operational details.

The Learning Loop: Being Less Wrong About What Works

The same "less wrong over time" principle that guides how we coach companies applies to how we improve the program itself. We make hypotheses about what support works: Does bringing in a pricing specialist at the right moment accelerate traction milestones? Do companies with certain team dynamics progress faster? Which challenges consistently appear before companies hit specific milestones, and can we proactively address them?

The foundational data lets us test these hypotheses across the portfolio, not just with individual companies. We see patterns that would be invisible in isolated coaching relationships. When we adjust our model (allocate resources differently, structure support in a new way, emphasize certain topics earlier), we can measure whether it actually correlates with better outcomes. We're not just helping companies learn faster; we're learning faster ourselves about what accelerates company growth. The system that captures their journey captures ours too.

The Interlocking Gears

Sprint cycles run weekly. Updates flow bi-monthly. Monthly QA catches issues. Quarterly evaluations make decisions. Annual reviews assess program effectiveness. None of these rhythms exists in isolation; they're gears in a machine designed to keep companies moving forward while building institutional knowledge. Daily conversations feed weekly sprints. Bi-monthly updates inform quarterly evaluations. Monthly patterns shape annual strategy.

The system doesn't impose one rhythm; it orchestrates many, each happening at its natural frequency, all working together to ensure that the most valuable thing (human connection between coaches and founders) happens as much as possible, while the evidence of that connection's impact accumulates automatically.

* * *

4. Adopting impact OS: Key Considerations

This represents a significant shift in how support organizations operate. The software is an enabler, but the critical work is aligning your team around the changes needed for successful implementation. It's not a software project; it's an ongoing commitment to understanding and improving your impact. The following sections outline key considerations drawn from our experience.

The Prerequisites: What You Need Before You Start

Dissatisfaction as the Driver for Change

The most critical prerequisite isn't technical; it's organizational urgency. Leadership must be genuinely dissatisfied with their current ability to report on impact. This isn't about judgment of quality; many excellent programs operate without this level of measurement. But adopting this approach requires significant change management, and that change won't stick without real motivation.

If your organization is comfortable with current impact reporting methods, this isn't the right time. If leadership sees understanding impact as a high priority but current methods fall far short, you have the fuel needed to drive adoption.

The Right Coaches: Impact-Oriented, Not Process-Followers

This system is predicated on having individuals, on staff or contract, who have done what founders are trying to do. They understand the nuances of bringing a novel product to market because they've lived it. The approach has been designed with this execution strategy in mind.

Critically, successful adoption requires coaches who want high accountability and genuinely care whether their support makes a difference. In our experience, trying to enforce process compliance on coaches who don't already value impact measurement produces poor results. The right coaches already track their impact informally; this system just makes it systematic and scalable.

Look for coaches who ask questions like: "Did that intervention actually help?" "What patterns am I seeing across my companies?" "How do I know my advice is working?" These are the people who will thrive with this approach.

Cultural Commitment to Continuous Improvement

This is more work than simply running a program, but it's a different kind of work. The shift moves from logistics and delivery-output focus to strategic and intentional effort with high levels of accountability. Organizations that succeed have:

  • Cultural acceptance of continuous improvement: The willingness to question "is this working?" and adjust based on evidence
  • Transparency as a core value: With coaches about why you're recording, with founders about how you use their data, with funders about what the data shows
  • Long-term thinking: Understanding that building this capability is an evolution, not a one-time implementation

Without these cultural fundamentals, organizations will struggle. The tools can improve your awareness of impact, but the tools alone won't be sufficient.

Technical Capacity (Or Willingness to Build It)

The technical landscape is changing daily, making this more accessible than ever. That said, it helps to have team members who understand how AI and automation work, mainly because systems like this continue to evolve based on the needs of the teams you're supporting.

If you don't have this capacity today, view adoption as an opportunity to develop new skills within your team. What we've outlined in this document wasn't possible 12 months ago. Even if today isn't the right time, organizations should start planning for 3-6 months out, or risk losing relevance as these capabilities become table stakes for startup founders choosing accelerators.

Building Trust: Overcoming Resistance

The Recording Conversation

When we first introduced Fireflies recording to our sessions, there was natural apprehension. Two things made the difference:

  1. Clear articulation of "why": From day one, we explained that recording wasn't about surveillance. It was about freeing coaches to focus on founders rather than note-taking. It was about optimizing our resources to impact founders most effectively.
  2. Data stewardship, not ownership: We made it clear that we treat the data as the founders' data, of which we are stewards. Privacy and security are paramount. This transparency about data handling addressed the legitimate concerns founders had.

The resistance largely evaporated once coaches and founders experienced the benefit: better conversations, less administrative burden, more actionable insights.

The Cultural Shift Challenge

The harder challenge is internal. Adopting this approach culturally within your organization requires moving from logistics-focused delivery to strategic, accountable impact work. This means:

  • Coaches spending less time on coordination, more on analysis and strategic thinking
  • Leadership asking different questions: not "how many hours of coaching?" but "what patterns predict success?"
  • Team members accepting that some of their work will be evaluated objectively, with data

Organizations with a performance culture that values accountability will find this energizing. Organizations where process compliance matters more than outcomes will struggle.

The Implementation Path: Starting Simple, Building Systematically

Phase 0: Alignment (Before Technical Work Begins)

Before adopting any tools, get crystal clear on:

  1. The three-audience question framework: What do you need to answer for founders? For your internal team? For funding stakeholders? Write these down. Make them specific.
  2. Your success criteria: What does "working" look like at 90 days? 6 months? 1 year? For us, it's $0-$1M ARR in 18 months with 20% MoM growth, but your criteria might differ. The key is defining it explicitly.
  3. Cultural readiness: Have honest conversations with your team. Are coaches excited about this or resistant? Is leadership willing to invest in building this capability over time?

Phase 1: The Foundation (Months 1-3)

Start with the building blocks:

  1. Adopt Fireflies or equivalent: This is the foundational data source. At ~$30/month/seat, it's accessible and has well-documented APIs. Every coaching interaction starts getting recorded and transcribed. This alone will change how coaches work: they'll focus on conversation quality rather than note-taking.
  2. Implement a system of record for founder data: You need a place to collect both qualitative and quantitative data from founders. The bi-monthly update structure we described works well, but adapt the questions to your context and the three audiences you defined in Phase 0.
  3. Start capturing interactions manually if needed: Even if you're not automating observation generation yet, recording who met with whom, when, and about what creates the foundation for later pattern analysis.

Success indicator at 90 days: Coaches are consistently using Fireflies. Founders are submitting bi-monthly updates with >70% response rate. You have 3 months of interaction data accumulated.

Phase 2: Intelligence Layer (Months 4-9)

With foundational data flowing, add intelligence:

  1. Automate observation generation: Use AI to extract insights from transcripts and updates. Start simple: even basic categorization by dimension (Team/Traction/Technology) adds value.
  2. Build your first analysis agents: Create custom agents to answer the specific questions you defined in Phase 0. Don't build dashboards yet; build agents that can query the data and synthesize answers.
  3. Pilot the commitment tracking: With 2-3 coaches, implement the leading/lagging indicator sprint framework. Learn what works before rolling out portfolio-wide.

Success indicator at 6 months: Coaches are spending 50% less time on documentation. You can answer at least 3 of your key questions using accumulated data. One or two coaches are successfully using commitment tracking to show founder learning velocity.

Phase 3: Portfolio Intelligence (Months 10-18)

Now you can start seeing patterns:

  1. Expand commitment tracking: Roll out to all coaches who are ready (remember: only those who value accountability will succeed with this).
  2. Build portfolio-level analysis: With 12+ months of data, you can start seeing what correlates with success. Which interventions work? Which challenges predict stalling? What's the typical journey from milestone to milestone?
  3. Implement stakeholder reporting: Generate the reports your funding stakeholders need, drawing from the rich foundation of data you've built.

Success indicator at 12-18 months: You can confidently answer your Phase 0 questions. Coaches are making better decisions based on data. You can demonstrate program impact with evidence, not anecdotes. You're starting to be "less wrong" about what works as a support organization.

Questions You'll Be Able to Answer

For Founders: Helping Them Be Objectively Less Wrong

The core question is: "Is your theory about market value objectively resonating, or is the data showing you're more wrong than right?"

Founders have natural conviction bias: they must believe in their unlikely success to persist through challenges. This makes it easy for them to discount contradictory signals. Your role is to help them evaluate evidence as objectively as possible.

Success looks like: Sales and growth trajectories accelerating because their theory aligns with unmet market needs. You helped them see when to pivot, when to persist, and which aspects of their approach to adjust.

The system should help you show founders: "You committed to X activities. You executed them. Here's what moved and what didn't. What does that tell us about your theory?"

For Internal Staff: Understanding Support Efficacy

Key questions to answer:

  • What specific support increases the odds of teams accelerating their learning? Which interventions correlate with faster milestone progression? Which support engagements show the strongest impact?
  • Are there trends in team profiles that predict coachability? Can you identify early signals that a team will leverage coaching effectively versus struggle to implement advice?
  • Are teams objectively becoming less wrong over time? Can you measure learning velocity to decide whether continued investment makes sense?

For Volta, this crystallized into: $0-$1M ARR in 18 months with 20% MoM growth. Your criteria might differ. The key is having clarity so you can confidently decide which teams to invest in and, equally important, which teams to gracefully exit from your program.

Quality over quantity requires clear criteria and the courage to act on data.

For Funding Stakeholders: Evolving Economic Development Metrics

Funding stakeholders need to understand: "How is the investment we've made driving specific outcomes?"

Historically, economic development focused on jobs created and capital raised as lagging indicators of success. AI is flipping this profile: successful companies now require less capital and fewer people to achieve the same revenue outcomes.

The questions you need to answer:

  • How are success metrics changing with technology? Help government and funders understand that traditional measures may no longer capture value creation accurately.
  • What's the correlation between support delivery and company outcomes? Show the causal chain from intervention to result, not just aggregate statistics.
  • How does this portfolio compare to benchmarks? Provide context that helps stakeholders evaluate performance relative to industry standards that are themselves rapidly evolving.

Most importantly: Bring objective data that helps funders ask better questions and make better decisions. Your role isn't just reporting; it's helping evolve what performance measurement looks like in a rapidly changing landscape.

Common Pitfalls and How to Avoid Them

Pitfall 1: Building Everything at Once

We've built countless features that turned out not to be valuable. The temptation is to imagine the complete system and try to build it all before going live.

How to avoid: Start with the questions you need to answer, build only what's required to answer them, then iterate based on actual usage. The system is only as good as the data put into it; focus on getting high-quality foundational data before adding sophistication.

Pitfall 2: Treating This Like a CRM Implementation

This isn't a software project with a defined scope, timeline, and go-live date. It's a commitment to continuous improvement of your understanding.

How to avoid: Frame adoption as building organizational capability over time, not implementing a system. Invest in team members learning to use these tools, not just in the tools themselves.

Pitfall 3: Insufficient Coach Buy-In

If coaches see this as surveillance or administrative burden rather than a tool that makes them more effective, it will fail.

How to avoid: Assess cultural alignment before starting. Do you have coaches who thrive on accountability and want to see their impact improve? If not, either develop that culture first or reconsider adoption. Forced process compliance produces poor data and resentful teams.

Pitfall 4: Process Over Outcome

Getting so focused on "using the system correctly" that you lose sight of why you're using it: to understand and improve impact.

How to avoid: Regularly revisit the questions you're trying to answer. If a process isn't helping you answer those questions better, change or eliminate it. The methodology should serve the mission, not become the mission.

Pitfall 5: Dashboards Before Data

Building beautiful visualizations before you have foundational data flowing consistently.

How to avoid: Spend your first 6 months ensuring data quality and completeness. Only then build analysis layers. A simple agent that can query good data is more valuable than a sophisticated dashboard built on inconsistent data.

The Long Game: Evolving Your Own Understanding

At the heart of what makes this approach successful is the same principle we apply to companies: being less wrong over time.

You won't get the implementation right on the first try. You'll build features you don't use. You'll define metrics that turn out not to matter. You'll discover that patterns you thought were causal were merely correlational.

That's not failure, that's learning. The commitment required is not just to build a system, but to continuously evolve your own understanding of what accelerates company growth.

With the right cultural foundation (dissatisfaction with current methods, coaches who value impact, commitment to transparency and continuous improvement) you have the potential to succeed. Without it, the effort required is unlikely to produce returns.

The choice to adopt this approach should be made with full awareness: it's a long-term evolution of how your organization operates, not a project with a neat beginning and end. If that resonates with where your organization is headed, the insights this enables can fundamentally transform your impact on the founders you serve.

* * *

A Call to Collaboration

At Volta, we know we're only scratching the surface of what's possible. The methods and tools outlined in this document represent our current understanding, but AI is evolving daily, and with it, what's possible for measuring and improving support organization impact.

We believe these capabilities are becoming critical for our organizations to remain relevant, both in the eyes of ambitious founders choosing where to build their companies and of funders deciding where to allocate resources for economic development.

This is a call to action for leaders of incubators and accelerators who see AI as an enabler of understanding impact and want to evolve their organizations accordingly. Whether you're considering adopting these approaches, have built similar systems, or are exploring different methods entirely, we welcome collaboration.

The challenges we face (demonstrating impact, helping founders learn faster, evolving economic development metrics) are shared across our entire ecosystem. We're stronger when we learn together.

If this framework resonates with your organization's direction and you're interested in collaborating on the continuous evolution of these methods and tools, we'd welcome the conversation.

Matt Cooper
CEO @ Volta