FFI Standard for AI‑Native Companies
Financial infrastructure requirements for companies whose operations are fundamentally structured around AI inference, agent workflows, or AI‑leveraged service delivery.
An AI‑Native company is one whose operations are fundamentally structured around artificial intelligence inference, AI agent workflows, or AI‑leveraged service delivery, with a human team that is small relative to the company's revenue or output capacity. The defining financial characteristics are infrastructure costs dominated by computational resource consumption rather than headcount, non‑linear cost‑to‑scale relationships in which marginal cost per unit of output may decline materially at scale, and unit economics that are not adequately represented by prior company type frameworks.
Cost of goods sold for AI‑Native companies must present computational inference costs as a distinct sub‑line within the cost of goods sold category (Book 1, Section 1.1). This separation is required because inference cost is the primary variable cost and the figure that must be tracked against usage volume to assess unit economics accurately.
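A minimal sketch of this separation, using a simple Python representation; the account names and dollar figures below are hypothetical illustrations, not categories or values prescribed by Book 1:

```python
from dataclasses import dataclass

@dataclass
class CostOfGoodsSold:
    """COGS with computational inference broken out as its own sub-line."""
    inference_compute: float    # variable cost of model inference (API fees or GPU time)
    hosting_and_storage: float  # other infrastructure costs of serving the product
    third_party_services: float
    support_delivery: float

    def total(self) -> float:
        return (self.inference_compute + self.hosting_and_storage
                + self.third_party_services + self.support_delivery)

    def inference_share(self) -> float:
        """Fraction of COGS driven by inference, to be tracked against usage volume."""
        return self.inference_compute / self.total()

# Hypothetical monthly figures, in dollars.
cogs = CostOfGoodsSold(
    inference_compute=42_000,
    hosting_and_storage=6_500,
    third_party_services=3_000,
    support_delivery=4_500,
)
print(f"Total COGS: ${cogs.total():,.0f}")
print(f"Inference share of COGS: {cogs.inference_share():.0%}")
```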
Variable costs such as inference compute must be modeled as rates per unit of output, not as fixed monthly amounts. Modeling inference costs as a fixed cost obscures the relationship between usage growth and cost growth, which is the central financial risk of an AI‑Native business (Book 2, Section 2.5).
The unit of output must be defined explicitly: it may be the inference call, the output delivered, the active seat, or another usage‑volume metric. The definition must reflect the pricing model of the business and be applied consistently across all periods (Book 2, Section 2.2).
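A sketch of both requirements, assuming the unit of output is defined as a single inference call and assuming a hypothetical rate per call; neither choice is set by the Standard. The variable cost is projected as a rate times usage volume rather than a fixed monthly amount, so usage growth flows directly into cost growth:

```python
# Unit of output, defined explicitly and applied consistently across periods.
UNIT_OF_OUTPUT = "inference_call"   # could instead be output delivered, active seat, etc.
INFERENCE_COST_PER_UNIT = 0.0042    # hypothetical dollars per inference call

def projected_inference_cost(monthly_units: int,
                             cost_per_unit: float = INFERENCE_COST_PER_UNIT) -> float:
    """Variable inference cost modeled as rate x usage volume, not a fixed amount."""
    return monthly_units * cost_per_unit

# Cost projection at several usage volumes.
for monthly_units in (1_000_000, 5_000_000, 25_000_000):
    cost = projected_inference_cost(monthly_units)
    print(f"{monthly_units:>12,} {UNIT_OF_OUTPUT}s -> ${cost:>12,.2f} inference cost")
```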
Lifetime value to customer acquisition cost ratio norms for AI‑Native companies are not yet established with sufficient market data to state as benchmarks in Beta v0.5. The company must calculate and track the ratio using the defined methodology and document the assumptions underlying its inference cost projections (Book 2, Section 2.2).
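Even without benchmarks, the ratio itself can be tracked. The sketch below uses a generic lifetime value and acquisition cost construction with hypothetical inputs; the authoritative methodology is the one defined in Book 2, Section 2.2, and the gross margin input should already reflect the documented inference cost assumptions:

```python
def lifetime_value(arpa_monthly: float, gross_margin: float, monthly_churn: float) -> float:
    """Generic LTV: margin-adjusted revenue per account over expected customer lifetime."""
    expected_lifetime_months = 1.0 / monthly_churn
    return arpa_monthly * gross_margin * expected_lifetime_months

def cac(sales_marketing_spend: float, customers_acquired: int) -> float:
    """Customer acquisition cost: acquisition spend divided by customers acquired in the period."""
    return sales_marketing_spend / customers_acquired

# Hypothetical inputs.
ltv = lifetime_value(arpa_monthly=400.0, gross_margin=0.62, monthly_churn=0.03)
acquisition_cost = cac(sales_marketing_spend=180_000.0, customers_acquired=60)
print(f"LTV: ${ltv:,.0f}  CAC: ${acquisition_cost:,.0f}  LTV:CAC = {ltv / acquisition_cost:.1f}")
```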
Gross margin for an AI‑Native company at Growth Stage may be materially lower than the margin the same company achieves at scale, once it has developed more efficient inference capability. The cost structure model must make this trajectory explicit by modeling the relationship between usage volume and marginal inference cost (Book 6, Section 2.5).
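One way to make the trajectory explicit is to model marginal inference cost as a declining function of usage volume and compare gross margin at Growth Stage and at-scale volumes. The tiered cost schedule and all figures below are illustrative assumptions, not values the Standard prescribes:

```python
def marginal_inference_cost(monthly_units: int) -> float:
    """Hypothetical schedule: marginal cost per unit declines at higher usage volume."""
    if monthly_units < 5_000_000:
        return 0.0050   # Growth Stage: less efficient inference, list pricing
    if monthly_units < 50_000_000:
        return 0.0030   # committed-use discounts, batching, caching
    return 0.0018       # at-scale efficiency (distillation, dedicated capacity)

def gross_margin(price_per_unit: float, monthly_units: int, fixed_serving_cost: float) -> float:
    """Gross margin at a given usage volume under the cost schedule above."""
    revenue = price_per_unit * monthly_units
    cogs = marginal_inference_cost(monthly_units) * monthly_units + fixed_serving_cost
    return (revenue - cogs) / revenue

PRICE_PER_UNIT = 0.0080         # hypothetical revenue per unit of output
FIXED_SERVING_COST = 5_000.0    # hypothetical non-inference COGS per month

for label, units in (("Growth Stage", 2_000_000), ("At scale", 80_000_000)):
    print(f"{label:>12}: gross margin {gross_margin(PRICE_PER_UNIT, units, FIXED_SERVING_COST):.0%}")
```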
AI‑Native companies may issue token warrants, future token agreements, or equity instruments tied to usage milestones or model performance thresholds alongside conventional equity. Each such instrument must be reflected in the fully diluted cap table with its terms documented precisely, including conversion mechanisms, trigger conditions, and the range of possible dilutive outcomes (Book 3, Section 3.1).
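A minimal sketch of carrying such instruments on a fully diluted basis, with trigger conditions documented and the range of dilutive outcomes computed; the holders, share counts, and trigger conditions are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ContingentInstrument:
    """An instrument (e.g., token warrant or milestone-tied grant) with a documented trigger."""
    holder: str
    trigger_condition: str
    shares_if_triggered: int   # equity-equivalent shares issued if the trigger is met

# Baseline fully diluted shares: issued equity plus option pool (hypothetical).
BASELINE_FULLY_DILUTED = 10_000_000

instruments = [
    ContingentInstrument("Infra partner", "usage milestone: 100M inference calls/month", 400_000),
    ContingentInstrument("Research team", "model performance threshold on eval suite", 250_000),
]

def fully_diluted(triggered: list[bool]) -> int:
    """Fully diluted share count under a given combination of trigger outcomes."""
    extra = sum(inst.shares_if_triggered for inst, hit in zip(instruments, triggered) if hit)
    return BASELINE_FULLY_DILUTED + extra

# Range of possible dilutive outcomes: no triggers met vs. all triggers met.
low, high = fully_diluted([False, False]), fully_diluted([True, True])
print(f"Fully diluted shares: {low:,} (no triggers) to {high:,} (all triggers)")
print(f"Incremental dilution at the high end: {(high - BASELINE_FULLY_DILUTED) / high:.1%}")
```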
Revenue and ARR multiples for AI‑Native companies are not yet established with sufficient consistency to state as benchmarks in Beta v0.5. The wide variance in observed multiples reflects market uncertainty about the long‑term unit economics of businesses whose cost structure is dominated by inference compute costs. The company must document the specific peer set characteristics and adjustments applied, with explicit acknowledgment of the limitations of available comparable data (Book 4, Section 4.3).
For decisions about model training investment or compute infrastructure scaling, the strategic decision model must include a compute cost projection showing the relationship between the investment and the expected improvement in model performance or cost efficiency, the timeline to that improvement, and the financial return at current and projected usage scale. A scenario in which the expected improvement is not achieved must be modeled (Book 6, Section 6.2).
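A sketch of the required projection with hypothetical figures: the investment, the expected cost efficiency improvement and its timeline, the financial return at current and projected usage scale, and a downside scenario in which the improvement is not achieved. The figures and scenario names are illustrative assumptions, not inputs the Standard prescribes:

```python
def annual_inference_savings(monthly_units: int, cost_per_unit: float, efficiency_gain: float) -> float:
    """Annual inference cost avoided if cost per unit falls by `efficiency_gain`."""
    return monthly_units * cost_per_unit * efficiency_gain * 12

# Hypothetical decision inputs.
TRAINING_INVESTMENT = 900_000.0   # one-time model training / infrastructure spend
CURRENT_COST_PER_UNIT = 0.0040    # dollars per inference call today
MONTHS_TO_IMPROVEMENT = 6         # timeline until the trained model is in production

scenarios = {
    "projected": {"efficiency_gain": 0.35, "monthly_units": 20_000_000},  # improvement achieved, usage grows
    "current":   {"efficiency_gain": 0.35, "monthly_units": 8_000_000},   # improvement achieved, usage flat
    "downside":  {"efficiency_gain": 0.00, "monthly_units": 8_000_000},   # improvement not achieved
}

print(f"Investment: ${TRAINING_INVESTMENT:,.0f}; improvement expected in {MONTHS_TO_IMPROVEMENT} months")
for name, s in scenarios.items():
    savings = annual_inference_savings(s["monthly_units"], CURRENT_COST_PER_UNIT, s["efficiency_gain"])
    payback = "n/a" if savings == 0 else f"{TRAINING_INVESTMENT / savings:.1f} yrs"
    print(f"{name:>9}: annual savings ${savings:>10,.0f}, simple payback {payback}")
```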