Notes from the Jagged Frontier
01
The Production Readiness Stack
Four layers, all necessary, none optional

Teams build AI systems top-down, starting with the use case and the application. Systems fail bottom-up, starting with infrastructure assumptions nobody stress-tested during the pilot. The Production Readiness Stack names each layer and makes the dependency explicit.

L4 · Application & Industry Workflows
Where value is delivered. Where most teams start. Where most pilots live permanently.

L3 · Data & Retrieval Layer
Context, lineage, freshness, trust fabric. The connective tissue between AI and the system of record.

L2 · Governance & Trust Platform
Identity, authority, audit. Engineered in from the start — not bolted on after the first finding.

L1 · Infrastructure
Reliability, latency, data residency. Non-negotiables that determine what can run at all.

The production readiness gap almost always opens between layers: a governance assumption the infrastructure cannot support, a retrieval design that skips identity enforcement, an application scoped without knowing where the execution boundary sits. Build all four before you ship any one.

See the full essay: The Production Readiness Gap →

02
The Execution Boundary
Where model output meets humans, rules, and systems of record

Every enterprise AI workflow has an execution boundary — the point where the model's output hits a human who must approve, a rule that enforces a constraint, or a system that owns identity, permissions, lineage, or SLAs. This boundary is not optional. It determines the architecture.

Event → Model → Rules / Constraints → System / Human → Action → New Event

AI can recommend, classify, draft, route, and summarize. It almost never completes an enterprise workflow end-to-end. Companies that go from pilot to production understand this early — they design around the execution boundary, not around the model.
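The loop above can be sketched as a minimal pipeline in which the model only proposes, and deterministic rules plus an approver gate the action. All names here (the model, rule, and approver callables) are illustrative assumptions, not a reference implementation.

```python
from typing import Callable, Optional

def run_workflow(event: dict,
                 model: Callable[[dict], dict],
                 rules: list[Callable[[dict], bool]],
                 approve: Callable[[dict], bool]) -> Optional[dict]:
    """Event -> Model -> Rules -> System/Human -> Action.
    The model drafts; it never executes on its own."""
    draft = model(event)                    # model output is a proposal
    if not all(rule(draft) for rule in rules):
        return None                         # deterministic constraints gate first
    if not approve(draft):                  # system of record / human owns the action
        return None
    return draft                            # only now does an action fire
```

The design point is that the model sits one box upstream of the execution boundary: swapping in a better model changes the draft, never the gate.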

Healthcare (Edge · Federated)
Clinical review — AI flags, human owns the action. PHI never leaves the boundary.

Finance / Trading (Deterministic core · HitL)
Every AI-influenced decision must be explainable, reproducible, and auditable.

Manufacturing (Edge · Safety rules)
Edge latency and physical safety gate every actuator action. PLCs control all timing.

Enterprise ERP (Governance fabric · HitL)
SOX, identity, transactional consistency, approval workflows override all model output.
03
Enterprise AI Operating Model
The system that turns AI capability into production outcomes

Models and tools are not the limiting factor in enterprise AI anymore. The constraint is the operating system built around them: capital allocation, platform infrastructure, production workflows, telemetry, adoption systems, and ecosystem leverage. Four components are required — and all four must be working simultaneously.

01 — Telemetry & Intelligence
Connect usage to workloads
Product usage data linked directly to customer workloads. Visibility into which services drive growth and which customers have high expansion probability. Without telemetry, you are guessing.
02 — Organizational Alignment
Shift incentives to adoption
Incentives aligned to adoption milestones, not bookings. Teams trained to architect workload chains, not just deploy first instances. The compensation model has to point at the same metric as the product roadmap.
03 — Repeatable Implementation
Reference architectures at scale
Standardized solution playbooks and reference architectures that let customers move from proof of concept to production without starting from scratch each time. Repeatability is what makes the economics work.
04 — Operational Cadence
Weekly workload growth reviews
Right metrics identified, executive alignment achieved, and a consistent review cadence focused entirely on workload growth. The cadence creates the discipline. The discipline creates the compounding.

See the full essay: From Consumption to Outcomes →

04
Capacity → Consumption Yield Model
Capital allocation against revenue conversion velocity

Enterprise AI infrastructure planning typically starts from the supply side: how much capacity do we need, where do we deploy it, how fast do we build? The right question is different: where will deployed capacity generate durable, compounding revenue yield within an economically rational time horizon?

Capital Yield Formula
Capital Yield = (Projected 5-Year Regional Recurring Revenue) ÷ (Deployed Infrastructure CapEx)
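A minimal worked example of the formula, using hypothetical figures (a region projected at $450M of 5-year recurring revenue against $300M of deployed CapEx):

```python
def capital_yield(projected_5yr_recurring_revenue: float,
                  deployed_capex: float) -> float:
    """Capital Yield = projected 5-year regional recurring revenue
    divided by deployed infrastructure CapEx (same currency units)."""
    return projected_5yr_recurring_revenue / deployed_capex

# Hypothetical region: $450M projected recurring revenue over five years
# against $300M of deployed infrastructure CapEx.
print(capital_yield(450e6, 300e6))  # 1.5
```

A yield above 1.0 means the region is projected to return more recurring revenue than the capital deployed over the horizon; below 1.0, the deployment is a subsidy.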

Five factors determine the yield of any infrastructure deployment — and all five must be assessed before capital is committed.

1
Revenue Conversion Velocity
Speed at which deployed capacity converts into recognized recurring revenue. Tracks the lag between infrastructure availability and production workload adoption.
2
Pipeline Maturity Index
Validated late-stage enterprise demand in the target region. Anchors deployment decisions to revenue-backed demand rather than speculative TAM.
3
Regulatory & Data Sovereignty Trajectory
Compliance lead times and architecture localization cost. Prevents capital from being trapped behind compliance bottlenecks.
4
Latency & Workload Fit
Whether regional demand requires high-performance AI accelerators or standard infrastructure. Protects gross margin by aligning infrastructure cost to workload economics.
5
Competitive Density & Ecosystem Readiness
Integrator presence, field enablement strength, competitor saturation. Determines whether the region can generate yield immediately or requires subsidy.
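The five-factor assessment can be sketched as a simple screening function. The 0-to-1 scores and the equal weighting are illustrative assumptions, not a calibrated model; the one rule taken from the text is that every factor must be assessed before capital is committed.

```python
# The five factors, in the order given above. Scores are assumed to be
# normalized to [0, 1]; the equal weighting is an illustrative choice.
FACTORS = [
    "revenue_conversion_velocity",
    "pipeline_maturity",
    "regulatory_trajectory",
    "latency_workload_fit",
    "ecosystem_readiness",
]

def deployment_score(scores: dict[str, float]) -> float:
    """Average the five factor scores; refuse to score a deployment
    unless all five factors have been assessed."""
    missing = [f for f in FACTORS if f not in scores]
    if missing:
        raise ValueError(f"all five factors must be assessed: {missing}")
    return sum(scores[f] for f in FACTORS) / len(FACTORS)
```

Raising on a missing factor, rather than defaulting it to zero, encodes the discipline the section argues for: an unassessed factor is an unanswered question, not a low score.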
05
Constraint Augmentation Matrix
Match the architecture to the constraint — industry by industry

The right AI architecture is the one that survives your hardest constraint. Different industries hit different walls first. This matrix maps the primary constraint in each domain to the augmentation paths that actually solve for it.

Vertical | Primary Constraint | Edge AI | Accelerators | Neuromorphic | Sparsity
Enterprise ERP / Supply Chain | Real-time optimization, integration, trust | High | High | Emerging | Medium
Healthcare — Diagnostics | Privacy, PHI, latency | High · Pivotal | Medium | Emerging | Medium
Healthcare — Genomics | Data volume, compute complexity | Low | High | Low | High
Finance / Trading | Ultra-low latency, determinism, auditability | High · Co-location | High · FPGAs | Experimental | Medium
Manufacturing / Robotics | Millisecond timing, safety, edge | High · Mandatory | High | High · Emerging | Medium
Recommendation Systems | Scale, real-time personalization | Medium | High · Custom | Low | High · Native
Telecom — 5G / 6G | Network complexity, real-time QoS | High · MEC | Medium | High · Emerging | Medium

See the full research: Beyond the Ceiling — Whitepaper →