# OpenOva Business Strategy > **Status:** Living Document | **Last Updated:** 2026-04-28 > > This document defines OpenOva's business positioning, product strategy, revenue model, competitive landscape, and go-to-market approach. --- ## Table of Contents 1. [Executive Summary](#1-executive-summary) 2. [Vision & Mission](#2-vision--mission) 3. [The Problem](#3-the-problem) 4. [Our Solution](#4-our-solution) 5. [Product Family](#5-product-family) 6. [Service Portfolio](#6-service-portfolio) 7. [Target Market](#7-target-market) 8. [Persona-Based Value Propositions](#8-persona-based-value-propositions) 9. [Competitive Landscape](#9-competitive-landscape) 10. [Business Model & Pricing](#10-business-model--pricing) 11. [Go-to-Market Strategy](#11-go-to-market-strategy) 12. [The OpenOva Expert Network](#12-the-openova-expert-network) 13. [Migration Program](#13-migration-program) 14. [ROI & Total Cost of Ownership](#14-roi--total-cost-of-ownership) 15. [Community & Ecosystem](#15-community--ecosystem) 16. [Growth Roadmap](#16-growth-roadmap) --- ## 1. Executive Summary We are in an AI gold rush. The companies that extract the most value are the ones with AI-native infrastructure — not AI bolted onto legacy platforms after the fact. OpenOva is an AI-native infrastructure platform. 56 open-source components on Kubernetes, every one designed to be AI-manageable. Our AI brain — Specter — has pre-built semantic knowledge of every CRD schema, integration dependency, failure mode, health check, upgrade path, and compliance mapping across the entire ecosystem. It doesn't dump logs into an LLM. It sends surgical, structured context. Faster, cheaper, more accurate than anything bolted on. Cloud-native is the foundation. AI-native is the differentiator. We serve organizations that want the economics and freedom of open source with the confidence of AI-powered operations and enterprise support. 
Our customers range from banks navigating PSD2 compliance to enterprises breaking free from proprietary platforms like OpenShift, VMware, and Datadog. **What makes us different:** - **AI-native, not AI-bolted.** Specter has pre-built semantic knowledge of the entire 56-component ecosystem — structured CRDs, unified telemetry, declarative GitOps. Token-efficient operations that are 10x faster and cheaper than competitors dumping raw context into LLMs. - **Turnkey ecosystem, not a single tool.** 56 curated open-source components tested and certified to work together. What takes 2-3 years to build internally, we deploy instantly. - **Consultancy and platform from day one.** We meet customers where they are — whether they need a guide, a platform, or both. - **Open source, genuinely.** Blueprints are free forever. We charge for support, managed services, and expertise — not for access to code. - **56 open-source disciplines, one relationship.** Our expert network spans PostgreSQL, Kafka, Cilium, Keycloak, AI/ML, and dozens more — all under one support contract. --- ## 2. Vision & Mission ### Vision Become the definitive AI-native infrastructure platform for the open-source cloud-native ecosystem — where any organization can deploy, operate, and evolve their infrastructure with AI agents that understand every component. ### Mission Give every organization AI-native infrastructure: curated open-source blueprints managed by AI agents with pre-built semantic knowledge, backed by world-class human expertise. ### Principles | Principle | What It Means | |-----------|---------------| | **Open source is non-negotiable** | Blueprints are free forever. We don't create lock-in through code. | | **Confidence, not complexity** | We sell peace of mind, not more tools to manage. | | **Journey partnership** | We walk with customers from first cluster to production-grade platform. | | **Convergence over components** | The value is in 56 components working together, not any single one. 
| | **AI-native, not AI-bolted** | Every component is designed to be AI-manageable. Specter is built in from day one, not added as a feature. | | **Token efficiency is economic advantage** | Structured CRDs + unified telemetry = surgical AI context. 10x fewer tokens than competitors dumping raw logs. | | **Authenticity** | No corporate buzzwords, no sugar coating. Open source ethos in everything we do. | --- ## 3. The Problem ### 3.1 The AI-Native Infrastructure Gap Most organizations face a fundamental gap between what they want and what they can achieve: | What They Want | What They Actually Get | |----------------|----------------------| | "We want Kubernetes" | A cluster nobody knows how to operate | | "We want open source" | 40+ CNCF projects with no integration story | | "We want to avoid vendor lock-in" | 3-year OpenShift contracts they can't escape | | "We want cloud-native" | Lift-and-shift VMs onto K8s | | "We want observability" | Prometheus with 500 unread alerts | | "We want security" | A compliance checkbox, not actual security posture | | "We want AI capabilities" | A proof of concept that never reaches production | | "We want AI-powered operations" | A chatbot bolted onto their monitoring dashboard | | "We want autonomous infrastructure" | Alert fatigue and manual runbooks | ### 3.2 The Three Root Causes **1. They don't know how to start.** The CNCF landscape has 1,000+ projects. Analysis paralysis is real. Every vendor claims to be the answer. Internal teams lack the experience to evaluate, select, and architect a coherent platform from these options. **2. They can't operate the integrated whole.** Even organizations that successfully pick technologies fail at making them work together. Security gaps between components. No tested disaster recovery. Upgrades that break integrations. Alert storms that nobody understands. The platform becomes a liability instead of an accelerator. **3. 
The waiting cost is killing them.** Every month spent building a platform internally is a month competitors move ahead. Innovation is blocked by "the platform isn't ready yet." Engineering talent leaves for companies that already have cloud-native infrastructure. The opportunity cost of not experimenting compounds daily.

### 3.3 The Fear Factor

Many enterprises never attempt open source because they are afraid of operating without an insurance policy. They are accustomed to vendor relationships - someone to call, someone accountable, someone with an SLA. Open source feels like DIY. It feels lonely. The technology may be superior, but the support model feels risky.

This fear keeps organizations locked into proprietary platforms that cost more, deliver less, and create the very lock-in they wanted to avoid.

### 3.4 The AI Operations Gap

Every infrastructure vendor now claims "AI-powered operations." But there is a fundamental difference between AI-bolted and AI-native:

**AI-bolted (what competitors do):**

- Take an existing platform with unstructured configs, scattered logs, and proprietary dashboards
- Bolt an LLM integration on top
- Dump massive amounts of raw context into prompts — logs, metrics, config files, error messages
- Get slow, expensive, and often inaccurate results
- Call it "AI-powered"

**AI-native (what OpenOva does):**

- Design every component to be AI-manageable from the start — structured CRDs, unified OTel telemetry, standardized health endpoints
- Build Specter with pre-built semantic knowledge of every component's CRD schemas, integration dependencies, failure modes, and upgrade paths
- Send surgical, structured context to LLMs — typed data, not raw text
- Get fast, cheap, and accurate results
- This is architecture, not a feature

The gap is structural. You cannot retrofit AI-manageability onto a platform that wasn't designed for it.
This is why OpenOva's approach produces fundamentally different results than bolting a chatbot onto your monitoring stack. ### 3.5 The AI Gold Rush and Infrastructure We are in an AI gold rush. Every enterprise wants AI capabilities — AI agents, autonomous operations, intelligent automation. But most are building AI on infrastructure that was never designed for it. The organizations that extract the most value from AI are the ones whose infrastructure is built to be AI-manageable. The rest will spend years retrofitting, patching, and apologizing for "AI-powered" solutions that underperform. This is the infrastructure layer of the AI gold rush. And it is wide open. --- ## 4. Our Solution ### 4.1 What OpenOva Is OpenOva is both a **consultancy** and a **productized platform**. The customer chooses what they need: ``` Need AI-native ops? → Specter manages your infrastructure with pre-built knowledge of all 56 components. Need a guide? → We consult. Assessment, architecture, AI modernization roadmap, enablement. Need a platform? → We deploy. 56 AI-manageable components, production-grade, instantly. Need both? → We do both. And we stay for Day-2 operations. Need specialists? → Our expert network. 56 OSS disciplines, one contract. Need freedom? → We migrate. From OpenShift, Oracle, Redis, Datadog — to open source. ``` ### 4.2 What OpenOva Is NOT - **Not a Kubernetes distribution.** We don't fork or rebrand Kubernetes. We curate, integrate, and support the upstream projects. - **Not a PaaS.** We don't abstract away Kubernetes. We make it operational. - **Not a consulting-only firm.** We have intellectual property: blueprints, products (Cortex, Fingate, Fabric, Relay), Specter's semantic knowledge models, and the Axon gateway. - **Not a tool vendor.** We don't sell a single product. We sell an integrated ecosystem with operational confidence. - **Not AI-cosmetic.** We don't bolt a chatbot onto a dashboard and call it AI. 
Our platform is designed from the ground up to be AI-manageable — structured CRDs, unified telemetry, declarative GitOps. ### 4.3 The Turnkey Value What takes organizations 2-3 years and millions of dollars to build internally, OpenOva delivers instantly. This is possible because we have already spent years building, testing, and hardening the converged ecosystem. The work is done. It is packaged as blueprints. The customer gets the result without the journey. | Capability | Traditional Build Time | OpenOva | |------------|----------------------|---------| | Production K8s with GitOps | 3-6 months | Instant | | Full observability (logs, metrics, traces) | 2-4 months | Instant | | Multi-region DR with failover | 6-12 months | Instant | | Zero-trust security posture | 4-8 months | Instant | | Internal developer platform | 6-12 months | Instant | | AI-native operations (SOC/NOC with semantic knowledge) | 12-24 months (if ever) | Instant | | Open Banking sandbox | 12-18 months | Instant | | Enterprise AI platform | 12-24 months | Instant | | **Total** | **2-3 years** | **Instant** | --- ## 5. Product Family OpenOva maintains a minimal, authentic product naming approach. Only genuinely distinct products receive a name. Everything else uses plain language. ### 5.1 Named Products > **Company vs. Platform:** "OpenOva" is the **company**. The **platform** OpenOva ships is called **Catalyst**. A deployed instance of Catalyst is called a **Sovereign**. See [`docs/GLOSSARY.md`](../docs/GLOSSARY.md). Older references to "OpenOva (the platform)" in this document refer to Catalyst. | Product | Description | |---------|-------------| | **OpenOva Cortex** | Enterprise AI Hub. LLM serving (vLLM), RAG pipelines (Milvus + Neo4j), AI safety (NeMo Guardrails), LLM observability (LangFuse), chat interfaces (LibreChat). Self-hosted AI infrastructure. | | **OpenOva Axon** | SaaS LLM Gateway. The neural link to Cortex. 
Provides managed AI inference for customers who don't want to invest in GPU infrastructure. Powers Specter agents by default. Routes to Claude, GPT-4, or self-hosted vLLM. | | **OpenOva Fingate** | Open Banking product. PSD2/FAPI-compliant fintech sandbox with Keycloak (FAPI authorization), metering (OpenMeter), and 6 custom banking services. Production-ready open banking in hours. | | **OpenOva Specter** | AI-powered SOC/NOC agents. Self-healing ecosystem that monitors, detects, correlates, and remediates issues autonomously. DevOps, DevSecOps, SRE, FinOps, and Compliance agents working 24/7. Core built-in capability - not an add-on. | | **OpenOva Catalyst** | The platform itself — the self-sufficient Kubernetes-native control plane that turns any cluster into a **Sovereign**. Composes 56 curated open-source components (security, observability, GitOps, service mesh, policy engine, supply chain security, DR, identity, secrets, event spine) plus the Catalyst control plane (console, marketplace, admin, projector, catalog, blueprint-controller, environment-controller). Provisioning to Day-2 lifecycle to in-cluster IDP — a single integrated platform. Every other OpenOva product runs **on** Catalyst as composite Blueprints. See [`docs/ARCHITECTURE.md`](../docs/ARCHITECTURE.md). | | **OpenOva Exodus** | Structured migration program from proprietary to open source. Like an airline modernizing its fleet - you keep flying while every component gets upgraded. Not lift-and-shift. True modernization with zero downtime. | | **OpenOva Fabric** | Data & Integration product. Event-driven data integration and lakehouse analytics built on Strimzi/Kafka, Flink, Temporal, Debezium, Iceberg, and ClickHouse. | | **OpenOva Relay** | Communication product. 
Enterprise communication platform with email (Stalwart), video/audio (LiveKit), chat (Matrix/Synapse), WebRTC (STUNner), and clientless remote-desktop access (Guacamole — RDP/VNC/SSH/kubectl-exec via browser, Keycloak SSO, full session recording for compliance). | ### 5.2 Architecture Relationship ``` CATALYST (the platform — runs on every Sovereign) │ ┌──────────┬───────┼───────┬──────────┐ │ │ │ │ │ Cortex Fingate Fabric Relay Specter (AI Hub) (Banking) (Data) (Comms) (AIOps) │ │ └──────────── Axon ─────────────────┘ (SaaS LLM Gateway) Each child is a composite Blueprint (bp-cortex, bp-fingate, bp-fabric, bp-relay, bp-specter) installed on Catalyst via the marketplace. ``` **Specter is built-in.** Every OpenOva deployment includes Specter agents. By default, Specter connects to Axon (SaaS) for AI inference. Customers who want full self-hosted capability deploy Cortex and point Specter at their own models. ### 5.3 Specter: The AI Brain Specter is not a bolted-on chatbot. It is the AI brain of the platform — built with pre-built semantic knowledge of the entire 56-component ecosystem. 
#### Architecture ``` CUSTOMER CLUSTER OPENOVA CLOUD ───────────────── ────────────── ┌─────────────────────┐ ┌──────────────────┐ │ Specter Agents │ │ Axon │ │ ├── DevOps │───── API ──────> │ ├── Claude API │ │ ├── DevSecOps │ │ ├── GPT-4 API │ │ ├── SRE │<─── Response ─── │ ├── vLLM (hosted)│ │ ├── FinOps │ │ └── Model Router │ │ ├── Compliance │ └──────────────────┘ │ └── AI Ops │ │ │ │ Semantic Layer │ OR: Customer deploys │ ├── CRD Knowledge │ Cortex (self-hosted) │ ├── Integration │ for air-gap / sovereign │ │ Graph │ │ └── Failure Models │ │ │ │ Telemetry Layer │ │ ├── Grafana Stack │ │ ├── OTel Collector │ │ └── Hubble/Cilium │ └─────────────────────┘ ``` #### Agent Types | Agent | Responsibility | |-------|---------------| | **DevOps** | Drift detection, resource optimization, scaling recommendations, deployment validation | | **DevSecOps** | CVE scanning, policy compliance, security posture assessment, vulnerability remediation | | **SRE** | Incident correlation, root cause analysis, auto-remediation, runbook execution | | **FinOps** | Cost anomaly detection, right-sizing, waste elimination, capacity forecasting | | **Compliance** | Continuous audit, evidence collection, report generation, regulatory mapping | | **AI Ops** | LLM inference monitoring, model drift detection, GPU utilization optimization, AI safety policy enforcement | #### The Semantic Knowledge Moat Specter's core advantage is pre-built semantic knowledge. It doesn't discover the ecosystem at runtime by parsing logs. It knows the ecosystem before it starts. 
| Knowledge Domain | What Specter Knows | How It Uses It | |-----------------|-------------------|----------------| | **CRD Schemas** | Every field, validation rule, and default across 56 component CRDs | Reads configuration as typed data, not raw text | | **Integration Graph** | Which components depend on which, data flow paths, failure blast radius | Traces root causes across component boundaries in seconds | | **Failure Modes** | Known failure patterns, root causes, and proven remediation steps per component | Matches symptoms to known failures before escalating to LLM | | **Health Checks** | What "healthy" means for each component, including edge cases and degraded states | Distinguishes "degraded but functional" from "about to fail" | | **Upgrade Paths** | Version compatibility matrix, breaking changes, required migration steps | Plans safe upgrade sequences across interdependent components | | **Compliance Mappings** | Which controls map to PSD2, DORA, NIS2, SOX requirements | Generates audit evidence automatically, flags compliance drift | #### Token Efficiency: The Economic Advantage This is the architectural moat. Competitors bolt AI onto unstructured platforms. The result: | Approach | Context Sent to LLM | Speed | Cost | Accuracy | |----------|---------------------|-------|------|----------| | **Competitors (AI-bolted)** | Raw logs, unstructured configs, dashboard screenshots, error dumps | Slow (large context) | Expensive (many tokens) | Low (noise drowns signal) | | **OpenOva Specter (AI-native)** | Typed CRD state, correlated OTel signals, known integration graph, pre-mapped failure mode | Fast (surgical context) | Cheap (minimal tokens) | High (structured signal, no noise) | You cannot retrofit this. A platform not designed for AI-manageability will always require dumping more context, spending more tokens, and getting worse results. This is a structural advantage. 
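The token-efficiency contrast above can be sketched in a few lines of Python. Everything in this snippet is illustrative — the component name, CRD fields, failure-mode labels, and the whitespace-based token proxy are assumptions made for the sketch, not Specter's actual schema or any real tokenizer:

```python
# Hypothetical sketch: what an "AI-bolted" tool ships to an LLM vs. the
# surgical, typed context an AI-native layer like Specter is described as
# sending. All names and values here are invented for illustration.
import json

# AI-bolted: dump raw, repetitive log noise straight into the prompt.
raw_log_dump = "\n".join(
    f"2026-04-28T10:{i:02d}:00Z cnpg-1 WARN replication lag 8421ms; "
    "wal sender stalled; retrying..."
    for i in range(60)  # an hour of near-identical log lines
)

# AI-native: typed CRD state plus pre-mapped knowledge, serialized compactly.
structured_context = json.dumps({
    "component": "cloudnative-pg",             # a CRD kind, not a log line
    "crd_state": {"replicationLagMs": 8421, "walSenderStatus": "stalled"},
    "known_failure_mode": "wal-sender-stall",  # matched before calling the LLM
    "blast_radius": ["fingate-api", "fabric-debezium"],  # integration graph
    "proven_remediation": "restart-wal-sender-then-verify-lag",
})

def token_estimate(text: str) -> int:
    """Crude token proxy: count whitespace-delimited chunks."""
    return len(text.split())

raw_tokens = token_estimate(raw_log_dump)
surgical_tokens = token_estimate(structured_context)
print(f"raw dump ~{raw_tokens} tokens, surgical context ~{surgical_tokens} tokens")
```

In a real deployment the `crd_state` would presumably come from the Kubernetes API and the failure-mode match from Specter's knowledge base; the sketch only shows the shape of the context, not the mechanism.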
### 5.4 Plain-Language Offerings (No Brand Names) These are services and capabilities described as what they are: - **Expert network** - Curated specialists across 56 open-source disciplines - **Migration services** - Moving from proprietary to open-source alternatives - **Consultancy** - Cloud-native assessment, architecture design, transformation roadmap - **Managed operations** - We own the pager, full 24/7 operational responsibility - **Staff augmentation** - SOW-based or T&M embedded engineers --- ## 6. Service Portfolio ### 6.1 Service Catalog ``` OPENOVA SERVICES │ ├── CONSULTANCY │ │ │ ├── Cloud-Native Assessment │ │ What: Gap analysis of current state, readiness evaluation │ │ Who: Organizations starting their cloud-native journey │ │ Outcome: Transformation roadmap with prioritized recommendations │ │ │ ├── Architecture Design │ │ What: Technology selection, platform architecture, DR strategy │ │ Who: CTOs and platform teams planning infrastructure │ │ Outcome: Production-ready architecture blueprint │ │ │ ├── Team Enablement │ │ What: Skills transfer, workshops, pair programming, certification prep │ │ Who: Engineering teams adopting new tools │ │ Outcome: Self-sufficient teams that can operate the platform │ │ │ ├── Migration Planning │ │ What: Migration strategy from proprietary to open-source │ │ Who: Organizations leaving OpenShift, VMware, Oracle, etc. 
│ │ Outcome: Risk-assessed migration plan with rollback strategy │ │ │ └── Compliance Architecture │ What: Security architecture for regulatory requirements │ Who: CISOs and compliance teams in regulated industries │ Outcome: Compliance-ready architecture (PSD2, DORA, NIS2, SOX) │ ├── PLATFORM DEPLOYMENT │ │ │ ├── OpenOva Core Deployment │ │ What: Full 56 component platform deployed to customer environment │ │ Duration: Hours to days depending on complexity │ │ Outcome: Production-grade K8s ecosystem, operational from day 1 │ │ │ ├── Cortex Deployment (AI Hub) │ │ What: Enterprise AI platform with LLM serving, RAG, agents │ │ Prerequisite: GPU nodes or Axon SaaS subscription │ │ Outcome: Self-hosted AI infrastructure │ │ │ ├── Fingate Deployment (Open Banking) │ │ What: PSD2/FAPI sandbox with banking APIs, TPP management, metering │ │ Outcome: Compliant open banking platform │ │ │ ├── Fabric Deployment (Data & Integration) │ │ What: Event-driven data pipelines, lakehouse analytics, saga orchestration │ │ Outcome: Integrated data and event streaming platform │ │ │ ├── Relay Deployment (Communication) │ │ What: Email, video, chat, WebRTC communication platform │ │ Outcome: Self-hosted enterprise communication │ │ │ └── Custom Blueprint Development │ What: Bespoke blueprints for customer-specific requirements │ Outcome: Integrated, upgrade-safe custom components │ ├── ONGOING SUPPORT & OPERATIONS │ │ │ ├── Platform Support Subscription │ │ What: Certified upgrades, blueprint updates, ticket support, SLA │ │ Includes: Specter agents (via Axon SaaS) │ │ Model: Per-vCPU-core subscription │ │ │ ├── Managed Operations │ │ What: We own the pager. Full 24/7 operational responsibility. 
│ │ Includes: Monitoring, incident response, upgrades, patching │ │ Model: Per-vCPU-core add-on │ │ │ ├── Specter SOC/NOC (Enhanced) │ │ What: Advanced AI operations beyond default Specter │ │ Includes: Custom agent training, industry-specific compliance rules │ │ Model: Per-vCPU-core add-on │ │ │ └── Expert Network Access │ What: On-demand access to deep specialists │ Disciplines: PostgreSQL, Cilium, Kafka, Keycloak, AI/ML, security, etc. │ Model: Hour blocks (40/80/160), unused hours roll over 1 quarter │ └── STAFF AUGMENTATION │ ├── SOW-Based Assignments │ What: Fixed deliverables, defined scope, agreed timeline │ Model: Project-based pricing │ └── T&M Dedicated Engineers What: OpenOva engineers embedded in customer team Model: Daily/hourly rate, long-term engagement ``` ### 6.2 Service Interaction Model A typical customer journey through our services: ``` DISCOVER TRANSFORM OPERATE ──────── ───────── ─────── Assessment ──────────> Platform Deployment ──────> Support Subscription (Consultancy) (Turnkey in hours) (Per-core, ongoing) │ │ │ │ Team Enablement │ │ (Skills transfer) Expert Network │ │ (On-demand) │ Custom Blueprints │ │ (If needed) Managed Ops │ (Optional) └─── Customer may enter at any point ───────────────────┘ ``` --- ## 7. 
Target Market

### 7.1 Market Segmentation

| Segment | Size | Primary Pain | Willingness to Pay | Entry Products |
|---------|------|-------------|--------------------|----------------|
| **Banking & Financial Services** | Medium | Compliance + OSS fear + Open Banking mandate | Very High | OpenOva + Fingate + Specter |
| **Telecommunications** | Medium | Legacy transformation + scale + regulation | High | OpenOva + Specter + Managed Ops |
| **Government & Public Sector** | Medium | Sovereignty + compliance + budget constraints | High | OpenOva + Specter (air-gap) |
| **Insurance** | Medium | Legacy modernization + regulatory | High | OpenOva + Specter |
| **Energy & Utilities** | Small-Medium | OT/IT convergence + reliability | High | OpenOva + Specter |
| **Mid-Market Enterprise (100-1,000 employees)** | Large | "We tried K8s and failed" | Medium | OpenOva + Support |
| **Scale-ups** | Large | Outgrowing Heroku/Render | Medium | OpenOva + Cortex |
| **ISVs** | Medium | Need platform for their SaaS | Medium-High | OpenOva + Custom Blueprints |

### 7.2 Banking-First Strategy

Banks are the primary initial target for three reasons:

1. **High willingness to pay.** Financial institutions have budgets for infrastructure and compliance. They are accustomed to vendor relationships with SLAs.
2. **Regulatory pressure creates urgency.** PSD2, DORA, NIS2, and open banking mandates force banks to modernize. The cost of NOT acting is regulatory risk.
3. **OpenOva Fingate is a unique differentiator.** No other open-source support provider offers a turnkey, PSD2/FAPI-compliant open banking platform.

**Initial pipeline:** 2 potential bank clients identified for different deal structures.
**Approach for first 2-3 clients:** | Flexible On | Not Flexible On | |-------------|-----------------| | Pricing (discounted for early adopters) | Architecture quality and security posture | | Scope (custom components or integrations) | Blueprint integrity and upgrade safety | | Payment terms (extended or milestone-based) | Open source commitment (no proprietary forks) | | SLA tiers (negotiate response times) | Per-core pricing model (establish the unit) | | Engagement model (mix SOW + subscription) | Never free (discounted is fine, free devalues everything) | ### 7.3 Expansion Path ``` Phase 1: Banking (0-6 months) └── 2-3 banks, build case studies and playbook Phase 2: Regulated Verticals (6-18 months) └── Telco, government, insurance - leverage banking references Phase 3: Broader Enterprise (18-36 months) └── Mid-market, scale-ups, ISVs - self-service via Axon SaaS Phase 4: Global Scale (36+ months) └── Partner network, regional presence, marketplace ``` --- ## 8. Persona-Based Value Propositions Every organization has multiple decision-makers. Each cares about different things. OpenOva's messaging adapts to the audience. ### 8.1 CEO / CFO / Board **They control the budget. They are not technical. They care about risk, cost, and competitive advantage.** > We are in an AI gold rush. Your competitors are investing in AI-native infrastructure. Every month you wait, they pull further ahead. > > OpenOva gives you a production-grade AI-native platform in hours — not the 2-3 years it would take to build internally. Our AI brain (Specter) has pre-built knowledge of the entire ecosystem and eliminates 70%+ of your SOC/NOC staffing needs from day one. That is not a chatbot — that is autonomous operations. > > You get the economics of open source — no per-core licensing games, no vendor lock-in — with the confidence of an enterprise support relationship and AI-powered operations. > > This is not a technology purchase. This is your AI infrastructure advantage. 
**Key metrics for this persona:** - Time to production: hours vs. years - Cost savings: 60-85% vs. proprietary alternatives - Headcount efficiency: 70%+ SOC/NOC reduction via Specter's AI-native operations - AI readiness: infrastructure that is AI-manageable from day one - Risk reduction: SLA-backed support, certified upgrades, DR tested ### 8.2 CTO / VP Engineering **They sponsor the initiative. They understand technology at a strategic level. They care about architecture quality, team productivity, and future-proofing.** > You know the stack you want — Kubernetes, GitOps, observability, service mesh. But integrating 40+ CNCF projects into a secure, resilient, production-grade ecosystem takes 2-3 years and a team you cannot hire fast enough. > > OpenOva delivers the converged ecosystem you would build yourself — if you had the time and the team. But here is what you would not build: Specter has pre-built semantic knowledge of every CRD schema, integration dependency, and failure mode across all 56 components. It sends surgical, structured context to LLMs — not raw log dumps. This is token efficiency as an architectural moat. Your competitors who bolt AI onto unstructured platforms will spend 10x more on inference and get worse results. > > Multi-region DR with split-brain protection. Zero-trust security from day one. Full observability. And AI agents that actually understand the infrastructure they manage. > > We can be your consultant, your platform provider, or both. You choose. **Key metrics for this persona:** - 56 integrated open-source components, every one AI-manageable - Specter's semantic knowledge moat (pre-built, not learned at runtime) - Token efficiency: 10x fewer tokens than AI-bolted approaches - Multi-region DR with tested failover - Platform team headcount reduction ### 8.3 Platform Lead / DevOps Lead **They evaluate the technology. They will operate what we deploy. 
They care about technical depth, no lock-in, operational reality, and open-source purity.** > 56 curated, Kustomize-based blueprints. Cilium service mesh with eBPF mTLS. Flux GitOps. Grafana observability stack (Alloy, Loki, Mimir, Tempo). Kyverno policy-as-code with auto-generated PDBs and NetworkPolicies. CNPG for PostgreSQL. Strimzi/Kafka for streaming. Valkey for caching. > > Every component exposes structured CRDs. Unified OTel telemetry across the stack. Standardized health endpoints. Declarative GitOps state in Git. This is not just good engineering — it is what makes the platform AI-manageable. > > Specter reads typed CRD state and correlated OTel signals — not raw logs. It knows the integration graph, failure modes, and upgrade paths of every component. When it sends context to an LLM, it is surgical and structured. You can inspect every decision. No black box magic. Auditable, explainable AI operations. > > Full source access. No proprietary agents. Every blueprint is open source and Kustomize-based. You can read every line, fork if you want, customize what you need. **Key metrics for this persona:** - 56 components, all upstream open source, every one AI-manageable - Kustomize-based (no proprietary abstraction) - Specter decisions are inspectable and auditable - Air-gap capable - Full source access, no vendor lock-in at the code level ### 8.4 CISO / Head of Security **They are the gatekeeper. They must approve before anything moves forward. They care about compliance, security posture, auditability, and zero-trust.** > Zero-trust from Day 1. Not aspirational — actual. > > eBPF-enforced network policies via Cilium. Mutual TLS everywhere via service mesh. Kyverno auto-generates PDBs, NetworkPolicies, and security contexts. Trivy scans images in CI/CD, in Harbor registry, and at runtime. Falco for runtime eBPF threat detection. 
OpenBao runs as an independent Raft cluster in each region with async Performance Replication; ESO syncs secrets to workloads inside the region. SPIFFE/SPIRE issues short-lived (5-minute) workload identities. Coraza WAF with OWASP Core Rule Set.
>
> Air-gap capable for sovereign deployments. Compliance-ready for PSD2, DORA, NIS2, SOX.
>
> Specter has pre-built compliance mappings for every component — which controls map to which regulations. The Compliance Agent provides continuous posture assessment, generates audit evidence automatically, and flags deviations in real-time. The DevSecOps Agent patches vulnerabilities based on your risk tolerance policy. All Specter decisions are inspectable and auditable — no black box AI.

**Key metrics for this persona:**

- Zero-trust architecture from day one
- Continuous compliance posture (not periodic audits)
- Pre-built compliance mappings across 56 components (PSD2, DORA, NIS2, SOX)
- Automated vulnerability remediation
- Audit evidence auto-generated
- All AI operations inspectable and auditable

### 8.5 CFO / Procurement

**They negotiate the deal. They care about total cost of ownership, budget predictability, contract flexibility, and exit strategy.**

> Open source blueprints are free. Forever. You are paying for support, operations, and expertise - not for access to code.
>
> No per-core license fees like Red Hat. No per-node charges like Rancher. No per-host billing like Datadog. Our per-vCPU-core subscription prices the support and operations we deliver - not a license to run the software. Transparent, predictable, and fair.
>
> Enterprise Agreement with annual true-up: commit to a baseline, grow as much as you need during the year, settle the difference at renewal. Or pay-as-you-go for maximum flexibility.
>
> Exit strategy: the blueprints are open source. If you leave OpenOva, you keep everything. You just lose the support, the AI operations, and the expert network. There is no lock-in by design.

**Key metrics for this persona:**

- 60-85% cost savings vs.
proprietary alternatives - No per-core/per-node licensing games - Clean exit strategy (keep all code if you leave) - Budget predictability via ELA model --- ## 9. Competitive Landscape ### 9.1 Positioning Map ``` BREADTH OF ECOSYSTEM (Components Supported) ▲ │ OpenOva ●│ │ Big 4 ● │ ● DIY (consulting) │ (if you have │ 3 years) │ Red Hat ● │ │ │ Rancher ● │ │ Upbound ● │ Humanitec ● │ │ ────────────────┼────────────────────► │ OPERATIONAL DEPTH │ (Day-2 Support & AIOps) ``` ### 9.2 Capability Matrix | Capability | OpenOva | Red Hat OpenShift | Rancher / SUSE | Upbound | Humanitec | Big 4 Consulting | DIY | |:-----------|:-------:|:-----------------:|:--------------:|:-------:|:---------:|:-----------------:|:---:| | **PLATFORM DEPLOYMENT** | | | | | | | | | Turnkey K8s platform (hours) | Yes | Partial (weeks) | No | No | No | No (months) | No (years) | | Integrated open-source components | 56 | ~15 | ~8 | 1 | 0 | Varies | DIY | | Components tested together | Yes | Yes (their stack) | Partial | N/A | N/A | No | No | | Blueprints open source & free | Yes | No | Partial | Yes | No | No | N/A | | Multi-cloud support | Yes | Yes | Yes | Yes | Yes | Yes | DIY | | Multi-region DR built-in | Yes | Manual | Manual | No | No | Custom | DIY | | Air-gap capable | Yes | Yes | Yes | No | No | Custom | DIY | | **NETWORKING & SECURITY** | | | | | | | | | eBPF service mesh | Yes (Cilium) | No (Istio sidecar) | No | No | No | Varies | DIY | | Zero-trust (mTLS, L7 policies) | Built-in | Yes (Istio) | Manual | No | No | Custom | DIY | | WAF (OWASP CRS) | Built-in (Coraza) | No | No | No | No | Custom | DIY | | GSLB / DNS failover | Built-in (PowerDNS lua-records) | No | No | No | No | Custom | DIY | | Split-brain protection | Built-in | No | No | No | No | Custom | DIY | | Policy-as-code | Built-in (Kyverno) | Partial (SCC) | No | No | No | Custom | DIY | | Security scanning (CI + runtime) | Built-in (Trivy) | ACS (paid) | No | No | No | Custom | DIY | | Secrets management | Built-in 
(OpenBao + ESO) | Partial | No | No | No | Custom | DIY | | **OBSERVABILITY** | | | | | | | | | Full stack (logs/metrics/traces) | Built-in (Grafana) | Partial | Partial | No | No | Custom | DIY | | OTel auto-instrumentation | Built-in | No | No | No | No | Custom | DIY | | **GITOPS & DEVELOPER PLATFORM** | | | | | | | | | GitOps engine | Built-in (Flux) | ArgoCD (add-on) | Fleet | No | No | Custom | DIY | | Internal Git server | Built-in (Gitea) | No | No | No | No | No | DIY | | Developer portal | Built-in (Catalyst console) | RHDH (paid) | No | No | Score | Custom | DIY | | Supply chain security | Built-in (Sigstore + Syft/Grype) | Partial | No | No | No | Custom | DIY | | CI/CD | Built-in (Gitea Actions) | Tekton | No | No | No | Custom | DIY | | **DATA SERVICES** | | | | | | | | | PostgreSQL operator | Yes (CNPG) | Crunchy (paid) | No | No | No | Custom | DIY | | FerretDB (MongoDB-compatible) | Yes | No | No | No | No | Custom | DIY | | Apache Kafka streaming | Yes (Strimzi) | AMQ Streams | No | No | No | Custom | DIY | | Redis-compatible cache | Yes (Valkey) | No | No | No | No | Custom | DIY | | **VERTICAL SOLUTIONS** | | | | | | | | | Open Banking (PSD2/FAPI) | Yes (Fingate) | No | No | No | No | Custom ($$$) | No | | Enterprise AI Hub | Yes (Cortex) | RHOAI (paid) | No | No | No | Custom ($$$) | No | | Data & Integration | Yes (Fabric) | No | No | No | No | Custom ($$$) | No | | Enterprise Communication | Yes (Relay) | No | No | No | No | Custom ($$$) | No | | **AI OPERATIONS** | | | | | | | | | AI-powered SOC/NOC | Yes (Specter) | No | No | No | No | No | No | | Self-healing agents | Yes | No | No | No | No | No | No | | Continuous compliance posture | Yes | ACS (paid) | No | No | No | Custom | No | | Predictive failure detection | Yes | No | No | No | No | No | No | | Automated remediation | Yes | No | No | No | No | No | No | | **AI-NATIVE ARCHITECTURE** | | | | | | | | | Pre-built semantic knowledge of ecosystem | Yes (56 components) | No | No | No | 
No | No | No | | Token-efficient AI operations | Yes (surgical context) | No | No | No | No | No | No | | AI-manageable components (structured CRDs + unified telemetry) | Yes (by design) | Partial | No | Partial | No | No | DIY | | **SERVICES** | | | | | | | | | Transformation consultancy | Yes | Via partners | No | No | No | Yes | No | | Managed operations | Yes | Via partners | No | No | No | Yes | No | | Expert network (56 OSS) | Yes | RHEL stack only | K3s/RKE only | Crossplane only | No | Generalist | No | | SOW / T&M augmentation | Yes | No | No | No | No | Yes | N/A | | Skills transfer & enablement | Yes | Training courses | No | No | No | Yes | N/A | | **PRICING & FREEDOM** | | | | | | | | | Open source (free to use) | Yes | No | Partial | Yes | No | N/A | Yes | | No per-core/per-node licensing | Yes | No (per-core) | No (per-node) | No (per-resource) | No (per-deploy) | Per-hour | N/A | | Support-only subscription model | Yes | Bundled | Bundled | Bundled | Bundled | N/A | N/A | | Clean exit (keep everything) | Yes | No | Partial | Yes | No | Yes | N/A | ### 9.3 Competitive Advantages by Competitor **vs. Red Hat OpenShift:** OpenShift is a walled garden. Per-core licensing that escalates with scale. A curated but narrow stack (~15 components). Istio sidecars instead of eBPF. No AI-native operations — any AI they add will be bolted onto an architecture not designed for AI-manageability. OpenOva offers broader ecosystem support (56 vs ~15 components), no code lock-in, per-core pricing without the premium markup, and Specter with pre-built semantic knowledge of the entire ecosystem. **vs. Rancher / SUSE:** Rancher is a cluster management tool, not an integrated platform. It helps you manage Kubernetes, but you still build the platform yourself. No integrated GitOps, no observability stack, no policy engine, no DR automation. No AI-native operations — you would need to build and train AI on an ad-hoc collection of components. 
OpenOva is the complete AI-native platform, not just the management layer.

**vs. Upbound / Crossplane ecosystem:** Upbound focuses on one tool (Crossplane). OpenOva uses Crossplane as one of 56 components. We don't compete with Crossplane — we include it and support it alongside 50+ other projects.

**vs. Humanitec:** Humanitec is a platform orchestrator focused on developer experience and the Score specification. It does not provide infrastructure components, security, observability, or operational support. It is complementary in concept but narrow in scope.

**vs. Big 4 Consulting (Deloitte, Accenture, etc.):** Consulting firms sell hours. They build custom solutions that only they understand. When the engagement ends, the client is left with a bespoke platform and no ongoing support — and no AI-native operations. They cannot provide Specter's pre-built semantic knowledge because they build something different for every client. OpenOva delivers standardized blueprints (maintainable), stays for operations (ongoing relationship), and provides AI agents with pre-built knowledge (continuous value). Our blueprints and semantic models are our IP — not billable hours.

**vs. DIY / In-House:** DIY gets you full freedom but takes 2-3 years, requires a 5-10 person platform team (€600K-1.2M/year), and produces an untested, undocumented, single-point-of-failure platform — with zero AI-native operations. Building Specter's semantic knowledge in-house would take additional years and a dedicated AI team. OpenOva delivers the same outcome in hours, tested, documented, with AI-native operations and ongoing support. The ROI is immediate.

---

## 10. Business Model & Pricing

### 10.1 Revenue Streams

```
REVENUE STREAMS
│
├── RECURRING (Predictable)
│   ├── Platform Support Subscription (per-core)
│   ├── Managed Operations (per-core add-on)
│   ├── Specter Enhanced (per-core add-on)
│   ├── Axon SaaS (per-core, included in base or metered)
│   └── Expert Network Retainer (hour blocks)
│
├── PROJECT-BASED (Variable)
│   ├── Platform Deployment (one-time)
│   ├── Consultancy Engagements (SOW)
│   ├── Migration Projects (SOW)
│   └── Custom Blueprint Development (SOW)
│
├── STAFF AUGMENTATION (Variable)
│   ├── T&M Embedded Engineers
│   └── SOW-Based Assignments
│
└── FRANCHISE (Recurring, Indirect)
    ├── Per-vCPU subscription on every franchised Sovereign (same per-core
    │   model as direct customers — the franchisee passes it through to
    │   their tenants and OpenOva's share is computed off the gross)
    └── Bilateral revenue-split contract per Franchisee (Omantel, regional
        resellers, hyperscaler partners)
```

### 10.2 Core Principle

**Blueprints are free and open source. Always.**

We never charge for access to code. Revenue comes from:

- Support and operational guarantees (the insurance policy)
- AI-powered operations (Specter/Axon)
- Expert access (the human network)
- Managed services (we own the pager)
- Transformation consulting (the journey)

**All software is free.** We do not charge for any software component. The entire 56-component platform is open source and free to deploy. Recurring platform revenue comes exclusively from per-vCPU-core support subscriptions. No per-component charges. No software licensing fees. Ever.

### 10.3 Pricing Unit: vCPU Cores Under Management

The per-vCPU-core model is fair, transparent, and scales naturally with the customer's footprint. As they grow, we grow. As they scale down, the cost follows.
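To make the scaling behavior concrete, here is a minimal billing sketch. The rates and function names are illustrative assumptions — actual pricing is finalized with early customers — but the mechanics mirror the ELA true-up and PAYG models described under Contract Models:

```python
ELA_RATE = 10.0   # EUR/core/month -- hypothetical committed-baseline rate
PAYG_RATE = 14.0  # EUR/core/month -- hypothetical rate with flexibility premium

def ela_monthly_invoice(committed_cores: int) -> float:
    """ELA bills the committed baseline every month; growth above the
    baseline is settled at renewal, never as a mid-year surprise."""
    return committed_cores * ELA_RATE

def ela_renewal_baseline(committed_cores: int, peak_cores: int) -> int:
    """True-up at renewal: the new baseline is the peak observed usage."""
    return max(committed_cores, peak_cores)

def payg_monthly_invoice(actual_cores: int) -> float:
    """PAYG bills actual usage monthly, at a premium for flexibility."""
    return actual_cores * PAYG_RATE

# A customer signs an ELA at 100 cores and grows to 180 during the year:
assert ela_monthly_invoice(100) == 1000.0        # mid-year bill stays flat
assert ela_renewal_baseline(100, 180) == 180     # settled at renewal
assert payg_monthly_invoice(180) == 2520.0       # the no-commitment alternative
```

The asymmetry is deliberate: ELA trades a commitment for a lower rate and mid-year predictability, while PAYG pays a premium for the freedom to walk away monthly.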
**Why per-core:**

- Directly correlates with infrastructure complexity and operational burden
- Industry-standard unit (familiar to procurement)
- Easy to measure and audit
- Scales linearly with actual usage

### 10.4 Contract Models

#### Enterprise License Agreement (ELA)

For organizations that want budget predictability and freedom to grow:

- **Commit** to a minimum core count for a 1-3 year term
- **Grow** as much as needed during the subscription period
- **True-up** at renewal based on actual peak usage during the period
- **Benefit:** volume pricing locked in, budget certainty, growth without friction

```
Example: ELA signed for 100 cores at €X/core/month
During Year 1, customer grows to 180 cores
At renewal: true-up to 180 cores, new pricing locked
Customer never receives surprise invoices mid-year
```

#### Pay As You Go (PAYG)

For organizations that want maximum flexibility:

- **Monthly** billing based on actual core count
- **No commitment**, no minimum term
- **Higher** per-core price (premium for flexibility)
- **Best for:** proofs of concept, seasonal workloads, evaluation periods

#### SOW (Statement of Work)

For one-time engagements:

- **Fixed deliverables**, defined scope, agreed timeline and price
- **Used for:** consultancy, migration projects, custom blueprints
- Often converts to a recurring subscription after project completion

#### T&M (Time & Materials)

For ongoing embedded engineering:

- **Daily or hourly rate** for dedicated OpenOva engineers
- **Embedded** in customer team, working on customer priorities
- **Long-term** engagements (months to years)

### 10.5 Service Add-Ons

| Add-On | Basis | Description |
|--------|-------|-------------|
| Managed Operations | Per-core | OpenOva owns the pager, 24/7 operational responsibility |
| Specter Enhanced | Per-core | Custom agent training, industry-specific compliance rules |
| Product Support (Fingate) | Per-core | Open Banking specific support and upgrades |
| Product Support (Cortex) | Per-core | AI Hub specific support, model management |
| Product Support (Fabric) | Per-core | Data & Integration specific support |
| Product Support (Relay) | Per-core | Communication platform support |
| Expert Network Hours | Block pricing | 40/80/160 hour blocks, unused hours roll 1 quarter |

### 10.6 Pricing Principles

| Principle | Rationale |
|-----------|-----------|
| **Per-core, not per-component** | Customer shouldn't pay more for using more open-source tools. 56 components for the price of one subscription. |
| **Minimum ELA commitment** | Ensures baseline revenue per customer. Below the minimum, PAYG is available. |
| **True-up, not penalty** | Customer grows freely. True-up at renewal is a conversation, not a surprise bill. |
| **Expert hours roll over** | Builds trust. Customer doesn't lose unused hours (within 1 quarter). |
| **Never free, but flexible** | Early adopters get discounts. No customer gets it for free. Free devalues everything. |
| **Clean exit** | If a customer leaves, they keep all blueprints and code. They only lose support, Specter, and expert access. |
| **Customer advocacy is DNA** | We never trap customers. Exit strategy = do nothing. Blueprints are open source. Walk away and everything keeps running. |

### 10.7 Franchise Revenue Model

The per-vCPU subscription is the primary OpenOva revenue surface and applies to every Sovereign — direct (`openova` runs it for SaaS Organizations) or franchised (`omantel`, regional resellers, hyperscaler partners). The voucher is **not** a separate revenue stream; it is the **user-acquisition surface** that Franchisees use to convert their existing customer base into Catalyst tenants.
| Surface | Owner | Pricing basis | Revenue flow | |---|---|---|---| | Per-vCPU subscription | OpenOva | Per-core, ELA or PAYG | Stripe charge per Sovereign rolls up to OpenOva monthly | | Voucher issuance | Franchisee (`sovereign-admin`) | Free to mint; the credit comes off the Franchisee's revenue share | No money moves at issuance — only at first-checkout redemption | | Voucher redemption | Tenant Organization | Credit applied at checkout (existing `promo_code` field on `/billing/checkout`) | Order amount drops to zero or near-zero; Stripe charge is suppressed for the credit-covered portion | | Tenant billing | Tenant Organization | Standard per-vCPU once credit is exhausted | Stripe charge resumes; OpenOva's share computed off the gross | **Why this matters for franchise economics:** - The Franchisee can market a "100 OMR free credit" promo to drive signups without OpenOva participating in the marketing campaign or bearing the credit cost. The credit comes off the Franchisee's share, not OpenOva's. - OpenOva's revenue model stays uniform. There is no "voucher tier" or "promo SKU" to maintain — every voucher resolves to ordinary credit on an ordinary Order, going through the same Stripe pipeline that direct OpenOva customers use. - The Franchisee's own Tenants on their Sovereign pay them through the same per-vCPU surface. The Franchisee sets their pass-through rate (e.g. they buy from OpenOva at €X/core, sell to their SMEs at €Y/core where Y ≥ X). This margin is the Franchisee's primary income; vouchers are a discount instrument the Franchisee chooses to deploy. - Revenue split between OpenOva and each Franchisee is governed by a bilateral contract. The split is **NOT** encoded as a per-Sovereign config field — it lives in OpenOva's accounting system, not in the Catalyst code. Stripe charges on franchised Sovereigns carry a `sovereign=` metadata tag; OpenOva's billing rollup queries those charges and pays out monthly. 
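The mechanics above can be sketched in a few lines. This is an illustrative model only — the field and function names are assumptions, and the real flow lives in the Catalyst billing service and Stripe — but it captures the two invariants: voucher credit is applied *before* Stripe is invoked, and the rollup keys off the `sovereign` charge metadata:

```python
from dataclasses import dataclass

@dataclass
class Order:
    tenant: str
    sovereign: str        # "openova" (direct) or a franchisee slug, e.g. "omantel"
    vcpu_cores: int
    rate_per_core: float  # the pass-through rate the tenant pays the franchisee
    voucher_credit: float = 0.0  # minted by sovereign-admin; comes off the franchisee's share

def stripe_charge_amount(order: Order) -> float:
    """Voucher credit reduces the line total before Stripe is invoked;
    a fully covered order produces no Stripe charge at all."""
    gross = order.vcpu_cores * order.rate_per_core
    return max(0.0, gross - order.voucher_credit)

def monthly_rollup(charges: list[dict]) -> dict[str, float]:
    """Group charges by their `sovereign` metadata tag. OpenOva's share per
    franchisee is computed off these gross totals under the bilateral
    contract -- the split itself is never encoded in Catalyst."""
    totals: dict[str, float] = {}
    for ch in charges:
        key = ch["metadata"]["sovereign"]
        totals[key] = totals.get(key, 0.0) + ch["amount"]
    return totals
```

For example, a "100 OMR free credit" voucher against a smaller first invoice suppresses the Stripe charge entirely; once the credit is exhausted, the tenant's orders flow through the standard per-vCPU surface and appear in the next monthly rollup.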
**What does NOT change for franchised Sovereigns:** - The same `core/admin` UI ships with every Sovereign. Voucher issuance is a `sovereign-admin` action, gated by the same role check that governs the rest of the admin surface. - The same `core/services/billing` Postgres schema runs on every Sovereign. There is no separate "franchise database." - The same Stripe integration handles checkout. Vouchers do not bypass Stripe — they reduce the line total before Stripe is invoked. See [`FRANCHISE-MODEL.md`](FRANCHISE-MODEL.md) for the redemption flow end-to-end. --- ## 11. Go-to-Market Strategy ### 11.1 Phase 1: Banking Beachhead (Month 0-6) **Objective:** Land 2-3 bank clients. Build case studies. Validate pricing model. **Actions:** - Develop bank-specific pitch materials (persona-targeted) - Build compliance mapping documents (PSD2, DORA, NIS2, SOX) - Prepare banking reference architecture - Finalize 2-3 flexible deal structures - Close first 2 bank deals - Document everything as a repeatable playbook **Success criteria:** 2 signed contracts, at least 1 deployment in production. ### 11.2 Phase 2: Regulated Verticals (Month 6-18) **Objective:** Expand to 10+ clients across banking, telco, government, insurance. **Actions:** - Publish anonymized banking case studies - Speak at industry conferences (KubeCon, fintech events, CNCF meetups) - Develop vertical-specific landing pages - Build partner relationships with regional system integrators - Hire first dedicated sales/pre-sales resources - Formalize expert network with partner contracts **Success criteria:** 10+ active subscriptions, validated ELA pricing model. ### 11.3 Phase 3: Broader Market (Month 18-36) **Objective:** Scale to 50+ clients. Product-led growth supplements sales-led. 
**Actions:** - Self-service Axon (SaaS) available for evaluation - Community edition of Specter for self-service customers - Partner ecosystem (regional SIs, technology partners) - Content marketing at scale (blog, YouTube, conference talks) - Expand to mid-market and scale-up segments **Success criteria:** 50+ active subscriptions, recurring revenue dominates project revenue. ### 11.4 Phase 4: Global Scale (Month 36+) **Objective:** Become the definitive enterprise open-source support company. **Actions:** - Regional offices or partner coverage in major markets - Blueprint marketplace (community and partner contributions) - Training and certification program - Expanded meta-blueprint portfolio (new verticals) - Potential investor/funding round for acceleration **Success criteria:** Multi-million dollar recurring revenue, industry recognition. ### 11.5 Sales Motion by Phase | Phase | Primary Motion | Revenue Mix | |-------|---------------|-------------| | Phase 1 | Founder-led sales | 80% project / 20% subscription | | Phase 2 | Sales-assisted | 50% project / 50% subscription | | Phase 3 | Sales + product-led | 30% project / 70% subscription | | Phase 4 | Product-led + partner | 20% project / 80% subscription | --- ## 12. The OpenOva Expert Network ### 12.1 Purpose The expert network is one of OpenOva's most important differentiators. Organizations adopting 56 open-source projects need access to specialists who have deep expertise in each technology - not generalists who have surface-level knowledge of everything. OpenOva provides a single relationship that connects customers to verified specialists across the entire CNCF ecosystem and beyond. 
### 12.2 Structure ``` EXPERT NETWORK │ ├── Core Team (OpenOva Employees) │ ├── Platform Architects │ ├── SRE / DevOps Engineers │ ├── AI / ML Engineers │ └── Security Specialists │ ├── Contracted Specialists (Vetted Partners) │ ├── PostgreSQL / CNPG Deep Experts │ ├── Cilium / eBPF Specialists │ ├── Kafka / Strimzi Engineers │ ├── Keycloak / Identity Architects │ ├── AI / ML Scientists │ ├── Security / Compliance Consultants │ ├── Grafana / Observability Engineers │ └── Industry Specialists (Banking, Telco, etc.) │ └── Verification & Quality ├── OpenOva-certified specialists ├── Validated on real customer deployments ├── Performance tracked and reviewed └── SLA-bound response times ``` ### 12.3 Engagement Models | Model | Description | Use Case | |-------|-------------|----------| | **Advisory Call** | 1-2 hour expert consultation | Quick architecture review, technology decision | | **Deep Dive** | Multi-day investigation or optimization | Performance tuning, incident post-mortem, DR testing | | **Embedded Expert** | Weeks or months on customer team | Major migration, platform build, capability transfer | | **Emergency Response** | 24/7 escalation to on-call specialist | Production incident requiring deep expertise | ### 12.4 Disciplines Covered The expert network covers the full breadth of the OpenOva ecosystem: **Infrastructure:** Kubernetes, OpenTofu, Crossplane, Cilium, Calico, K3s, containerd **GitOps:** Flux, ArgoCD, Gitea, Git workflows, CI/CD design **Data:** PostgreSQL (CNPG), FerretDB, Strimzi (Kafka), Valkey (Redis), ClickHouse, Debezium **Security:** OpenBao, cert-manager, Kyverno, OPA, Trivy, Falco, OpenSearch SIEM, zero-trust architecture **Observability:** Grafana, Loki, Mimir, Tempo, OpenTelemetry, Prometheus, Hubble **Networking:** Cilium, eBPF, PowerDNS authoritative + lua-records, load balancing, service mesh **Identity:** Keycloak, OIDC, OAuth 2.0, FAPI, SAML **AI/ML:** vLLM, KServe, Milvus, LangChain, RAG architectures, model optimization 
**Compliance:** PSD2, DORA, NIS2, SOX, GDPR, banking regulation --- ## 13. Migration Program ### 13.0 Migration Philosophy: The Airlines Analogy Migration is not lift-and-shift. It is comprehensive modernization — including AI modernization. Think of an airline replacing its fleet. You do not ground every plane at once. You keep flying while systematically replacing older aircraft with modern ones. Passengers barely notice. Routes continue. Service improves gradually. OpenOva Exodus works the same way. We do not rip out your existing infrastructure overnight. We run new and old in parallel, migrate workloads incrementally, validate at every step, and decommission legacy only when the new platform is proven. Zero downtime. Zero disruption. Full modernization. But Exodus goes further than traditional migration programs. We assess your entire technology landscape — not just what to swap, but what to modernize for the AI age. Which systems need AI-native infrastructure? Where are you blocked from adopting AI operations because your platform was never designed for it? What is the AI readiness of your current architecture? Exodus provides the comprehensive assessment and roadmap, not just the tool swap. ### 13.1 Overview Many organizations are trapped in proprietary ecosystems that are expensive, restrictive, and create the very vendor lock-in they wanted to avoid. OpenOva provides structured migration paths from proprietary platforms, databases, observability tools, and security products to open-source alternatives. 
### 13.2 Migration Paths #### Platform Migrations | From | To | Key Challenges | |------|----|----------------| | Red Hat OpenShift | OpenOva (K3s + Cilium + Flux) | Operator compatibility, SCC → Kyverno, Routes → Gateway API | | VMware Tanzu | OpenOva | Container migration, NSX → Cilium, vSphere dependency removal | | Amazon EKS / Google GKE / Azure AKS | OpenOva (self-hosted) | Cloud service dependency mapping, IAM → Keycloak, managed DB → operators | | Legacy VMs | OpenOva (containerized) | Application containerization, state management, networking | #### Database Migrations | From | To | Key Challenges | |------|----|----------------| | Oracle Database | CNPG (PostgreSQL) | Schema conversion, PL/SQL → PL/pgSQL, performance tuning | | Redis Enterprise | Valkey | Command compatibility (near-complete), module alternatives | | Confluent Kafka | Strimzi (Apache Kafka) | Protocol-compatible, but configuration and tooling differences | | MongoDB Atlas | FerretDB on CNPG (PostgreSQL) | MongoDB wire protocol compatibility, data migration, connection string changes | | Amazon RDS | CNPG (PostgreSQL) | WAL streaming setup, connection migration, backup strategy change | #### Observability Migrations | From | To | Annual Savings | |------|----|----------------| | Datadog | Grafana Stack (Loki + Mimir + Tempo + Grafana) | €200-400K | | Splunk | Loki + Grafana | €150-300K | | New Relic | OTel + Grafana | €100-250K | | Dynatrace | OTel + Mimir + Tempo + Grafana | €150-350K | #### Security & Identity Migrations | From | To | Annual Savings | |------|----|----------------| | Auth0 | Keycloak | €50-100K | | Okta | Keycloak | €50-150K | #### CI/CD Migrations | From | To | |------|----| | GitHub Actions | Gitea Actions (compatible syntax) | | GitLab CI | Gitea Actions | | Jenkins | Gitea Actions | | CircleCI | Gitea Actions | #### Runtime Security Migrations | From | To | Annual Savings | |------|----|----------------| | Prisma Cloud | Falco + OpenSearch SIEM + Kyverno + Specter | 
€100-200K | | Aqua Security | Falco + OpenSearch SIEM + Kyverno + Specter | €80-150K | | CrowdStrike Falcon | Falco + OpenSearch SIEM + Specter | €100-250K | ### 13.3 Migration Methodology Every migration follows a structured approach: ``` 1. ASSESS (Week 1-2) ├── Inventory current state ├── Map dependencies ├── Identify risks and blockers └── Estimate effort and timeline 2. PLAN (Week 2-3) ├── Design target architecture ├── Define migration sequence ├── Build rollback strategy └── Agree success criteria 3. PILOT (Week 3-6) ├── Migrate non-critical workload ├── Validate functionality ├── Performance comparison └── Team training on new stack 4. MIGRATE (Week 6-12) ├── Phased migration of production workloads ├── Parallel running during transition ├── Continuous validation └── Rollback if needed 5. OPTIMIZE (Week 12+) ├── Performance tuning ├── Cost optimization ├── Team enablement └── Decommission legacy ``` ### 13.4 AI Modernization Assessment Exodus includes a comprehensive AI modernization assessment — evaluating not just what to migrate, but what to modernize for AI-native operations. | Assessment Area | What We Evaluate | Output | |----------------|-----------------|--------| | **Infrastructure AI-readiness** | Are your components exposing structured CRDs? Is telemetry unified? Are health endpoints standardized? | AI-readiness score per component | | **Operations AI-readiness** | Are your runbooks codified? Are incident patterns documented? Is remediation automatable? | AI operations opportunity map | | **Data pipeline AI-readiness** | Can your data systems feed AI models? Are schemas structured for machine consumption? | Data modernization roadmap | | **Security AI-readiness** | Are your policies machine-readable? Can compliance checks be automated? | Security automation roadmap | | **Cost efficiency** | What would AI-native operations save vs. current manual/scripted approach? | ROI projection for AI-native migration | This assessment is unique to Exodus. 
Traditional migration programs swap tools. Exodus modernizes your entire infrastructure for the AI age. --- ## 14. ROI & Total Cost of Ownership ### 14.1 The Cost of the Status Quo Organizations currently pay for a fragmented set of proprietary tools, platforms, and staffing: | Cost Category | Typical Annual Cost | |---------------|-------------------| | Platform licensing (OpenShift, Tanzu) | €150-300K | | Observability SaaS (Datadog, Splunk, New Relic) | €200-400K | | Database licensing (Oracle, Redis Enterprise, Confluent) | €100-300K | | Security tooling (Prisma, Aqua, PagerDuty) | €100-200K | | Identity SaaS (Auth0, Okta) | €50-150K | | Platform team (5-10 engineers) | €500K-1.2M | | SOC/NOC team (5-10 engineers) | €500K-1M | | External consulting (Big 4, boutique) | €200-500K | | **Total annual cost** | **€1.8M-4.05M** | ### 14.2 The OpenOva Alternative | Cost Category | OpenOva Annual Cost | |---------------|-------------------| | OpenOva platform support (subscription) | €50-200K | | Managed operations (optional) | €50-150K | | Expert network hours | €30-100K | | Initial deployment + migration (one-time, amortized) | €30-100K | | Reduced platform team (2-3 engineers instead of 5-10) | €200-400K | | SOC/NOC (Specter replaces 70%+) | €100-200K | | **Total annual cost** | **€460K-1.15M** | ### 14.3 Savings Summary | Metric | Traditional | OpenOva | Savings | |--------|-------------|---------|---------| | Annual infrastructure + tooling | €600K-1.35M | €130-450K | 60-85% | | Annual staffing (platform + SOC/NOC) | €1M-2.2M | €300-600K | 65-75% | | Time to production platform | 2-3 years | Hours-days | 99% | | Time to production DR | 6-12 months | Hours | 99% | | Annual consulting spend | €200-500K | Included | 100% | ### 14.4 ROI Example: Mid-Size Bank (96 cores, 2 regions) ``` CURRENT STATE (estimated annual cost): OpenShift licensing: €200,000 Datadog: €250,000 Redis Enterprise: €80,000 Confluent Kafka: €120,000 Auth0: €60,000 Platform team (5 engineers): 
€600,000 SOC/NOC team (5 engineers): €500,000 ────────────────────────────────────── Total: €1,810,000/year WITH OPENOVA (estimated annual cost): OpenOva subscription (96 cores): €XX,000 Expert network hours (80hr/qtr): €XX,000 Platform team (2 engineers): €240,000 SOC/NOC (Specter + 1 engineer): €120,000 ────────────────────────────────────── Total: €XXX,000/year ANNUAL SAVINGS: €1,000,000+ PAYBACK PERIOD: < 3 months ``` *Note: Exact pricing to be finalized based on market validation with first 2-3 bank clients.* --- ## 15. Community & Ecosystem ### 15.1 Community Strategy OpenOva's open-source blueprints create a natural community funnel. The strategy follows a proven open-source model: community builds awareness and trust, enterprise support captures revenue. ### 15.2 Platforms | Platform | Purpose | Content | |----------|---------|---------| | **GitHub** | Blueprint repository, issues, discussions | Source code, release notes, contribution guide | | **Discord** | Real-time community, help channels | Tech support, architecture discussions, announcements | | **LinkedIn** | Thought leadership, professional network | Case studies, industry insights, hiring | | **YouTube** | Technical demos, tutorials, deep-dives | Architecture walkthroughs, deployment tutorials, expert interviews | | **Dev.to / Hashnode** | Technical blog posts | How-to articles, comparisons, best practices | | **CNCF Slack** | Ecosystem participation | Contributing to relevant channels | ### 15.3 Content Cadence | Frequency | Content Type | |-----------|-------------| | Weekly | Technical blog post (Cilium patterns, CNPG tips, Specter use cases) | | Bi-weekly | YouTube tutorial or demo | | Monthly | "State of the Stack" newsletter | | Quarterly | Webinar or AMA with expert network specialists | | Annually | OpenOva Summit (virtual initially, physical when scale justifies) | ### 15.4 Community-to-Customer Funnel ``` AWARENESS └── Discover blueprints on GitHub / blog post / conference talk 
ENGAGEMENT └── Join Discord for help / Star repository / Try deployment EVALUATION └── Attend webinar / Book consultancy assessment CONVERSION └── Sign platform support subscription EXPANSION └── Add Specter enhanced / Expert network / Managed operations / Fingate / Cortex ADVOCACY └── Conference talks / Case studies / Community contributions ``` --- ## 16. Growth Roadmap ### 16.1 Revenue Milestones | Phase | Timeline | Clients | ARR Target | Team Size | |-------|----------|---------|------------|-----------| | **Seed** | Month 0-6 | 2-3 | €100-300K | 3-5 | | **Traction** | Month 6-18 | 10-15 | €500K-1.5M | 8-15 | | **Scale** | Month 18-36 | 30-50 | €2-5M | 20-40 | | **Growth** | Month 36-60 | 100+ | €10M+ | 50-100 | ### 16.2 Product Milestones | Phase | Product Maturity | |-------|-----------------| | **Seed** | Core platform deployed at 2-3 banks. Specter (basic) operational. Pricing validated. | | **Traction** | Fingate production-ready. Cortex available. Specter agents trained on banking patterns. Expert network formalized. | | **Scale** | Self-service deployment via wizard. Axon SaaS generally available. Partner channel active. | | **Growth** | Blueprint marketplace. Certification program. Multiple products (Fabric, Relay, and more). Global partner network. 
| ### 16.3 Hiring Priorities | Phase | Key Hires | |-------|-----------| | **Seed** | Senior platform engineers (builders), 1 business development | | **Traction** | Pre-sales engineer, DevRel/community, additional platform engineers | | **Scale** | Sales team, customer success, product management, Specter AI engineers | | **Growth** | Regional leads, partner management, training & certification team | ### 16.4 Key Risks & Mitigations | Risk | Likelihood | Impact | Mitigation | |------|-----------|--------|------------| | First bank deals take longer than expected | Medium | High | Pipeline diversity - pursue 4-5 prospects simultaneously | | Pricing model doesn't match market expectations | Medium | Medium | Flexibility for first 2-3 clients, iterate quickly | | Competitor copies blueprint approach | Low | Medium | Execution speed, expert network depth, Specter differentiation | | Key technical person leaves | Medium | High | Document everything, distribute knowledge, expert network reduces dependency | | Open-source components change license | Low | High | Monitor licenses, maintain fork-ready posture, Valkey precedent | | Customer churns after first year | Medium | Medium | Specter creates operational dependency, expert network creates relationship stickiness | --- *This is a living document. Update as strategy evolves, market feedback is received, and decisions are validated with real customers.* *Last updated: 2026-04-28*