# OpenOva Fabric

Event-driven data integration and lakehouse analytics platform.

**Status:** Accepted | **Updated:** 2026-02-26

## Overview

OpenOva Fabric merges the data lakehouse and microservices integration stacks into a single product. It provides event-driven data pipelines, stream processing, saga orchestration, and analytics, replacing the former separate Titan and Fuse products.
```mermaid
flowchart TB
    subgraph Sources["Data Sources"]
        CDC[Debezium CDC]
        Events[Event Producers]
    end
    subgraph Streaming["Event Streaming"]
        Kafka[Strimzi/Kafka]
    end
    subgraph Processing["Stream Processing"]
        Flink[Apache Flink]
    end
    subgraph Orchestration["Workflow Orchestration"]
        Temporal[Temporal]
    end
    subgraph Storage["Data Storage"]
        Iceberg[Apache Iceberg]
        ClickHouse[ClickHouse]
        MinIO[MinIO S3]
    end
    Sources --> Streaming
    Streaming --> Processing
    Streaming --> Orchestration
    Processing --> Storage
    Orchestration --> Streaming
```
## Components

All components live under `platform/` (flat structure):
| Component | Purpose | Location |
|---|---|---|
| strimzi | Apache Kafka event streaming | platform/strimzi |
| flink | Stream and batch processing | platform/flink |
| temporal | Saga orchestration + compensation | platform/temporal |
| debezium | Change data capture (CDC) | platform/debezium |
| iceberg | Open table format (lakehouse) | platform/iceberg |
| clickhouse | OLAP analytics database | platform/clickhouse |
| minio | Object storage (S3) | platform/minio |
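
As an illustration of how the streaming layer in the table above is typically declared, a minimal Strimzi `Kafka` custom resource might look like the following. The cluster name, listener, and storage sizes are illustrative assumptions, not values taken from this product's manifests:

```yaml
# Sketch of a Strimzi Kafka cluster; name and sizes are illustrative.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: fabric-kafka   # hypothetical cluster name
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim
      size: 100Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
  entityOperator:
    topicOperator: {}
```

The Strimzi operator reconciles this resource into brokers, services, and persistent volumes; topics are then managed declaratively via `KafkaTopic` resources.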
## Use Cases

### Event-Driven Integration

Source DB → Debezium CDC → Kafka → Flink Processing → Target DB/Iceberg
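
As a sketch of the CDC leg of this pipeline, a Debezium Postgres connector can be declared through Strimzi's `KafkaConnector` CRD. The connector name, Kafka Connect cluster label, and database coordinates below are hypothetical placeholders:

```yaml
# Hypothetical Debezium CDC connector; database coordinates are placeholders.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: orders-cdc                      # hypothetical connector name
  labels:
    strimzi.io/cluster: fabric-connect  # hypothetical KafkaConnect cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config:
    database.hostname: orders-db        # placeholder source database
    database.port: 5432
    database.user: debezium
    database.dbname: orders
    topic.prefix: orders                # prefix for emitted change topics
    table.include.list: public.orders
```

Each change event lands on a topic such as `orders.public.orders`, from which Flink jobs or sink connectors consume downstream.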
### Saga Orchestration

Temporal Workflow → Step 1 (Kafka) → Step 2 (Kafka) → Compensation on failure

### Real-Time Analytics

Kafka → Flink → ClickHouse → Grafana Dashboards

### Data Lakehouse

Kafka → Flink → Iceberg (MinIO) → SQL queries via ClickHouse
## Resource Requirements

| Component | Replicas | CPU (per replica) | Memory (per replica) |
|---|---|---|---|
| Strimzi/Kafka | 3 | 2 | 8Gi |
| Flink JobManager | 1 | 1 | 2Gi |
| Flink TaskManager | 2 | 2 | 4Gi |
| Temporal | 3 | 1 | 2Gi |
| ClickHouse | 2 | 4 | 16Gi |
| Debezium | 1 | 0.5 | 1Gi |
| **Total** | - | 22.5 | 73Gi |
## Deployment

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: fabric
  namespace: flux-system
spec:
  interval: 10m
  path: ./products/fabric/deploy
  prune: true
  sourceRef:
    kind: GitRepository
    name: openova-blueprints
```
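
The Kustomization above references a `GitRepository` source named `openova-blueprints`. A minimal sketch of that source object might look like this; the repository URL is a placeholder, since the README does not state it:

```yaml
# Flux source for the blueprints repo; the URL is a placeholder.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: openova-blueprints
  namespace: flux-system
spec:
  interval: 10m
  url: https://example.com/openova/blueprints.git  # substitute real URL
  ref:
    branch: main
```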
## Configuration

| Parameter | Description | Default |
|---|---|---|
| `TENANT` | Tenant identifier | Required |
| `DOMAIN` | Base domain | Required |
| `KAFKA_REPLICAS` | Kafka broker count | 3 |
| `FLINK_PARALLELISM` | Flink task parallelism | 2 |
| `CLICKHOUSE_SHARDS` | ClickHouse shard count | 1 |
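
One plausible way to supply these parameters, assuming the deployment uses Flux's post-build variable substitution (the README does not confirm the mechanism), is via `postBuild.substitute` on the Kustomization; all values below are illustrative:

```yaml
# Hypothetical sketch: injecting parameters with Flux post-build
# variable substitution. Values are illustrative only.
spec:
  postBuild:
    substitute:
      TENANT: "acme"
      DOMAIN: "example.com"
      KAFKA_REPLICAS: "3"
      FLINK_PARALLELISM: "2"
      CLICKHOUSE_SHARDS: "1"
```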
Part of OpenOva