# CNPG (CloudNativePG)

Production-grade PostgreSQL operator. Application Blueprint (see docs/PLATFORM-TECH-STACK.md §4.1 — Data services). Used by Organizations that want managed Postgres; also the underlying engine for FerretDB (MongoDB-compatible) and Gitea metadata. Replication via WAL streaming to an async standby (Application-tier choice).

**Status:** Accepted | **Updated:** 2026-04-27

## Overview

CloudNativePG (CNPG) provides production-grade PostgreSQL with:
- Kubernetes-native operator
- WAL streaming for multi-region DR
- Automated backups to SeaweedFS/S3
- High availability with automatic failover
## Architecture

### Single Region

```mermaid
flowchart TB
  subgraph Cluster["CNPG Cluster"]
    Primary[Primary]
    Replica1[Replica 1]
    Replica2[Replica 2]
  end
  subgraph Backup["Backup"]
    SeaweedFS[SeaweedFS]
  end
  Primary -->|"WAL Stream"| Replica1
  Primary -->|"WAL Stream"| Replica2
  Primary -->|"WAL Archive"| SeaweedFS
```
### Multi-Region DR

```mermaid
flowchart TB
  subgraph Region1["Region 1 (Primary)"]
    PG1[CNPG Primary]
  end
  subgraph Region2["Region 2 (DR)"]
    PG2[CNPG Standby]
  end
  subgraph Backup["Backup"]
    SeaweedFS[SeaweedFS]
  end
  PG1 -->|"WAL Streaming"| PG2
  PG1 -->|"WAL Archive"| SeaweedFS
  SeaweedFS -->|"WAL Restore"| PG2
```
## Configuration

### Cluster Definition

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: <org>-postgres
  namespace: databases
spec:
  instances: 3
  postgresql:
    parameters:
      max_connections: "200"
      shared_buffers: "256MB"
  storage:
    size: 10Gi
    storageClass: <storage-class>
  backup:
    barmanObjectStore:
      destinationPath: s3://cnpg-backups/<org>
      endpointURL: http://seaweedfs.storage.svc:8333
      s3Credentials:
        accessKeyId:
          name: seaweedfs-credentials
          key: access-key
        secretAccessKey:
          name: seaweedfs-credentials
          key: secret-key
      wal:
        compression: gzip
    # retentionPolicy sits at the backup level, not inside barmanObjectStore
    retentionPolicy: "30d"
  monitoring:
    enablePodMonitor: true
```
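The `barmanObjectStore` block above references a `seaweedfs-credentials` Secret. A minimal sketch of that Secret, with placeholder values; the key names must match the `s3Credentials` selectors:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: seaweedfs-credentials
  namespace: databases
type: Opaque
stringData:
  # Placeholder values; substitute the keys issued for the SeaweedFS endpoint.
  access-key: <seaweedfs-access-key>
  secret-key: <seaweedfs-secret-key>
```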
### DR Replica (Region 2)

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: <org>-postgres-dr
  namespace: databases
spec:
  instances: 1
  # A replica cluster must be bootstrapped from its source before it can
  # start streaming.
  bootstrap:
    pg_basebackup:
      source: <org>-postgres
  replica:
    enabled: true
    source: <org>-postgres
  externalClusters:
    - name: <org>-postgres
      connectionParameters:
        host: postgres.<env>.<sovereign-domain>
        user: streaming_replica
      password:
        name: pg-replica-credentials
        key: password
```
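To cover the "WAL Restore" path in the DR diagram, the external cluster entry can additionally reference the shared WAL archive, so the standby replays WAL from SeaweedFS whenever streaming falls behind. A sketch, assuming the same bucket and credentials as the primary's backup configuration:

```yaml
# Hypothetical variant of the externalClusters entry above:
# streaming replication plus WAL-archive fallback via SeaweedFS.
externalClusters:
  - name: <org>-postgres
    connectionParameters:
      host: postgres.<env>.<sovereign-domain>
      user: streaming_replica
    password:
      name: pg-replica-credentials
      key: password
    barmanObjectStore:
      destinationPath: s3://cnpg-backups/<org>
      endpointURL: http://seaweedfs.storage.svc:8333
      s3Credentials:
        accessKeyId:
          name: seaweedfs-credentials
          key: access-key
        secretAccessKey:
          name: seaweedfs-credentials
          key: secret-key
      wal:
        compression: gzip
```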
## Backup Strategy
| Type | Schedule | Retention |
|---|---|---|
| WAL Archive | Continuous | 7 days |
| Base Backup | Daily 2 AM | 30 days |
| Point-in-Time | On-demand | Per backup |
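On-demand backups (the basis for point-in-time recovery) are requested with a `Backup` resource against the cluster; a minimal sketch, with an illustrative name:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: <org>-ondemand   # illustrative name
  namespace: databases
spec:
  cluster:
    name: <org>-postgres
```

Point-in-time recovery itself is performed by bootstrapping a new Cluster with `bootstrap.recovery` and a `recoveryTarget` inside the WAL retention window.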
### Scheduled Backup

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: <org>-daily-backup
  namespace: databases
spec:
  # CNPG uses a six-field cron expression (with a seconds field): daily at 02:00
  schedule: "0 0 2 * * *"
  backupOwnerReference: self
  cluster:
    name: <org>-postgres
```
## Failover

### Automatic (Within Region)

CNPG automatically promotes a replica when the primary fails.

### Manual (Cross-Region)

A replica cluster is promoted by disabling replica mode on its Cluster resource (`kubectl cnpg promote` performs an in-cluster switchover, not a replica-cluster promotion):

```bash
# Promote the DR cluster to primary
kubectl patch clusters.postgresql.cnpg.io <org>-postgres-dr -n databases \
  --type merge -p '{"spec":{"replica":{"enabled":false}}}'
```
## Monitoring

| Metric | Description |
|---|---|
| `cnpg_pg_replication_lag` | Replication lag in seconds |
| `cnpg_pg_database_size_bytes` | Database size in bytes |
| `cnpg_pg_stat_activity_count` | Active connections |
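Since `enablePodMonitor: true` exposes these metrics to the Prometheus Operator, an alert on replication lag can hang off them directly. A sketch, assuming a Prometheus Operator installation that picks up rules in this namespace; the threshold is illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: <org>-postgres-alerts
  namespace: databases
spec:
  groups:
    - name: cnpg
      rules:
        - alert: CNPGReplicationLagHigh
          # 300 s is illustrative; tune to the Application's RPO.
          expr: cnpg_pg_replication_lag > 300
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Replication lag on {{ $labels.pod }} exceeds 5 minutes"
```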
## PgBouncer Integration

Connection pooling with PgBouncer:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: <org>-pooler
  namespace: databases
spec:
  cluster:
    name: <org>-postgres
  instances: 2
  type: rw
  pgbouncer:
    poolMode: transaction
    parameters:
      max_client_conn: "1000"
      default_pool_size: "20"
```
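Applications then connect through the pooler's Service rather than the cluster's `-rw` Service. A hypothetical consumer Secret; the secret name, user, and database are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-db-dsn   # hypothetical consumer secret
  namespace: databases
stringData:
  # The Pooler exposes a Service with the same name as the Pooler resource.
  dsn: postgresql://app:<password>@<org>-pooler.databases.svc:5432/app
```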
Part of OpenOva