Started as gitea + relay atomic check. The gitea fix surfaced surviving
<domain> placeholders across 8 other component READMEs that prior sweeps
(Pass 29: canonical docs, Pass 32: image registries) hadn't covered.
Catalyst control-plane DNS fixes (-> {component}.<location-code>.<sovereign-domain>):
- gitea: GITEA_INSTANCE_URL.
- external-secrets: openbao ClusterSecretStore + gitea Flux GitRepository.
Application DNS fixes (-> {app}.<env>.<sovereign-domain>):
- temporal: had two drift items in one line — temporal.fuse.<domain>
(old "fuse" product name + wrong placeholder shape). Pass 32 fixed
the image ref on the same file but missed this. Now fully de-drifted.
- valkey: --replicaof valkey.region1.<domain> (non-canonical region1
segment — Catalyst encodes regions in location-code).
- strimzi: kafka-kafka-bootstrap.region1.<domain>:9092 — same.
- cnpg: postgres.region1.<domain> cross-region replica host — same.
- stunner: STUN/TURN realm — kept canonical Application form for
consistency even though STUN realms are nominally opaque.
- k8gb: Gslb ingress host app.gslb.<domain> -> app.gslb.<sovereign-domain>.
Other illustrative k8gb refs (dnsZone, nslookup examples) preserved
as they describe behavior generically.
products/relay/README.md: clean.
Preserved as correctly-generic: external-dns illustrative refs,
cert-manager <domain> (customer-supplied cert names), stalwart <domain>
(customer email-receiving domain).
Validation log Pass 35 entry: third end-to-end DNS sweep iteration
(29 -> 32 -> 35). Future passes should grep for bare <domain> early to
catch new instances introduced during edits.
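The suggested early sweep can be sketched as a grep over component READMEs; the directory layout and the drifted sample line below are illustrative assumptions, not paths from this repo:

```shell
# Sketch of the suggested early sweep (paths and sample content are
# illustrative). Flags bare <domain> placeholders, which should instead
# use the canonical <sovereign-domain> form.
tmp=$(mktemp -d)
mkdir -p "$tmp/products/demo"
printf 'url: app.fuse.<domain>\n' > "$tmp/products/demo/README.md"
# -r: recurse, -n: line numbers; <sovereign-domain> does not match <domain>
hits=$(grep -rn --include='README.md' '<domain>' "$tmp/products")
echo "$hits"
rm -rf "$tmp"
```

Running this in CI before an edit pass lands would catch new drift at review time instead of in sweep N+3.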
# CNPG (CloudNative PostgreSQL)
Production-grade PostgreSQL operator. Application Blueprint (see docs/PLATFORM-TECH-STACK.md §4.1 — Data services). Used by Organizations that want managed Postgres; also the underlying engine for FerretDB (MongoDB-compatible) and Gitea metadata. Replication via WAL streaming to async standby (Application-tier choice).
Status: Accepted | Updated: 2026-04-27
## Overview
CloudNative PostgreSQL (CNPG) provides production-grade PostgreSQL with:
- Kubernetes-native operator
- WAL streaming for multi-region DR
- Automated backups to MinIO/S3
- High availability with automatic failover
## Architecture

### Single Region

```mermaid
flowchart TB
    subgraph Cluster["CNPG Cluster"]
        Primary[Primary]
        Replica1[Replica 1]
        Replica2[Replica 2]
    end
    subgraph Backup["Backup"]
        MinIO[MinIO]
    end
    Primary -->|"WAL Stream"| Replica1
    Primary -->|"WAL Stream"| Replica2
    Primary -->|"WAL Archive"| MinIO
```
### Multi-Region DR

```mermaid
flowchart TB
    subgraph Region1["Region 1 (Primary)"]
        PG1[CNPG Primary]
    end
    subgraph Region2["Region 2 (DR)"]
        PG2[CNPG Standby]
    end
    subgraph Backup["Backup"]
        MinIO[MinIO]
    end
    PG1 -->|"WAL Streaming"| PG2
    PG1 -->|"WAL Archive"| MinIO
    MinIO -->|"WAL Restore"| PG2
```
## Configuration

### Cluster Definition

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: <org>-postgres
  namespace: databases
spec:
  instances: 3
  postgresql:
    parameters:
      max_connections: "200"
      shared_buffers: "256MB"
  storage:
    size: 10Gi
    storageClass: <storage-class>
  backup:
    barmanObjectStore:
      destinationPath: s3://cnpg-backups/<org>
      endpointURL: http://minio.storage.svc:9000
      s3Credentials:
        accessKeyId:
          name: minio-credentials
          key: access-key
        secretAccessKey:
          name: minio-credentials
          key: secret-key
      wal:
        compression: gzip
    retentionPolicy: "30d"
  monitoring:
    enablePodMonitor: true
```
### DR Replica (Region 2)

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: <org>-postgres-dr
  namespace: databases
spec:
  instances: 1
  # replica clusters need an initial bootstrap from the source cluster
  bootstrap:
    recovery:
      source: <org>-postgres
  replica:
    enabled: true
    source: <org>-postgres
  externalClusters:
    - name: <org>-postgres
      connectionParameters:
        host: postgres.<env>.<sovereign-domain>
        user: streaming_replica
      password:
        name: pg-replica-credentials
        key: password
```
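The `pg-replica-credentials` Secret referenced above is not defined in this README; a minimal sketch of its assumed shape (the password value is a placeholder, and the `basic-auth` type is a convention, not a CNPG requirement):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pg-replica-credentials
  namespace: databases
type: kubernetes.io/basic-auth
stringData:
  username: streaming_replica
  password: <replica-password>  # placeholder, supply per environment
```

The `key: password` selector in the cluster spec must match a key in this Secret's data.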
## Backup Strategy
| Type | Schedule | Retention |
|---|---|---|
| WAL Archive | Continuous | 7 days |
| Base Backup | Daily 2 AM | 30 days |
| Point-in-Time | On-demand | Per backup |
### Scheduled Backup

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: <org>-daily-backup
  namespace: databases
spec:
  schedule: "0 0 2 * * *"  # six-field cron (seconds first): daily at 02:00
  backupOwnerReference: self
  cluster:
    name: <org>-postgres
```
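The on-demand row in the backup table maps to a one-off `Backup` resource rather than a schedule; a minimal sketch (the name is illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: <org>-manual-backup  # illustrative name
  namespace: databases
spec:
  cluster:
    name: <org>-postgres
```

Applying this triggers an immediate base backup to the same object store configured on the cluster.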
## Failover

### Automatic (Within Region)

CNPG automatically promotes a replica when the primary fails.

### Manual (Cross-Region)

```shell
# Promote DR cluster
kubectl cnpg promote <org>-postgres-dr -n databases
```
## Monitoring

| Metric | Description |
|---|---|
| cnpg_pg_replication_lag | Replication lag in seconds |
| cnpg_pg_database_size_bytes | Database size |
| cnpg_pg_stat_activity_count | Active connections |
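With `enablePodMonitor: true` these metrics can back alert rules; a sketch assuming the Prometheus Operator's `PrometheusRule` CRD is installed (alert name, threshold, and labels are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cnpg-replication-lag  # illustrative
  namespace: databases
spec:
  groups:
    - name: cnpg
      rules:
        - alert: CNPGReplicationLagHigh
          # threshold is illustrative; tune to the RPO target
          expr: cnpg_pg_replication_lag > 300
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "CNPG replication lag above 5 minutes"
```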
## PgBouncer Integration

Connection pooling with PgBouncer:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: <org>-pooler
  namespace: databases
spec:
  cluster:
    name: <org>-postgres
  instances: 2
  type: rw
  pgbouncer:
    poolMode: transaction
    parameters:
      max_client_conn: "1000"
      default_pool_size: "20"
```
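Applications should connect through the Pooler's Service rather than the cluster's read-write Service. A sketch of the resulting DSN, assuming an org named `acme` and a hypothetical `app_user`/`app_db` (the Service name follows the Pooler's `metadata.name`; 5432 is the standard PostgreSQL port):

```shell
# Build the DSN for the Service created by the Pooler above.
# ORG, app_user, and app_db are illustrative values.
ORG=acme
DSN="postgresql://app_user@${ORG}-pooler.databases.svc:5432/app_db"
echo "$DSN"
```

With `poolMode: transaction`, session-level features (advisory locks, `SET` without `LOCAL`, prepared statements by default) do not survive across transactions, so applications must avoid relying on them.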
Part of OpenOva