Gitea
Per-Sovereign Git server for Catalyst. Hosts the public Blueprint catalog mirror, Org-private Blueprints, and per-Environment Gitea repos.
Status: Accepted | Updated: 2026-04-27
Catalyst role: Per-Sovereign supporting service in the Catalyst control plane (one Gitea per Sovereign on the management cluster). See docs/PLATFORM-TECH-STACK.md §2.3 and docs/ARCHITECTURE.md §3.
Overview
Gitea provides self-hosted Git with CI/CD capabilities:
- Internal Git repository hosting (per-Sovereign).
- Gitea Actions (GitHub Actions compatible).
- HA via intra-cluster replicas (not cross-region mirror — see Multi-Region section below).
- CNPG PostgreSQL backend.
Architecture
```mermaid
flowchart TB
    subgraph Gitea["Gitea"]
        Web[Web UI]
        Git[Git Server]
        Actions[Gitea Actions]
    end
    subgraph Backend["Backend"]
        CNPG[CNPG Postgres]
        SeaweedFS[SeaweedFS Storage]
    end
    subgraph Integrations
        Flux[Flux CD]
        Console[Catalyst console]
    end
    Web --> CNPG
    Git --> CNPG
    Actions --> SeaweedFS
    Flux -->|"Clone"| Git
    Console -->|"Discover"| Git
```
Multi-Region Strategy
Catalyst runs one Gitea per Sovereign on the management cluster. Cross-region resilience comes from intra-cluster HA (multiple replicas + CNPG primary-replica), not cross-region bidirectional mirror.
```mermaid
flowchart TB
    subgraph Mgt["Management cluster (per Sovereign)"]
        G["Gitea — N replicas, HA"]
        PG[CNPG primary]
        PGR[CNPG read-replica]
        G --> PG
        PG -.->|"WAL streaming"| PGR
    end
    subgraph Region1["Workload region 1"]
        F1[Per-vcluster Flux]
    end
    subgraph Region2["Workload region 2"]
        F2[Per-vcluster Flux]
    end
    G --> F1
    G --> F2
```
Why not cross-region bidirectional mirror?
- Single source of truth simplifies the merge story (the Sovereign-wide Catalyst console writes once, all Flux instances pull from one place).
- Bidirectional mirror would create write-conflict semantics that complicate EnvironmentPolicy enforcement (which requires PR approvals to be authoritative on the destination repo).
- Workload region failures don't affect Gitea — Flux is read-mostly during outages and the management cluster is the primary failure domain to harden.
If the Sovereign needs Gitea continuity across a full management-cluster failure, the relevant pattern is a DR replica of the management cluster — not Gitea mirroring inside one Sovereign.
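The pull-only relationship above can be sketched as a Flux `GitRepository` in each vcluster pointing at the single per-Sovereign Gitea. The repository path and secret name below are illustrative assumptions, not values mandated by this document:

```yaml
# Hypothetical per-vcluster Flux source: every workload region pulls
# from the one Gitea instance; nothing writes back through Flux.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: environment-repo
  namespace: flux-system
spec:
  interval: 1m
  url: https://gitea.<location-code>.<sovereign-domain>/org/environment.git
  ref:
    branch: main
  secretRef:
    name: gitea-read-token  # read-only credential; writes go through the console
```

Because every Flux instance reads from the same URL, EnvironmentPolicy enforcement stays authoritative on the one destination repo.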
Configuration
Gitea Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea
  namespace: gitea
spec:
  replicas: 3  # intra-cluster HA (see Multi-Region Strategy)
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
        - name: gitea
          image: gitea/gitea:1.21
          env:
            - name: GITEA__database__DB_TYPE
              value: postgres
            - name: GITEA__database__HOST
              value: gitea-postgres-rw.databases.svc:5432
            # Gitea's S3-compatible storage driver is named "minio"
            # (the name is historical); here it points at SeaweedFS.
            - name: GITEA__storage__STORAGE_TYPE
              value: minio
            - name: GITEA__storage__MINIO_ENDPOINT
              value: seaweedfs.storage.svc:8333
```
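Gitea's S3-compatible storage settings use `MINIO_*`-prefixed keys regardless of the actual backend. A sketch of how the SeaweedFS credentials could be wired in — the Secret and bucket names are illustrative assumptions:

```yaml
# Hypothetical continuation of the container env: S3 credentials for
# the SeaweedFS endpoint, sourced from a Secret rather than inlined.
- name: GITEA__storage__MINIO_ACCESS_KEY_ID
  valueFrom:
    secretKeyRef:
      name: gitea-s3-credentials  # illustrative Secret name
      key: access-key
- name: GITEA__storage__MINIO_SECRET_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: gitea-s3-credentials
      key: secret-key
- name: GITEA__storage__MINIO_BUCKET
  value: gitea-storage  # illustrative bucket name
```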
Mirror Configuration
```ini
# app.ini
[mirror]
ENABLED = true
DISABLE_NEW_PULL = false
DISABLE_NEW_PUSH = false
; must be >= MIN_INTERVAL (10m by default)
DEFAULT_INTERVAL = 10m
```
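Pull mirrors (e.g. the public Blueprint catalog mirror) are created per repository through the Gitea migrate API rather than in app.ini. A hedged sketch — the token variable, owner, and upstream URL are placeholder assumptions:

```shell
# Create a pull mirror via Gitea's migrate API.
# $GITEA_TOKEN, the "catalyst" owner, and the upstream clone_addr are
# illustrative placeholders, not values mandated by this document.
curl -sS -X POST \
  "https://gitea.<location-code>.<sovereign-domain>/api/v1/repos/migrate" \
  -H "Authorization: token $GITEA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "clone_addr": "https://example.com/blueprints/catalog.git",
    "repo_owner": "catalyst",
    "repo_name": "blueprint-catalog",
    "mirror": true,
    "mirror_interval": "10m"
  }'
```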
Gitea Actions
GitHub Actions compatible CI/CD:
```yaml
# .gitea/workflows/ci.yaml
name: CI
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make build
      - name: Test
        run: make test
```
Actions Runner
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-runner
  namespace: gitea
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gitea-runner
  template:
    metadata:
      labels:
        app: gitea-runner
    spec:
      containers:
        - name: runner
          image: gitea/act_runner:latest
          env:
            - name: GITEA_INSTANCE_URL
              value: https://gitea.<location-code>.<sovereign-domain>
            - name: GITEA_RUNNER_REGISTRATION_TOKEN
              valueFrom:
                secretKeyRef:
                  name: gitea-runner-token
                  key: token
```
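The `gitea-runner-token` Secret referenced above can be seeded from Gitea's own CLI. A sketch assuming the Deployment name and namespace from this README and the default config path of the official image:

```shell
# Generate an Actions runner registration token inside the Gitea pod
# (the `gitea actions generate-runner-token` subcommand exists in
# recent Gitea versions) and store it as the Secret the runner uses.
TOKEN=$(kubectl exec -n gitea deploy/gitea -- \
  gitea --config /data/gitea/conf/app.ini actions generate-runner-token)
kubectl create secret generic gitea-runner-token -n gitea \
  --from-literal=token="$TOKEN"
```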
Integration Points
| Integration | Purpose |
|---|---|
| Flux CD | GitOps source repository |
| Catalyst console | Repository discovery, templates |
| External Secrets | Token management |
| CNPG | PostgreSQL database |
| SeaweedFS | LFS and Actions storage |
Backup
Gitea data is backed up via:
- CNPG for PostgreSQL (WAL streaming to async standby; backed up via Velero to SeaweedFS + cloud archival).
- SeaweedFS replication for LFS/Actions storage.
- Velero scheduled backups of the gitea namespace.
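The namespace backup in the last bullet could look like the following Velero `Schedule`; the cron expression and retention TTL are illustrative placeholders, not mandated values:

```yaml
# Hypothetical daily backup of the gitea namespace; Velero writes the
# backup data to SeaweedFS, which handles cold-tier routing.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: gitea-daily
  namespace: velero
spec:
  schedule: "0 2 * * *"  # 02:00 daily (illustrative)
  template:
    includedNamespaces:
      - gitea
    ttl: 720h  # 30-day retention (illustrative)
```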
Part of OpenOva