Reworks bp-harbor to write blobs DIRECTLY to the cloud-provider's
native S3 endpoint (Hetzner Object Storage on Hetzner Sovereigns)
per ADR-0001 §13. Mirrors 1:1 the post-#425 vendor-agnostic seam
shipped in bp-velero:1.2.0 (PR #435 / SHA 0172b9a8).
Canonical seam used (per anti-duplication rule + docs/omantel-handover-wbs.md §3a):
- Sealed Secret name: flux-system/object-storage (NOT hetzner-prefixed)
- Chart values block: .Values.objectStorage.{enabled,useExistingSecret,credentialsSecretName,s3.{accessKey,secretKey}}
- Template filename: templates/objectstorage-credentials.yaml
- Reference impl: platform/velero/chart/ (PR #435)
Chart changes (platform/harbor/chart/):
- Chart.yaml: 1.0.0 → 1.1.0; description rewritten to emphasise
cloud-direct architecture + remove SeaweedFS hard-dep claim.
- values.yaml: REMOVED hardcoded SeaweedFS endpoint
(http://seaweedfs-s3.seaweedfs.svc.cluster.local:8333) from
persistence.imageChartStorage.s3.regionendpoint. Default
type flipped to `filesystem` so contabo/dev render is clean.
Added vendor-agnostic objectStorage block:
    objectStorage:
      enabled: false
      useExistingSecret: false
      credentialsSecretName: ""
      s3: { accessKey: "", secretKey: "" }
- templates/objectstorage-credentials.yaml (NEW): synthesises a
  harbor-namespace Secret with REGISTRY_STORAGE_S3_ACCESSKEY +
  REGISTRY_STORAGE_S3_SECRETKEY keys (the upstream chart's
  persistence.imageChartStorage.s3.existingSecret consumption
  shape — envFrom on the registry pod); see the sketch after this
  change list. Skip-render branch when objectStorage.enabled=false
  (default).
- templates/_helpers.tpl: added bp-harbor.objectStorageCredentialsSecretName
helper.
- templates/networkpolicy.yaml: egress rule retargeted from
SeaweedFS service-namespace selector → external HTTPS:443
(works for any cloud-native S3 endpoint without vendor coupling).
Gated on `.Values.objectStorage.enabled`. Removed
seaweedfsNamespace + seaweedfsS3Port overlay keys.
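Illustrative shape of the new template (hedged sketch: helper name,
value paths, and Secret keys come from this change; exact labels and
quoting may differ in the shipped file):
    {{- if .Values.objectStorage.enabled }}
    apiVersion: v1
    kind: Secret
    metadata:
      name: {{ include "bp-harbor.objectStorageCredentialsSecretName" . }}
      namespace: {{ .Release.Namespace }}
    type: Opaque
    stringData:
      # Keys the upstream registry pod consumes via envFrom when
      # persistence.imageChartStorage.s3.existingSecret points here.
      REGISTRY_STORAGE_S3_ACCESSKEY: {{ .Values.objectStorage.s3.accessKey | quote }}
      REGISTRY_STORAGE_S3_SECRETKEY: {{ .Values.objectStorage.s3.secretKey | quote }}
    {{- end }}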
Per-Sovereign overlays (clusters/{_template,omantel,otech}/bootstrap-kit/19-harbor.yaml):
- Chart version reference bumped 1.0.0 → 1.1.0.
- dependsOn: bp-seaweedfs REMOVED. New dependsOn = bp-cnpg + bp-cert-manager.
- Added valuesFrom block mapping the 5 keys of the flux-system/object-storage Secret (sketch after this list):
    s3-bucket     → harbor.persistence.imageChartStorage.s3.bucket
    s3-region     → harbor.persistence.imageChartStorage.s3.region
    s3-endpoint   → harbor.persistence.imageChartStorage.s3.regionendpoint
    s3-access-key → objectStorage.s3.accessKey
    s3-secret-key → objectStorage.s3.secretKey
- Inline values flip objectStorage.enabled=true,
  harbor.persistence.imageChartStorage.type=s3, and
  harbor.persistence.imageChartStorage.s3.existingSecret=harbor-objectstorage-credentials.
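Illustrative HelmRelease fragment for the overlay wiring (hedged
sketch: field shapes follow Flux's HelmRelease valuesFrom API, names
per this change):
    valuesFrom:
      - kind: Secret
        name: object-storage
        valuesKey: s3-bucket
        targetPath: harbor.persistence.imageChartStorage.s3.bucket
      - kind: Secret
        name: object-storage
        valuesKey: s3-access-key
        targetPath: objectStorage.s3.accessKey
      # ...s3-region, s3-endpoint, s3-secret-key follow the same pattern
    values:
      objectStorage:
        enabled: true
      harbor:
        persistence:
          imageChartStorage:
            type: s3
            s3:
              existingSecret: harbor-objectstorage-credentials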
UI catalog (products/catalyst/bootstrap/ui/src/shared/constants/components.ts):
- Harbor's `dependencies` array drops `seaweedfs`. Now ['cnpg', 'valkey'].
Validation:
- helm template, default render → 1448 lines, 5 Secrets (Harbor
  internal: core/jobservice/registry/registry-htpasswd/database — NO
  objectstorage-credentials), type=filesystem, 0 SeaweedFS references.
- helm template, overlay render with objectStorage.enabled=true +
  type=s3 + bucket=omantel-harbor + region=fsn1 +
  regionendpoint=https://fsn1.your-objectstorage.com +
  existingSecret=harbor-objectstorage-credentials →
  1452 lines, 6 Secrets (5 internal + 1 objectstorage-credentials),
  type=s3 with Hetzner endpoint, registry pod envFrom wired to the
  new Secret, 0 SeaweedFS references.
- scripts/check-vendor-coupling.sh → exit 0 (no violations across
  platform/, clusters/, products/catalyst/bootstrap/{api,ui}/).
- helm lint → 0 failures.
WBS:
- §2 row 18 → 🟢 chart-released (#383).
- §9 #383 row → 🟢 chart-released narrative.
- §6 DAG: T383 moved from `class blocked` → `class done`.
- Hetzner-S3 E2E deferred to Phase 8 (first omantel run).
Co-authored-by: hatiyildiz <hatiyildiz@noreply.github.com>
Harbor
Container registry with vulnerability scanning. Per-host-cluster infrastructure (see docs/PLATFORM-TECH-STACK.md §3.5) — every host cluster runs a Harbor instance for Catalyst component images, mirrored Blueprint OCI artifacts, and customer images.
Status: Accepted | Updated: 2026-04-27
Overview
Harbor is mandatory on every host cluster. Each host cluster runs its own Harbor instance that mirrors from upstream sources (ghcr.io/openova-io/... for Catalyst components and Blueprint OCI artifacts; the customer's own CI for application images). Local Harbor = fast Pod pulls, no cross-region traffic on every image pull, air-gap ready.
```mermaid
flowchart TB
    subgraph Upstream["Upstream OCI sources"]
        GHCR[ghcr.io/openova-io/* — Catalyst + Blueprints]
        CustCI[Customer CI — Application images]
    end
    subgraph Cluster1["Host cluster A (e.g. hz-fsn-rtz-prod)"]
        H1[Harbor — local mirror]
        T1[Trivy Scanner]
        Pods1[Pods pull locally]
    end
    subgraph Cluster2["Host cluster B (e.g. hz-hel-rtz-prod)"]
        H2[Harbor — local mirror]
        T2[Trivy Scanner]
        Pods2[Pods pull locally]
    end
    GHCR -.->|"pull mirror"| H1
    CustCI -.->|"push"| H1
    GHCR -.->|"pull mirror"| H2
    CustCI -.->|"push"| H2
    H1 --> T1
    H2 --> T2
    H1 --> Pods1
    H2 --> Pods2
```
Why Mandatory?
| Requirement | Harbor (per host cluster) | External Registry |
|---|---|---|
| Local pulls (no cross-region traffic) | ✅ Each cluster's Pods pull from local Harbor | ❌ Pods pull cross-region |
| Vulnerability scanning | ✅ Trivy integrated | ⚠️ Depends on provider |
| Air-gap support | ✅ Self-hosted | ❌ |
| RBAC | ✅ Full control | ⚠️ Provider-specific |
| Audit logging | ✅ Complete | ⚠️ Limited |
| No external dependency at runtime | ✅ Once mirrored | ❌ |
Features
| Feature | Support |
|---|---|
| Image storage | OCI-compliant |
| Vulnerability scanning | Trivy integration |
| Image signing | Cosign/Notary |
| Replication | Push/pull between regions |
| RBAC | Project-based access |
| Quotas | Per-project storage limits |
| Garbage collection | Automatic cleanup |
Per-host-cluster mirroring (NOT primary-replica)
Catalyst's agreed model is one Harbor per host cluster, each independently pulling from upstream OCI sources. There is no primary/replica Harbor-to-Harbor replication.
```mermaid
sequenceDiagram
    participant CI as CI / Upstream OCI
    participant H1 as Harbor (cluster A)
    participant T1 as Trivy (cluster A)
    participant H2 as Harbor (cluster B)
    participant T2 as Trivy (cluster B)
    participant Pods as Pods
    CI->>H1: pull-mirror sync (configured per project)
    H1->>T1: scan on ingest
    CI->>H2: pull-mirror sync (independent of H1)
    H2->>T2: scan on ingest
    Pods->>H1: pull (cluster A Pods)
    Pods->>H2: pull (cluster B Pods)
```
Why pull-mirror, not Harbor-to-Harbor replication:
- Single source of truth = upstream (ghcr.io/openova-io/... or customer CI), not a "primary Harbor".
- Each cluster is its own failure domain — primary-replica drift between Harbors would be one more thing to fail.
- Air-gap path is the same shape: a one-time mirror import vs ongoing primary-pushed replication.
Benefits:
- Images available locally in each cluster.
- Survives any cluster (including the management cluster) going down — workload clusters keep pulling locally.
- Faster pulls (no cross-region traffic per Pod start).
Storage Backend Options
| Backend | Use Case | Notes |
|---|---|---|
| PVC (`type: filesystem`) | Dev / contabo / single-node | Default render — no S3 wiring |
| Cloud-native S3 | Production Sovereigns | Hetzner Object Storage / AWS S3 / GCP / Azure |
Recommended: Cloud-native S3 (per ADR-0001 §13)
S3-aware apps (Harbor is one) write DIRECTLY to the cloud-provider's native S3 endpoint. SeaweedFS is reserved as a POSIX→S3 buffer for legacy POSIX-only writers and is NOT in the minimal Sovereign set.
```mermaid
flowchart LR
    Harbor[Harbor] -->|"S3 API (HTTPS)"| Hetzner[Hetzner Object Storage<br/>fsn1.your-objectstorage.com]
```
Configuration
Helm Values (per-Sovereign overlay shape — issue #383 / #425)
```yaml
gateway:
  host: registry.<sovereign-fqdn>

# Vendor-agnostic Object Storage seam — populated via Flux valuesFrom
# against the canonical flux-system/object-storage Sealed Secret.
objectStorage:
  enabled: true
  credentialsSecretName: harbor-objectstorage-credentials
  s3:
    accessKey: ""  # populated by Flux valuesFrom
    secretKey: ""  # populated by Flux valuesFrom

harbor:
  persistence:
    imageChartStorage:
      type: s3
      s3:
        # bucket / region / regionendpoint also populated by Flux valuesFrom
        existingSecret: harbor-objectstorage-credentials
        v4auth: true
        secure: true
  trivy:
    enabled: true
  database:
    type: internal  # or external for CNPG
  redis:
    type: internal  # or external for Valkey
  core:
    secretName: harbor-core-secret
```
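For reference, this is the decrypted shape of the canonical flux-system/object-storage Sealed Secret that the valuesFrom wiring reads (a hedged sketch: key names follow the seam above, values are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: object-storage
  namespace: flux-system
type: Opaque
stringData:
  s3-bucket: <bucket>          # e.g. omantel-harbor
  s3-region: <region>          # e.g. fsn1
  s3-endpoint: https://<region>.your-objectstorage.com
  s3-access-key: <access-key>
  s3-secret-key: <secret-key>
```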
Pull-mirror policy
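An example replication policy (Harbor replication-API shape) that pull-mirrors the openova-io namespace from ghcr.io every six hours: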
```json
{
  "name": "ghcr-openova-mirror",
  "src_registry": {
    "type": "harbor",
    "url": "https://ghcr.io",
    "credential": {
      "access_key": "",
      "access_secret": ""
    }
  },
  "trigger": {
    "type": "scheduled",
    "trigger_settings": {
      "cron": "0 */6 * * *"
    }
  },
  "filters": [
    {
      "type": "name",
      "value": "openova-io/**"
    }
  ],
  "enabled": true
}
```
Security Scanning
Trivy Integration
| Scan Type | Trigger |
|---|---|
| On push | Automatic when image pushed |
| Scheduled | Daily full scan |
| Manual | On-demand via UI/API |
Scan Policy
| Severity | Action |
|---|---|
| Critical | Block pull |
| High | Allow (configurable) |
| Medium | Allow |
| Low | Allow |
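In Harbor these actions map to per-project settings. A hedged sketch of the project metadata behind the table (key names follow Harbor's project-metadata API; verify against the deployed Harbor version):

```yaml
metadata:
  auto_scan: "true"     # Trivy scan on push
  prevent_vul: "true"   # block pulls of images above the severity threshold
  severity: "critical"  # Critical blocks pull; High and below are allowed
```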
Kyverno Policies
Require Harbor Images
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-harbor-images
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-harbor-registry
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must be pulled from Harbor registry"
        pattern:
          spec:
            containers:
              - image: "harbor.<location-code>.<sovereign-domain>/*"
```
Resource Requirements
| Component | CPU | Memory |
|---|---|---|
| Harbor Core | 0.5 | 512Mi |
| Registry | 0.5 | 512Mi |
| Database | 0.5 | 512Mi |
| Redis | 0.25 | 256Mi |
| Trivy | 0.5 | 1Gi |
| Total | 2.25 | 2.75Gi |
Backup Strategy
Harbor data backed up via Velero to Archival S3:
```mermaid
flowchart LR
    Harbor[Harbor] --> Velero[Velero]
    Velero --> S3[Archival S3]
```
Backed up:
- Database (PostgreSQL)
- Registry storage (blobs)
- Configuration
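A minimal Velero Schedule covering the Harbor namespace (a hedged sketch: the velero.io/v1 Schedule shape is Velero's, but the name, cron, and TTL here are illustrative):

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: harbor-daily
  namespace: velero
spec:
  schedule: "0 2 * * *"        # daily at 02:00
  template:
    includedNamespaces:
      - harbor
    ttl: 720h                  # retain 30 days of backups
```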
Consequences
Positive:
- Complete control over image lifecycle.
- Built-in vulnerability scanning (Trivy on ingest).
- Per-cluster mirror = no cross-region pull traffic; each cluster is an independent failure domain.
- Air-gap ready (one-time import works the same way as ongoing pull-mirror).
- Audit trail for compliance.
Negative:
- Resource overhead (~3GB RAM)
- Operational responsibility
- Backup requirements (handled by Velero)