openova/clusters/_template/bootstrap-kit/19-harbor.yaml
e3mrah 74d23ab3dc
fix(charts): explicit harbor.openova.io/proxy-dockerhub prefix on all chart-hook images (#163) (#1367)
Per the CLAUDE.md MIRROR-EVERYTHING inviolable rule: every chart-hook
image reference (pre/post-install Jobs, helper Pods) must use the
explicit Harbor proxy-cache form. Fix #158's bitnami → bitnamilegacy
swap was a band-aid; the architecturally correct fix is to eliminate
the upstream-deletion blast radius entirely by routing every hook
pull through Harbor.

The node-level containerd mirror in infra/hetzner/cloudinit-control-
plane.tftpl (line 706) already redirects docker.io/* →
harbor.openova.io/proxy-dockerhub/* implicitly (sketch below), but
implicit routing:
  - Hides the routing from SBOM scans
  - Bypasses the Kyverno harbor-proxy-pull ClusterPolicy
  - Means a chart audit (`grep docker.io`) misses a real dependency
  - Was the proximate cause of prov #27 wedging when Bitnami deleted
    docker.io/bitnami/kubectl:1.30.4 (Fix #158 had to chase the
    deletion mid-flight instead of being insulated by the Harbor cache)
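
For reference, the node-level mirror is roughly this shape (an
illustrative k3s registries.yaml sketch, not the verbatim tftpl
content):

    mirrors:
      docker.io:
        endpoint:
          - "https://harbor.openova.io"
        rewrite:
          "^(.*)$": "proxy-dockerhub/$1"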

19 chart-hook image: refs + 5 chart values.yaml repository: defaults
now carry the explicit harbor.openova.io/proxy-dockerhub prefix
(illustrative before/after below). Application/subchart images
(keycloak, postgresql, mongodb in the keycloak + litmus subcharts)
are intentionally out of scope for this PR — those still go through
the node-level containerd mirror.
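
Illustrative shape of the change in a hook manifest (hypothetical
hook; per-chart image names vary):

    # before: implicit, resolved only by the node-level mirror
    image: docker.io/bitnamilegacy/kubectl:1.30.4
    # after: explicit Harbor proxy-cache route
    image: harbor.openova.io/proxy-dockerhub/bitnamilegacy/kubectl:1.30.4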

Affected blueprints + chart version bumps:
  bp-cert-manager            1.2.1  -> 1.2.2
  bp-external-secrets-stores 1.0.4  -> 1.0.5
  bp-crossplane-claims       1.1.4  -> 1.1.5
  bp-flux                    1.2.1  -> 1.2.2
  bp-guacamole               0.1.16 -> 0.1.17
  bp-self-sovereign-cutover  0.1.28 -> 0.1.29
  bp-k8s-ws-proxy            0.1.9  -> 0.1.10
  bp-harbor                  1.2.15 -> 1.2.16
  bp-gitea                   1.2.5  -> 1.2.6
  bp-newapi                  1.4.5  -> 1.4.6
  bp-wordpress-tenant        0.2.0  -> 0.2.1
  catalyst-platform          1.4.138 -> 1.4.139

Co-authored-by: e3mrah <1234567+e3mrah@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 11:32:21 +04:00

# bp-harbor — Catalyst bootstrap-kit Blueprint, W2.K1 slot 19.
# Per-Sovereign OCI registry. Mirrors blueprint chart artifacts and
# container images so the Sovereign isn't dependent on ghcr.io for
# day-2 image pulls; also hosts Org-private images per Application.
#
# Per ADR-0001 §13 (S3-aware app rule) + docs/omantel-handover-wbs.md
# §3 + §3a, on Hetzner Sovereigns Harbor writes its blob backend
# DIRECTLY to Hetzner Object Storage — NOT SeaweedFS, which is
# reserved as a POSIX→S3 buffer for legacy POSIX-only writers and is
# not in the minimal Sovereign set.
#
# Wrapper chart: platform/harbor/chart/ (umbrella over upstream
# goharbor/harbor chart, Catalyst-curated values under the `harbor:`
# key + a vendor-AGNOSTIC `objectStorage.s3.*` section that ships the
# harbor-namespace credentials Secret in
# REGISTRY_STORAGE_S3_{ACCESSKEY,SECRETKEY} envFrom shape).
# Reconciled by: Flux on the new Sovereign's k3s control plane.
#
# Object Storage credential pattern (issue #371, vendor-agnostic since
# #425, applied to bp-harbor in #383):
# - cloud-init writes the flux-system/object-storage Secret with 5 keys:
#   s3-endpoint / s3-region / s3-bucket / s3-access-key /
#   s3-secret-key (operator-issued in the Hetzner Console; Hetzner
#   exposes no Cloud API to mint S3 credentials. Future AWS / Azure /
#   GCP / OCI Sovereigns provision the same Secret name + same keys
#   via their respective `infra/<provider>/` Tofu modules — the seam
#   is vendor-agnostic by name).
# - This HelmRelease references that Secret via Flux `valuesFrom`,
#   pulling each key into the appropriate Helm value path. The
#   umbrella chart's templates/objectstorage-credentials.yaml then
#   synthesises a harbor-namespace Secret with
#   REGISTRY_STORAGE_S3_ACCESSKEY / REGISTRY_STORAGE_S3_SECRETKEY
#   keys, referenced via persistence.imageChartStorage.s3.existingSecret.
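#
# Illustrative shapes of the two Secrets (placeholder values only;
# real credentials are operator-issued and never committed):
#
#   # source Secret, shipped by cloud-init
#   apiVersion: v1
#   kind: Secret
#   metadata:
#     name: object-storage
#     namespace: flux-system
#   stringData:
#     s3-endpoint: https://objectstorage.example.invalid
#     s3-region: fsn1
#     s3-bucket: harbor
#     s3-access-key: <operator-issued>
#     s3-secret-key: <operator-issued>
#
#   # synthesised by the umbrella chart into the harbor namespace
#   apiVersion: v1
#   kind: Secret
#   metadata:
#     name: harbor-objectstorage-credentials
#     namespace: harbor
#   stringData:
#     REGISTRY_STORAGE_S3_ACCESSKEY: <copied from s3-access-key>
#     REGISTRY_STORAGE_S3_SECRETKEY: <copied from s3-secret-key>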
#
# dependsOn: bp-cnpg + bp-cert-manager (+ bp-gateway-api since issue
# #503). The earlier dependency on bp-seaweedfs is REMOVED in 1.1.0
# (cloud-direct architecture rule; SeaweedFS is no longer a Harbor
# prerequisite on Sovereigns).
#
# Per docs/BOOTSTRAP-KIT-EXPANSION-PLAN.md §6.7 — Harbor sits in the
# storage cohort (W2.K1) rather than apps cohort because it is a
# consumer of CNPG (registry metadata DB), and its presence gates
# Cosign signing in bp-sigstore (slot 32) and image pinning across
# all later HRs.
---
apiVersion: v1
kind: Namespace
metadata:
  name: harbor
  labels:
    catalyst.openova.io/sovereign: ${SOVEREIGN_FQDN}
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: bp-harbor
  namespace: flux-system
spec:
  type: oci
  interval: 15m
  url: oci://ghcr.io/openova-io
  secretRef:
    name: ghcr-pull
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: bp-harbor
  namespace: flux-system
spec:
  interval: 15m
  releaseName: harbor
  targetNamespace: harbor
  # Harbor depends on:
  # - bp-cnpg(16): registry metadata DB (postgresql.cnpg.io/v1.Cluster).
  # - bp-cert-manager(02): registry endpoint TLS via ClusterIssuer.
  # bp-seaweedfs dependency REMOVED per ADR-0001 §13 (cloud-direct).
  dependsOn:
    - name: bp-cnpg
    - name: bp-cert-manager
    # bp-gateway-api (issue #503): chart ships an HTTPRoute template
    # (sketch below); gateway.networking.k8s.io/v1 CRDs must be
    # registered first.
    - name: bp-gateway-api
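  # Roughly the HTTPRoute the wrapper chart renders (illustrative
  # sketch; the parentRef and backend Service name/port here are
  # assumptions; the real template lives in platform/harbor/chart/):
  #   apiVersion: gateway.networking.k8s.io/v1
  #   kind: HTTPRoute
  #   metadata:
  #     name: harbor
  #     namespace: harbor
  #   spec:
  #     parentRefs:
  #       - name: catalyst-gateway          # assumed Gateway name
  #         namespace: gateway              # assumed Gateway namespace
  #     hostnames:
  #       - registry.${SOVEREIGN_FQDN}      # from values.gateway.host
  #     rules:
  #       - backendRefs:
  #           - name: harbor-portal         # assumed Service name
  #             port: 80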
  chart:
    spec:
      chart: bp-harbor
      # 1.2.15: hot-fix for issue #949 — admin-secret.yaml duplicate
      #   label keys (app.kubernetes.io/name, catalyst.openova.io/
      #   component) made Helm's strict YAML post-render reject the
      #   rendered manifest, blocking the upgrade chain on otech113.
      #   Labels in admin-secret.yaml are now inlined verbatim instead
      #   of `include "bp-harbor.labels"` + override, eliminating the
      #   collision.
      # 1.2.14: Catalyst-curated `harbor-admin` Secret with Reflector
      #   mirror annotations (sketch below) into the `catalyst` ns so
      #   the bp-self-sovereign-cutover Step 02 (harbor-projects) Job
      #   in `catalyst` can read HARBOR_ADMIN_PASSWORD via secretKeyRef
      #   without the cross-namespace Secret read K8s otherwise
      #   forbids. Caught live on otech113 2026-05-05 (issue #935
      #   Bug 1) — Step 02 sat in CreateContainerConfigError for 11+
      #   retries, blocking cutover indefinitely.
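      # The Reflector annotations in question are the standard
      # emberstack kubernetes-reflector set; roughly (the `catalyst`
      # namespace value is inferred from the Step 02 consumer):
      #   reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
      #   reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "catalyst"
      #   reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
      #   reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "catalyst"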
      version: 1.2.16
      sourceRef:
        kind: HelmRepository
        name: bp-harbor
        namespace: flux-system
  # Event-driven install per docs/INVIOLABLE-PRINCIPLES.md #3.
  # timeout: 15m — Harbor's post-install hooks (DB migration, job-service
  # init) legitimately need >5m on a cold k3s node. Same canonical-seam
  # pattern as Fix #127 (cutover), Fix #131 (gitea), Fix #143 (es-stores):
  # an explicit HR-level timeout overrides Helm's 5m default, which
  # expires before Harbor reaches Ready (prov #24 c776423270f4ae30
  # 04:17 incident).
  install:
    timeout: 15m
    disableWait: true
    remediation:
      retries: 3
  upgrade:
    timeout: 15m
    disableWait: true
    remediation:
      retries: 3
  # ── Vendor-agnostic Object Storage backend wiring (issue #383 / #425) ──
  #
  # Each entry below pulls a single key from the canonical
  # flux-system/object-storage Secret (shipped by cloud-init in
  # infra/<provider>/cloudinit-control-plane.tftpl) into the matching
  # value path in the umbrella chart. Flux dereferences `valuesFrom` at
  # HelmRelease apply time, so plaintext credentials never appear in
  # this committed manifest.
  #
  # NOTE: targetPath uses dot notation; keys are required by default
  # (`optional: false` is the implicit default).
  valuesFrom:
    - kind: Secret
      name: object-storage
      valuesKey: s3-bucket
      targetPath: harbor.persistence.imageChartStorage.s3.bucket
    - kind: Secret
      name: object-storage
      valuesKey: s3-region
      targetPath: harbor.persistence.imageChartStorage.s3.region
    - kind: Secret
      name: object-storage
      valuesKey: s3-endpoint
      targetPath: harbor.persistence.imageChartStorage.s3.regionendpoint
    - kind: Secret
      name: object-storage
      valuesKey: s3-access-key
      targetPath: objectStorage.s3.accessKey
    - kind: Secret
      name: object-storage
      valuesKey: s3-secret-key
      targetPath: objectStorage.s3.secretKey
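  # After Flux dereferences the entries above, the merged values handed
  # to Helm look roughly like this (placeholder values, matching the
  # Secret sketch in the header comment):
  #   harbor:
  #     persistence:
  #       imageChartStorage:
  #         s3:
  #           bucket: harbor
  #           region: fsn1
  #           regionendpoint: https://objectstorage.example.invalid
  #   objectStorage:
  #     s3:
  #       accessKey: <from s3-access-key>
  #       secretKey: <from s3-secret-key>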
  # Per-Sovereign overrides — issue #387 + #383:
  # - gateway.host wires the per-Sovereign hostname into the HTTPRoute.
  # - objectStorage.enabled: true engages the cloud-direct S3 backend
  #   (Hetzner Object Storage on Hetzner Sovereigns).
  # - harbor.persistence.imageChartStorage.type: s3 flips the upstream
  #   chart off its default filesystem mode.
  # - harbor.persistence.imageChartStorage.s3.existingSecret matches the
  #   credentials Secret name templated by the umbrella chart.
  values:
    gateway:
      host: registry.${SOVEREIGN_FQDN}
    objectStorage:
      enabled: true
      useExistingSecret: false
      credentialsSecretName: harbor-objectstorage-credentials
    harbor:
      persistence:
        imageChartStorage:
          type: s3
          s3:
            existingSecret: harbor-objectstorage-credentials
            v4auth: true
            secure: true
            storageclass: STANDARD