openova/platform/cert-manager/chart/values.yaml
e3mrah 74d23ab3dc
fix(charts): explicit harbor.openova.io/proxy-dockerhub prefix on all chart-hook images (#163) (#1367)
Per CLAUDE.md MIRROR-EVERYTHING inviolable rule: every chart-hook
image reference (pre/post-install Jobs, helper Pods) must use the
explicit Harbor proxy-cache form. Fix #158's bitnami → bitnamilegacy
swap was a band-aid; the architecturally correct fix is to defeat
upstream-deletion blast radius entirely by routing through Harbor.

The node-level containerd mirror in infra/hetzner/cloudinit-control-
plane.tftpl (line 706) already redirects docker.io/* →
harbor.openova.io/proxy-dockerhub/* implicitly, but implicit routing:
  - Hides the routing from SBOM scans
  - Bypasses the Kyverno harbor-proxy-pull ClusterPolicy
  - Means a chart audit (`grep docker.io`) reports docker.io and misses
    the real dependency (harbor.openova.io)
  - Was the proximate cause of prov #27 wedging when Bitnami deleted
    docker.io/bitnami/kubectl:1.30.4 (Fix #158 had to chase the
    deletion mid-flight instead of being insulated by the Harbor cache)
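
For reference, the implicit routing described above, expressed in k3s
registries.yaml terms (a sketch only; the actual rendered output of
cloudinit-control-plane.tftpl is not reproduced here):

```yaml
mirrors:
  docker.io:
    endpoint:
      - "https://harbor.openova.io"
    rewrite:
      # Prepend the proxy-cache project to every docker.io image path.
      "^(.*)$": "proxy-dockerhub/$1"
```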

19 chart-hook image: refs + 5 chart values.yaml repository: defaults
now carry the explicit harbor.openova.io/proxy-dockerhub prefix.
Application/subchart images (keycloak, postgresql, mongodb in the
keycloak and litmus subcharts) are intentionally out of scope for this
PR — those still go through the node-level containerd mirror.
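
The `harbor-proxy-pull` ClusterPolicy mentioned above is not part of this
diff; a Kyverno pattern-validation policy of roughly this shape is what the
explicit prefixes satisfy (the rule name, scope, and message below are
assumptions, only the policy name comes from this commit):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: harbor-proxy-pull
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-harbor-prefix   # assumed rule name
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must pull through harbor.openova.io (MIRROR-EVERYTHING)."
        pattern:
          spec:
            containers:
              # Every container image must carry the explicit Harbor prefix.
              - image: "harbor.openova.io/*"
```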

Affected blueprints + chart version bumps:
  bp-cert-manager            1.2.1  -> 1.2.2
  bp-external-secrets-stores 1.0.4  -> 1.0.5
  bp-crossplane-claims       1.1.4  -> 1.1.5
  bp-flux                    1.2.1  -> 1.2.2
  bp-guacamole               0.1.16 -> 0.1.17
  bp-self-sovereign-cutover  0.1.28 -> 0.1.29
  bp-k8s-ws-proxy            0.1.9  -> 0.1.10
  bp-harbor                  1.2.15 -> 1.2.16
  bp-gitea                   1.2.5  -> 1.2.6
  bp-newapi                  1.4.5  -> 1.4.6
  bp-wordpress-tenant        0.2.0  -> 0.2.1
  catalyst-platform          1.4.138 -> 1.4.139

Co-authored-by: e3mrah <1234567+e3mrah@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 11:32:21 +04:00


# Catalyst Blueprint umbrella metadata — the upstream chart is now resolved
# as a Helm subchart via Chart.yaml `dependencies:`. This values.yaml carries
# three things:
# 1. The catalystBlueprint metadata block (provenance + version) so
#    observability/audit pipelines can inspect the artifact and report
#    which upstream chart + version is bundled.
# 2. The upstream subchart values overlay under the `cert-manager:` key
#    (umbrella-chart convention — the dependency name from Chart.yaml is
#    the values namespace).
# 3. Catalyst overlay values consumed by templates/ (e.g. `certManager:`
#    governs templates/clusterissuer-letsencrypt-dns01.yaml).
#
# Per docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode), every operationally-
# meaningful value is configurable; cluster overlays in clusters/<sovereign>/
# may override any of these without rebuilding the Blueprint OCI artifact.
global:
  # When set, ALL image pulls in this chart route through this registry.
  # Used post-handover when the Sovereign's own Harbor takes over the
  # proxy_cache role from contabo's central Harbor. Empty = no rewrite
  # (image references use upstream defaults). The upstream cert-manager
  # subchart exposes per-component image.registry knobs:
  #   cert-manager.image.registry, cert-manager.webhook.image.registry,
  #   cert-manager.cainjector.image.registry, cert-manager.startupapicheck.image.registry
  # Per-Sovereign overlays should populate those alongside this value. Tracked under #560.
  imageRegistry: ""

catalystBlueprint:
  upstream: { chart: cert-manager, version: "v1.16.2", repo: "https://charts.jetstack.io" }
# ─── Upstream chart values (subchart key: cert-manager) ───────────────────
# `helm dependency build` resolves the upstream as a subchart; values here
# under the `cert-manager:` key flow into that subchart unchanged.
cert-manager:
  # Install CRDs as part of this chart. The Catalyst overlay's ClusterIssuer
  # template (templates/clusterissuer-letsencrypt-dns01.yaml) depends on the
  # cert-manager.io/v1 CRD being registered before the post-install hook
  # runs. The legacy `installCRDs:` flag is replaced by `crds.enabled` /
  # `crds.keep` in cert-manager v1.16+ — the two cannot both be set.
  crds:
    enabled: true
    keep: true
  # Prometheus scraping + ServiceMonitor — DEFAULT OFF.
  #
  # Per docs/INVIOLABLE-PRINCIPLES.md #4 and docs/BLUEPRINT-AUTHORING.md
  # §11.2 (Observability toggles must default false): the
  # `monitoring.coreos.com/v1` CRDs that back ServiceMonitor ship with
  # kube-prometheus-stack — an Application Blueprint that depends on the
  # bootstrap-kit. Defaulting `servicemonitor.enabled: true` creates a
  # circular CRD dependency that breaks bp-cert-manager install on a
  # fresh Sovereign. Operator opts in via per-cluster overlay after the
  # observability tier is reconciled (issue #182).
  prometheus:
    enabled: false
    servicemonitor:
      enabled: false
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      memory: 256Mi
  webhook:
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
# ─── Catalyst-managed ClusterIssuers (templates/clusterissuer-letsencrypt-dns01.yaml) ──
# The Catalyst-curated wrapper ships TWO ClusterIssuers and lets the
# operator pick the active one via these values:
#
# - letsencrypt-dns01-prod    (DEPRECATED, DEFAULT DISABLED: was
#                              dynadot-webhook-backed. The DNS-01 wildcard
#                              issuer for Sovereigns is now
#                              `letsencrypt-dns01-prod-powerdns`, shipped by
#                              bp-cert-manager-powerdns-webhook against
#                              contabo's central PowerDNS — Dynadot is NOT
#                              the API-level authority for omani.works
#                              subdomains. Cluster overlays MAY flip this
#                              back on if they ship their own dynadot-webhook
#                              for a non-omani.works pool.)
# - letsencrypt-http01-prod   (INTERIM: explicit hostnames only, no
#                              wildcards, works today via Cilium ingress)
#
# Per docs/INVIOLABLE-PRINCIPLES.md #4 ("never hardcode") all knobs are
# runtime-configurable; cluster overlays in clusters/<sovereign>/ may set
# certManager.issuers.* to flip between issuers without rebuilding the
# Blueprint OCI artifact.
# ─── CRD-establishment gate (templates/crd-gate-{hook,rbac}.yaml) ────────
# Closes #149 — bp-cert-manager terminal failure on prov #24
# (`c776423270f4ae30`): the post-install ClusterIssuer hook (weight 5)
# fired before the cert-manager.io ClusterIssuer CRD reached
# `status.conditions[?(@.type=="Established")].status == "True"`. The
# upstream Jetstack subchart installs the CRD as a regular template
# (no helm.sh/hook), so `kubectl apply` returns when the resource is
# CREATED — not when the apiextensions-apiserver controller has finished
# Establishing it. Establishment is asynchronous in the apiserver;
# observed to take up to 30s on a fresh Hetzner cold-start k3s.
#
# This Job (post-install,post-upgrade hook, weight -10) polls every CRD
# in `crds` for Established=True before the ClusterIssuer hook (weight
# 5) fires. Per docs/INVIOLABLE-PRINCIPLES.md #4 (no hardcoded band-
# aids, target-state every time): this closes the race rather than
# papering over it with `helm.sh/hook-weight: 50` or a longer Flux
# retry loop.
crdGate:
  enabled: true
  # CRDs the gate waits for. Defaults cover the cert-manager.io types
  # the Catalyst overlay templates instantiate (ClusterIssuer ships in
  # this chart; Issuer + Certificate are consumed by dependent
  # Blueprints — gating them here too means downstream HRs that
  # immediately apply Certificate CRs don't have to ship their own
  # gate). Cluster overlays MAY append CRDs here (e.g. when a custom
  # webhook ships its own CRD that should also be Established before
  # any dependent CR applies).
  crds:
    - clusterissuers.cert-manager.io
    - issuers.cert-manager.io
    - certificates.cert-manager.io
  # Total wait budget. 300s gives ~10x headroom over the worst-case
  # observed Established latency (~30s on a fresh Hetzner cold-start)
  # while still failing fast on a genuinely broken upstream (5min vs
  # an unbounded Helm timeout). Same sizing rationale as
  # bp-external-secrets-stores webhookGate.timeoutSeconds (#143).
  timeoutSeconds: 300
  # Poll interval. 2s matches bp-external-secrets-stores; <1s would
  # spam the apiserver, >5s would over-pad the success path.
  intervalSeconds: 2
  # kubectl image. Default is a known-good kubectl 1.30 build that ships
  # bash + kubectl (matches k3s 1.30 on Hetzner Sovereigns). Cluster
  # overlays MAY pin to a digest for air-gap or supply-chain reasons.
  #
  # 2026-05-11 (Fix #158): switched from docker.io/bitnami/kubectl:1.30.4
  # because Bitnami's 2025-08 secure-images cutover deleted all
  # versioned tags from docker.io/bitnami/kubectl (only :latest +
  # sha256-named tags remain). Pinned to docker.io/bitnamilegacy/kubectl
  # (Bitnami's deprecation-fallback registry path), which still carries
  # versioned tags AND retains bash/sh in the image (rancher/kubectl
  # is distroless and would break the hook's `bash -c` shell script —
  # see the platform/k8s-ws-proxy hmac-bootstrap-job.yaml comment).
  # 1.30.7 is the newest 1.30.x tag in bitnamilegacy.
  #
  # Fix #163 (2026-05-11, MIRROR-EVERYTHING): explicit Harbor proxy-cache
  # prefix per the CLAUDE.md inviolable rule. The node-level containerd
  # mirror in cloudinit-control-plane.tftpl (line 706) already rewrites
  # docker.io → harbor.openova.io/proxy-dockerhub, but explicit references
  # defeat upstream-deletion blast radius AND satisfy the Kyverno
  # `harbor-proxy-pull` ClusterPolicy.
  image: harbor.openova.io/proxy-dockerhub/bitnamilegacy/kubectl:1.30.7
  imagePullPolicy: IfNotPresent
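
For orientation, the hook Job these crdGate values drive plausibly looks like
the following (a sketch only; the real templates/crd-gate-hook.yaml is not
shown in this commit, so the Job name and exact wiring are assumptions):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: bp-cert-manager-crd-gate   # assumed name
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "-10"   # runs before the ClusterIssuer hook (weight 5)
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: crd-gate
          image: {{ .Values.crdGate.image }}
          imagePullPolicy: {{ .Values.crdGate.imagePullPolicy }}
          command: ["bash", "-c"]
          args:
            - |
              # Poll each CRD until Established=True or the budget runs out.
              deadline=$(( $(date +%s) + {{ .Values.crdGate.timeoutSeconds }} ))
              {{- range .Values.crdGate.crds }}
              until kubectl get crd {{ . }} \
                  -o jsonpath='{.status.conditions[?(@.type=="Established")].status}' \
                  2>/dev/null | grep -q True; do
                if (( $(date +%s) >= deadline )); then
                  echo "crd-gate: {{ . }} not Established in time" >&2
                  exit 1
                fi
                sleep {{ $.Values.crdGate.intervalSeconds }}
              done
              {{- end }}
```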
certManager:
  issuers:
    # ACME account email used for renewal notifications. Per Sovereign
    # convention this is ops@<pool-domain>.
    email: ops@openova.io
    # Production Let's Encrypt directory. Set to the staging URL during
    # bring-up to avoid Let's Encrypt rate limits:
    # https://acme-staging-v02.api.letsencrypt.org/directory
    acmeServer: https://acme-v02.api.letsencrypt.org/directory
    dns01:
      # DEFAULT DISABLED — the dynadot-webhook-backed letsencrypt-dns01-prod
      # issuer is deprecated for omani.works Sovereigns. The replacement
      # issuer `letsencrypt-dns01-prod-powerdns` is shipped by
      # bp-cert-manager-powerdns-webhook (bootstrap-kit slot 49) and writes
      # ACME challenge TXT records to contabo's central PowerDNS at
      # https://pdns.openova.io. Cluster overlays MAY flip this back to
      # true if they ship a custom dynadot-webhook for a non-omani.works
      # pool where Dynadot IS the API-level authority.
      enabled: false
      webhookGroupName: acme.dynadot.openova.io
      webhookSolverName: dynadot
    http01:
      enabled: true
      ingressClassName: cilium
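
To make the override path the comments describe concrete, a per-Sovereign
overlay in clusters/<sovereign>/ could flip these values without rebuilding
the Blueprint OCI artifact (the file name and registry host below are
hypothetical):

```yaml
# clusters/<sovereign>/bp-cert-manager-values.yaml (illustrative only)
global:
  # Post-handover: the Sovereign's own Harbor takes over the proxy_cache role.
  imageRegistry: harbor.sovereign.example
certManager:
  issuers:
    # Staging directory during bring-up to stay under Let's Encrypt rate limits.
    acmeServer: https://acme-staging-v02.api.letsencrypt.org/directory
crdGate:
  # Explicit proxy-cache form through the Sovereign's own Harbor.
  image: harbor.sovereign.example/proxy-dockerhub/bitnamilegacy/kubectl:1.30.7
```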