Compare commits

...

8 Commits

Author SHA1 Message Date
hatiyildiz
39f4eb6c51 feat(bp-velero): umbrella chart for observability stack
Catalyst Blueprint umbrella for Velero — Kubernetes-native backup and
disaster recovery. Per platform/velero/README.md, ALL Velero output
goes to SeaweedFS (Catalyst's unified S3 encapsulation), which
transitions to a cloud archival backend on the cold tier.

Pinned to vmware-tanzu/velero 12.0.1 (appVersion 1.18.0) on 2026-04-29.
Bundled velero-plugin-for-aws:v1.14.0 init container so SeaweedFS S3 is
reachable. backupsEnabled/snapshotsEnabled defaulted false at this
layer (placeholders for backupStorageLocation); per-Sovereign overlays
flip on after wiring SeaweedFS endpoint + credentials. ServiceMonitor +
PodMonitor + PrometheusRule default false per BLUEPRINT-AUTHORING.md
§11.2.

Part of issue #204 observability-stack umbrellas batch.
2026-04-29 22:06:46 +02:00
hatiyildiz
15d7b66edf feat(bp-langfuse): umbrella chart for observability stack
Catalyst Blueprint umbrella for Langfuse — LLM observability platform.
Complements bp-grafana (infrastructure metrics) with AI-specific
telemetry (traces, evaluations, prompts, cost attribution).

Pinned to langfuse/langfuse 1.5.28 (appVersion 3.171.0) on 2026-04-29.

Catalyst convention: ALL bundled Bitnami subcharts are disabled —
PostgreSQL via cnpg.io/Cluster (bp-cnpg), Redis via bp-valkey,
ClickHouse via bp-clickhouse, S3 via bp-seaweedfs. Per-Sovereign
overlays wire external endpoints + Secret references. Telemetry to
Langfuse Inc. defaulted false; signUpDisabled defaulted true.

Part of issue #204 observability-stack umbrellas batch.
2026-04-29 22:04:53 +02:00
hatiyildiz
25cd32d720 feat(bp-opentelemetry): umbrella chart for observability stack
Catalyst Blueprint umbrella for the OpenTelemetry Collector — vendor-
neutral telemetry collector. Sibling to bp-alloy; per-Sovereign overlays
choose one.

Pinned to open-telemetry/opentelemetry-collector 0.152.0 (appVersion
0.150.1) on 2026-04-29. Uses the contrib distribution
(otel/opentelemetry-collector-contrib:0.150.1) so Loki/Mimir/Tempo
exporters are bundled. Deployment mode default (1 replica); DaemonSet
+ StatefulSet are values toggles. All presets default false; ingress
+ ServiceMonitor + PodMonitor + PrometheusRule + NetworkPolicy default
false per BLUEPRINT-AUTHORING.md §11.2.

Part of issue #204 observability-stack umbrellas batch.
2026-04-29 22:02:56 +02:00
hatiyildiz
52d7e15328 feat(bp-alloy): umbrella chart for observability stack
Catalyst Blueprint umbrella for Grafana Alloy — unified telemetry
collector for the LGTM stack (logs, metrics, traces; OTLP-native).

Pinned to grafana/alloy 1.8.0 (appVersion v1.16.0) on 2026-04-29.
DaemonSet controller default (one Alloy per node) so node + container
telemetry work out of the box. Empty Alloy config by default;
per-Sovereign overlays populate forwarders to bp-loki/bp-mimir/bp-tempo
once those reconcile. ServiceMonitor + ingress + CRDs default false per
BLUEPRINT-AUTHORING.md §11.2.

Part of issue #204 observability-stack umbrellas batch.
2026-04-29 22:01:24 +02:00
hatiyildiz
d279296284 feat(bp-tempo): umbrella chart for observability stack
Catalyst Blueprint umbrella for Grafana Tempo — distributed tracing
backend of the LGTM stack. Single-binary mode by default
(solo-Sovereign min); microservice mode (tempo-distributed) is a chart
swap toggle.

Pinned to grafana/tempo 1.24.4 (appVersion 2.9.0) on 2026-04-29. Local
PVC storage default; SeaweedFS S3 wiring is per-Sovereign overlay.
Metrics generator disabled by default (depends on bp-mimir).
ServiceMonitor default false per BLUEPRINT-AUTHORING.md §11.2.

Part of issue #204 observability-stack umbrellas batch.
2026-04-29 22:00:12 +02:00
hatiyildiz
363500b759 feat(bp-mimir): umbrella chart for observability stack
Catalyst Blueprint umbrella for Grafana Mimir — metrics storage tier of
the LGTM stack.

Pinned to grafana/mimir-distributed 6.0.6 (appVersion 3.0.4) on
2026-04-29. Solo-Sovereign defaults: every component scaled to 1
replica, zoneAwareReplication disabled, Kafka ingest-storage disabled.
Bundled MinIO kept enabled as a stop-gap so the chart renders;
SeaweedFS S3 wiring is per-Sovereign overlay. All metaMonitoring
toggles default false per BLUEPRINT-AUTHORING.md §11.2.

Part of issue #204 observability-stack umbrellas batch.
2026-04-29 21:59:03 +02:00
hatiyildiz
bcae54af79 feat(bp-loki): umbrella chart for observability stack
Catalyst Blueprint umbrella for Grafana Loki — log aggregation backend
of the LGTM stack. SingleBinary mode by default (solo-Sovereign min);
SimpleScalable/Distributed are values toggles.

Pinned to grafana/loki 7.0.0 (appVersion 3.6.7) on 2026-04-29.
Filesystem storage default; SeaweedFS S3 wiring is per-Sovereign overlay
when scaling out. All observability toggles default false per
BLUEPRINT-AUTHORING.md §11.2.

Part of issue #204 observability-stack umbrellas batch.
2026-04-29 21:57:33 +02:00
hatiyildiz
6cffe03393 feat(bp-grafana): umbrella chart for observability stack
Catalyst Blueprint umbrella for Grafana — visualization layer of the
LGTM observability stack (Loki/Grafana/Tempo/Mimir).

Pinned to grafana/grafana 10.5.15 (appVersion 12.3.1) — current stable
on 2026-04-29. Solo-Sovereign defaults: 1 replica, 10Gi PVC,
ServiceMonitor disabled per BLUEPRINT-AUTHORING.md §11.2.

Part of issue #204 observability-stack umbrellas batch.
2026-04-29 21:56:27 +02:00
32 changed files with 1272 additions and 0 deletions

View File

@ -0,0 +1,13 @@
apiVersion: catalyst.openova.io/v1
kind: Blueprint
metadata:
  name: alloy
  labels:
    catalyst.openova.io/section: pts-3-observability
spec:
  version: 1.0.0
  card:
    title: Alloy
    family: insights
    description: Grafana Alloy — telemetry collector (logs/metrics/traces, OTLP-native) for the LGTM observability stack.
    docs: https://grafana.com/docs/alloy/latest/

View File

@ -0,0 +1 @@
*.yaml.bak

View File

@ -0,0 +1,28 @@
apiVersion: v2
name: bp-alloy
description: |
  Catalyst Blueprint umbrella chart for Grafana Alloy. Depends on the
  upstream `alloy` chart (grafana/helm-charts) as a Helm subchart so
  `helm dependency build` pulls the upstream payload into this artifact.
  Catalyst-curated values flow into the upstream subchart under the
  `alloy:` key in values.yaml.
  Alloy is the unified telemetry collector for the LGTM stack — receives
  OTLP / Prometheus scrape / log tails and forwards to bp-loki (logs),
  bp-mimir (metrics), bp-tempo (traces). Default controller is DaemonSet
  (one pod per node) so node + pod metrics + container logs are collected.
type: application
version: 1.0.0
appVersion: "v1.16.0"
keywords: [catalyst, blueprint, alloy, observability, otlp, telemetry, collector]
maintainers:
  - name: OpenOva Catalyst
    email: catalyst@openova.io
# Pinned to grafana/alloy 1.8.0 (appVersion v1.16.0) — current stable on
# 2026-04-29. Per docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode) the
# version is operator-bumpable via PR + Blueprint release.
dependencies:
  - name: alloy
    version: "1.8.0"
    repository: "https://grafana.github.io/helm-charts"

View File

@ -0,0 +1,96 @@
# Catalyst Blueprint umbrella metadata — the upstream chart is resolved as
# a Helm subchart via Chart.yaml `dependencies:`. Catalyst-curated values
# under the `alloy:` key flow into the upstream subchart unchanged.
#
# Per docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode), every operationally-
# meaningful value is configurable; cluster overlays in clusters/<sovereign>/
# may override any of these without rebuilding the Blueprint OCI artifact.
catalystBlueprint:
  upstream:
    chart: alloy
    version: "1.8.0"
    repo: "https://grafana.github.io/helm-charts"
# ─── Upstream chart values (subchart key: alloy) ─────────────────────────
alloy:
  # Pin upstream Alloy image tag — DO NOT use floating tags.
  image:
    registry: "docker.io"
    repository: grafana/alloy
    tag: "v1.16.0"
    pullPolicy: IfNotPresent
  # Alloy runtime config. Empty by default — the upstream chart ships an
  # empty configMap that Alloy starts up against. Per-Sovereign overlays
  # populate `alloy.configMap.content` with the full Alloy config (OTLP
  # receivers + forwarders to Loki/Mimir/Tempo). Catalyst-side templates/
  # may render a default config in a follow-up PR once those Blueprints
  # have outputs declared.
  alloy:
    configMap:
      create: true
      content: ''
    clustering:
      enabled: false
    # Storage path — emptyDir by default; per-Sovereign overlays MAY pin a
    # PVC for persistent WAL.
    storagePath: /tmp/alloy
    # Resources — modest defaults; per-Sovereign overlays bump for high-
    # cardinality clusters.
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 1
        memory: 512Mi
  # Controller — DaemonSet default (one Alloy pod per node) so node-level
  # metrics + container log tails work. Per-Sovereign overlays MAY flip to
  # `deployment` for a centralised collector model.
  controller:
    type: daemonset
    replicas: 1
  # ServiceMonitor — DEFAULT FALSE per docs/BLUEPRINT-AUTHORING.md §11.2.
  serviceMonitor:
    enabled: false
  # Service — ClusterIP; UI at port 12345.
  service:
    enabled: true
    type: ClusterIP
  # Ingress — DEFAULT FALSE; per-Sovereign overlays expose the Alloy UI via
  # cilium-gateway HTTPRoute when needed.
  ingress:
    enabled: false
  # NetworkPolicy — DEFAULT FALSE; the Catalyst-side NetworkPolicy template
  # in templates/ (when added) governs this for all bp-* charts uniformly.
  networkPolicy:
    enabled: false
  # RBAC — chart manages cluster-scoped Role+Binding so Alloy can discover
  # pod/node/service targets.
  rbac:
    create: true
  serviceAccount:
    create: true
    name: ""
  # CRDs — alloy ships PodLogs CRDs as an opt-in install. DEFAULT FALSE
  # because they're cluster-scoped + may collide with an existing
  # kube-prometheus-stack operator install. Per-Sovereign overlays flip on
  # once the operator is reconciled.
  crds:
    create: false
# ─── Catalyst overlay values (consumed by templates/ in this chart) ──────
alloyOverlay:
  networkPolicy:
    enabled: false
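A per-Sovereign overlay that populates `alloy.configMap.content` could look like the sketch below — a minimal Alloy pipeline receiving OTLP logs and forwarding them to bp-loki. The overlay file path, Service hostname, and port are assumptions for illustration, not declared bp-loki outputs:

```yaml
# clusters/<sovereign>/values/bp-alloy.yaml — illustrative sketch only;
# the Loki push URL below is a placeholder.
alloy:
  alloy:
    configMap:
      content: |
        otelcol.receiver.otlp "default" {
          grpc {}
          output {
            logs = [otelcol.exporter.loki.default.input]
          }
        }
        otelcol.exporter.loki "default" {
          forward_to = [loki.write.default.receiver]
        }
        loki.write "default" {
          endpoint {
            url = "http://bp-loki:3100/loki/api/v1/push"
          }
        }
```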

View File

@ -0,0 +1,13 @@
apiVersion: catalyst.openova.io/v1
kind: Blueprint
metadata:
  name: grafana
  labels:
    catalyst.openova.io/section: pts-3-observability
spec:
  version: 1.0.0
  card:
    title: Grafana
    family: insights
    description: Visualization and dashboarding for the LGTM observability stack (Loki/Grafana/Tempo/Mimir).
    docs: https://grafana.com/docs/grafana/latest/

View File

@ -0,0 +1 @@
*.yaml.bak

View File

@ -0,0 +1,28 @@
apiVersion: v2
name: bp-grafana
description: |
  Catalyst Blueprint umbrella chart for Grafana. Depends on the upstream
  `grafana` chart (grafana/helm-charts) as a Helm subchart so
  `helm dependency build` pulls the upstream payload into this artifact.
  Catalyst-curated values flow into the upstream subchart under the
  `grafana:` key in values.yaml.
  Visualization layer of the LGTM observability stack — pairs with bp-loki
  (logs), bp-tempo (traces), bp-mimir (metrics), and bp-alloy or
  bp-opentelemetry (collection).
type: application
version: 1.0.0
appVersion: "12.3.1"
keywords: [catalyst, blueprint, grafana, observability, dashboards]
maintainers:
  - name: OpenOva Catalyst
    email: catalyst@openova.io
# Pinned to grafana/grafana 10.5.15 (appVersion 12.3.1) — current stable on
# 2026-04-29, validated against Kubernetes 1.31. Per
# docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode) the version is
# operator-bumpable via PR + Blueprint release.
dependencies:
  - name: grafana
    version: "10.5.15"
    repository: "https://grafana.github.io/helm-charts"

View File

@ -0,0 +1,97 @@
# Catalyst Blueprint umbrella metadata — the upstream chart is resolved as
# a Helm subchart via Chart.yaml `dependencies:`. Catalyst-curated values
# under the `grafana:` key flow into the upstream subchart unchanged.
#
# Per docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode), every operationally-
# meaningful value is configurable; cluster overlays in clusters/<sovereign>/
# may override any of these without rebuilding the Blueprint OCI artifact.
catalystBlueprint:
  upstream:
    chart: grafana
    version: "10.5.15"
    repo: "https://grafana.github.io/helm-charts"
# ─── Upstream chart values (subchart key: grafana) ────────────────────────
grafana:
  # Solo-Sovereign minimum — single replica. Per-Sovereign overlays scale up
  # via the upstream `replicas:` value once a regional Sovereign is sized
  # for HA.
  replicas: 1
  # Pin upstream image tag — DO NOT use floating tags per
  # docs/INVIOLABLE-PRINCIPLES.md.
  image:
    repository: grafana/grafana
    tag: "12.3.1"
    pullPolicy: IfNotPresent
  # Persistence — required so dashboard state survives pod restarts. Default
  # 10Gi on the cluster's default StorageClass; per-Sovereign overlays SET
  # storageClassName when SeaweedFS-backed PVCs are wired (issue #189).
  persistence:
    enabled: true
    type: pvc
    size: 10Gi
    accessModes:
      - ReadWriteOnce
  # ServiceMonitor — DEFAULT FALSE per docs/BLUEPRINT-AUTHORING.md §11.2
  # (Observability toggles default false — the kube-prometheus-stack CRDs
  # may not exist yet on a fresh Sovereign).
  serviceMonitor:
    enabled: false
  # Anonymous + admin defaults — operator overlays SET admin password via a
  # Secret reference. NEVER hardcode the admin password here.
  adminUser: admin
  # `adminPassword` deliberately unset; upstream chart auto-generates a
  # random password into a Secret if not provided. Per-Sovereign overlays
  # MAY pin it via `admin.existingSecret`.
  # SecurityContext — non-root, matches Catalyst defaults across other bp-*.
  securityContext:
    runAsNonRoot: true
    runAsUser: 472
    fsGroup: 472
  # Service — ClusterIP; ingress is wired by per-Sovereign overlays via the
  # cilium-gateway HTTPRoute.
  service:
    type: ClusterIP
    port: 80
    targetPort: 3000
  # Datasources — empty by default. Per-Sovereign overlays populate this
  # with bp-loki / bp-mimir / bp-tempo endpoints once those Blueprints
  # reconcile on the cluster.
  datasources: {}
  # Pre-installed dashboards — empty by default. Per-Sovereign overlays
  # populate via `dashboards:` or via dashboard ConfigMaps with the
  # `grafana_dashboard: "1"` label.
  dashboards: {}
  # Resources — modest defaults sized for a solo Sovereign.
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi
  # RBAC — chart manages its own ServiceAccount + RBAC.
  rbac:
    create: true
    pspEnabled: false
  serviceAccount:
    create: true
    name: ""
# ─── Catalyst overlay values (consumed by templates/ in this chart) ──────
# Reserved for Catalyst-side overlays (NetworkPolicy, ExternalSecret) added
# in a follow-up PR once bp-grafana is consumed in clusters/_template/.
grafanaOverlay:
  networkPolicy:
    enabled: false
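Once bp-loki / bp-mimir / bp-tempo reconcile, a per-Sovereign overlay might populate `datasources:` using the upstream chart's provisioning format — the Service URLs below are placeholders, not declared Blueprint outputs:

```yaml
# clusters/<sovereign>/values/bp-grafana.yaml — illustrative sketch only.
grafana:
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
        - name: Loki
          type: loki
          url: http://bp-loki:3100        # placeholder Service name
        - name: Mimir
          type: prometheus
          url: http://bp-mimir-gateway/prometheus  # placeholder
        - name: Tempo
          type: tempo
          url: http://bp-tempo:3100       # placeholder
```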

View File

@ -0,0 +1,13 @@
apiVersion: catalyst.openova.io/v1
kind: Blueprint
metadata:
  name: langfuse
  labels:
    catalyst.openova.io/section: pts-3-observability
spec:
  version: 1.0.0
  card:
    title: Langfuse
    family: insights
    description: LLM observability — traces, evaluations, prompt management, cost attribution. Complements bp-grafana with AI-specific telemetry.
    docs: https://langfuse.com/docs

View File

@ -0,0 +1 @@
*.yaml.bak

View File

@ -0,0 +1,35 @@
apiVersion: v2
name: bp-langfuse
description: |
  Catalyst Blueprint umbrella chart for Langfuse — LLM observability
  platform (traces, evaluations, prompt management, cost attribution).
  Depends on the upstream `langfuse` chart (langfuse/langfuse-k8s) as a
  Helm subchart so `helm dependency build` pulls the upstream payload
  into this artifact. Catalyst-curated values flow into the upstream
  subchart under the `langfuse:` key in values.yaml.
  Per docs/CLAUDE.md `langfuse.md` Catalyst routes Langfuse's persistent
  state through Catalyst-managed dependencies — NOT the bundled Bitnami
  subcharts:
  - PostgreSQL via cnpg.io/Cluster (Catalyst standard, see bp-cnpg).
  - ClickHouse via bp-clickhouse (when authored).
  - Redis via bp-valkey (when authored).
  - Object storage via SeaweedFS (bp-seaweedfs) S3-compatible endpoint.
  All four bundled deploys are disabled by default; per-Sovereign overlays
  wire the external endpoints + Secret references.
type: application
version: 1.0.0
appVersion: "3.171.0"
keywords: [catalyst, blueprint, langfuse, observability, llm, ai]
maintainers:
  - name: OpenOva Catalyst
    email: catalyst@openova.io
# Pinned to langfuse/langfuse 1.5.28 (appVersion 3.171.0) — current stable
# on 2026-04-29. Per docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode) the
# version is operator-bumpable via PR + Blueprint release.
dependencies:
  - name: langfuse
    version: "1.5.28"
    repository: "https://langfuse.github.io/langfuse-k8s"

View File

@ -0,0 +1,162 @@
# Catalyst Blueprint umbrella metadata — the upstream chart is resolved as
# a Helm subchart via Chart.yaml `dependencies:`. Catalyst-curated values
# under the `langfuse:` key flow into the upstream subchart unchanged.
#
# Per docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode), every operationally-
# meaningful value is configurable; cluster overlays in clusters/<sovereign>/
# may override any of these without rebuilding the Blueprint OCI artifact.
catalystBlueprint:
  upstream:
    chart: langfuse
    version: "1.5.28"
    repo: "https://langfuse.github.io/langfuse-k8s"
# ─── Upstream chart values (subchart key: langfuse) ──────────────────────
langfuse:
  # ──────────────────────────────────────────────────────────────────────
  # Catalyst convention: ALL bundled Bitnami subcharts are disabled.
  # Catalyst manages persistent state via its own Blueprints:
  #   - postgresql.deploy=false → cnpg.io/Cluster (bp-cnpg)
  #   - redis.deploy=false      → bp-valkey (when authored)
  #   - clickhouse.deploy=false → bp-clickhouse (when authored)
  #   - s3.deploy=false         → SeaweedFS S3 (bp-seaweedfs)
  # Per-Sovereign overlays wire the external hostnames + Secret references
  # to the resources Catalyst already provisions on the host cluster.
  # ──────────────────────────────────────────────────────────────────────
  # Core langfuse application config.
  langfuse:
    # Pin upstream Langfuse image — DO NOT use floating tags.
    image:
      tag: "3.171.0"
      pullPolicy: IfNotPresent
      pullSecrets: []
    # Default replicas — solo-Sovereign minimum is 1 each. Per-Sovereign
    # overlays bump for HA. Each component (web, worker) follows this.
    replicas: 1
    # Required secrets — Catalyst convention: all secret VALUES live in a
    # K8s Secret named `langfuse-secrets` in the install namespace, which
    # the per-Sovereign overlay creates (via SealedSecret in Phase 0,
    # ExternalSecret + OpenBao in Phase 2+). The umbrella ships only the
    # POINTERS so the chart renders cleanly without leaking credentials
    # into git.
    salt:
      secretKeyRef:
        name: langfuse-secrets
        key: salt
    encryptionKey:
      secretKeyRef:
        name: langfuse-secrets
        key: encryptionKey
    nextauth:
      url: ""
      secret:
        secretKeyRef:
          name: langfuse-secrets
          key: nextauthSecret
    # Telemetry to Langfuse Inc. — DEFAULT FALSE per
    # docs/INVIOLABLE-PRINCIPLES.md "no phone-home by default".
    features:
      telemetryEnabled: false
      signUpDisabled: true
      experimentalFeaturesEnabled: false
    nodeEnv: production
    # Ingress — DEFAULT FALSE; per-Sovereign overlays expose Langfuse via
    # cilium-gateway HTTPRoute.
    ingress:
      enabled: false
    serviceAccount:
      create: true
  # PostgreSQL — Catalyst routes via cnpg.io/Cluster (bp-cnpg). The
  # bundled Bitnami subchart is OFF; per-Sovereign overlays MUST set
  # `postgresql.host` to the cnpg `<cluster>-rw` Service and reference a
  # Secret created by the cnpg Cluster's `bootstrap.initdb.secret`.
  postgresql:
    deploy: false
    host: "langfuse-postgres-rw"
    port: 5432
    auth:
      username: postgres
      database: postgres_langfuse
      existingSecret: "langfuse-postgresql-secret"
      secretKeys:
        userPasswordKey: password
        adminPasswordKey: password
    migration:
      autoMigrate: true
  # Redis / Valkey — Catalyst routes via bp-valkey (when authored). The
  # bundled Bitnami subchart is OFF; per-Sovereign overlays MUST set
  # `redis.host` + reference an `existingSecret`.
  redis:
    deploy: false
    host: "langfuse-redis"
    port: 6379
    auth:
      username: "default"
      existingSecret: "langfuse-redis-secret"
      existingSecretPasswordKey: password
  # ClickHouse — Catalyst routes via bp-clickhouse (when authored). The
  # bundled Bitnami subchart is OFF; per-Sovereign overlays MUST set
  # `clickhouse.host` + reference an `existingSecret`.
  clickhouse:
    deploy: false
    host: "langfuse-clickhouse"
    httpPort: 8123
    nativePort: 9000
    database: default
    auth:
      username: default
      existingSecret: "langfuse-clickhouse-secret"
      existingSecretKey: password
    migration:
      url: "clickhouse://default@langfuse-clickhouse:9000"
      ssl: false
      autoMigrate: true
    clusterEnabled: false
    shards: 1
    replicaCount: 1
  # S3 — Catalyst routes via SeaweedFS (bp-seaweedfs) S3-compatible
  # endpoint. The bundled Bitnami MinIO subchart is OFF; per-Sovereign
  # overlays MUST set `s3.endpoint` + reference Secrets for accessKeyId
  # and secretAccessKey.
  s3:
    deploy: false
    storageProvider: "s3"
    bucket: "langfuse"
    region: "auto"
    endpoint: "http://seaweedfs-s3.openova-system.svc.cluster.local:8333"
    forcePathStyle: true
    accessKeyId:
      secretKeyRef:
        name: "langfuse-s3-secret"
        key: accessKeyId
    secretAccessKey:
      secretKeyRef:
        name: "langfuse-s3-secret"
        key: secretAccessKey
# ─── Catalyst overlay values (consumed by templates/ in this chart) ──────
# Reserved for the cnpg.io/Cluster + ExternalSecret + NetworkPolicy
# overlays added in a follow-up PR once bp-cnpg outputs are wired.
langfuseOverlay:
  # cnpg.io/Cluster overlay — DEFAULT FALSE because the cnpg CRDs may not
  # exist on a fresh cluster. Per-Sovereign overlays flip this on once
  # bp-cnpg has reconciled.
  cnpgCluster:
    enabled: false
    instances: 1
    storage:
      size: 20Gi
  networkPolicy:
    enabled: false
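Once bp-cnpg has reconciled, a per-Sovereign overlay might flip the cnpg Cluster overlay on and point the app at the `-rw` Service — a sketch assuming the Secret names and hostnames follow the defaults above; the external URL is a placeholder:

```yaml
# clusters/<sovereign>/values/bp-langfuse.yaml — illustrative sketch only.
langfuseOverlay:
  cnpgCluster:
    enabled: true
    instances: 1
    storage:
      size: 20Gi
langfuse:
  langfuse:
    nextauth:
      url: "https://langfuse.example.invalid"  # placeholder hostname
  postgresql:
    host: "langfuse-postgres-rw"
    auth:
      existingSecret: "langfuse-postgresql-secret"
```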

View File

@ -0,0 +1,13 @@
apiVersion: catalyst.openova.io/v1
kind: Blueprint
metadata:
  name: loki
  labels:
    catalyst.openova.io/section: pts-3-observability
spec:
  version: 1.0.0
  card:
    title: Loki
    family: insights
    description: Log aggregation backend for the LGTM observability stack. Single-binary mode for solo Sovereign; scalable mode is a values toggle.
    docs: https://grafana.com/docs/loki/latest/

View File

@ -0,0 +1 @@
*.yaml.bak

View File

@ -0,0 +1,30 @@
apiVersion: v2
name: bp-loki
description: |
  Catalyst Blueprint umbrella chart for Grafana Loki. Depends on the
  upstream `loki` chart (grafana/helm-charts) as a Helm subchart so
  `helm dependency build` pulls the upstream payload into this artifact.
  Catalyst-curated values flow into the upstream subchart under the
  `loki:` key in values.yaml.
  Default deployment shape is SingleBinary (one Loki StatefulSet) — minimum
  for a solo Sovereign. SimpleScalable / Distributed modes are a values
  toggle (`loki.deploymentMode`) once a regional Sovereign needs HA log
  ingestion. Object storage is filesystem (PVC) by default; per-Sovereign
  overlays MUST flip storage to S3 (SeaweedFS / cloud) when SimpleScalable
  or Distributed is selected — the upstream chart enforces this.
type: application
version: 1.0.0
appVersion: "3.6.7"
keywords: [catalyst, blueprint, loki, observability, logs]
maintainers:
  - name: OpenOva Catalyst
    email: catalyst@openova.io
# Pinned to grafana/loki 7.0.0 (appVersion 3.6.7) — current stable on
# 2026-04-29. Per docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode) the
# version is operator-bumpable via PR + Blueprint release.
dependencies:
  - name: loki
    version: "7.0.0"
    repository: "https://grafana.github.io/helm-charts"

View File

@ -0,0 +1,124 @@
# Catalyst Blueprint umbrella metadata — the upstream chart is resolved as
# a Helm subchart via Chart.yaml `dependencies:`. Catalyst-curated values
# under the `loki:` key flow into the upstream subchart unchanged.
#
# Per docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode), every operationally-
# meaningful value is configurable; cluster overlays in clusters/<sovereign>/
# may override any of these without rebuilding the Blueprint OCI artifact.
catalystBlueprint:
  upstream:
    chart: loki
    version: "7.0.0"
    repo: "https://grafana.github.io/helm-charts"
# ─── Upstream chart values (subchart key: loki) ──────────────────────────
loki:
  # SingleBinary mode — solo-Sovereign default. One Loki StatefulSet does
  # ingestion + querying + compaction. Per-Sovereign overlays flip this to
  # `SimpleScalable` (read/write/backend pods) or `Distributed` once HA is
  # required AND object storage is wired. The upstream chart will refuse to
  # render SimpleScalable/Distributed against filesystem storage.
  deploymentMode: SingleBinary
  # Loki core config — SingleBinary uses filesystem storage out of the box.
  # Per-Sovereign overlays MUST replace `storage:` with `s3:` (SeaweedFS or
  # cloud object storage) before flipping deploymentMode.
  loki:
    auth_enabled: false
    commonConfig:
      replication_factor: 1
    schemaConfig:
      configs:
        - from: "2024-04-01"
          store: tsdb
          object_store: filesystem
          schema: v13
          index:
            prefix: loki_index_
            period: 24h
    storage:
      type: filesystem
      bucketNames:
        chunks: chunks
        ruler: ruler
        admin: admin
    # Pin upstream image — DO NOT use floating tags.
    image:
      registry: docker.io
      repository: grafana/loki
      tag: "3.6.7"
      pullPolicy: IfNotPresent
  # SingleBinary StatefulSet — 1 replica, 10Gi PVC for chunks + index.
  singleBinary:
    replicas: 1
    persistence:
      enabled: true
      size: 10Gi
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 1
        memory: 1Gi
  # SimpleScalable / Distributed pools — DEFAULT 0 replicas. Per-Sovereign
  # overlays bump these (and flip `deploymentMode`) when scaling out.
  read:
    replicas: 0
  write:
    replicas: 0
  backend:
    replicas: 0
  # Gateway (nginx in front of Loki) — DISABLED for SingleBinary; the
  # SingleBinary StatefulSet serves directly. SimpleScalable overlays flip
  # this on.
  gateway:
    enabled: false
  # Memcached for chunks/results cache — disabled for SingleBinary; not
  # needed at solo-Sovereign scale.
  chunksCache:
    enabled: false
  resultsCache:
    enabled: false
  # Loki Canary (synthetic log producer/verifier) — DISABLED by default;
  # it runs a DaemonSet pod per node and pulls in monitoring CRDs.
  lokiCanary:
    enabled: false
  # Helm test pods — DISABLED. We don't ship `helm test` infra in production.
  test:
    enabled: false
  # Bundled MinIO subchart — DISABLED. Catalyst routes object storage
  # through SeaweedFS when SimpleScalable/Distributed is selected; never
  # MinIO embedded.
  minio:
    enabled: false
  # Monitoring (dashboards + PrometheusRules) — DEFAULT FALSE per
  # docs/BLUEPRINT-AUTHORING.md §11.2. Enabled by per-Sovereign overlays
  # once kube-prometheus-stack is reconciled.
  monitoring:
    dashboards:
      enabled: false
    rules:
      enabled: false
    serviceMonitor:
      enabled: false
    selfMonitoring:
      enabled: false
      grafanaAgent:
        installOperator: false
      lokiCanary:
        enabled: false
# ─── Catalyst overlay values (consumed by templates/ in this chart) ──────
lokiOverlay:
  networkPolicy:
    enabled: false
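Flipping to SimpleScalable could look like the overlay sketch below — the SeaweedFS endpoint is the one used elsewhere in this batch, while the replica counts, schema cut-over date, and credential wiring are assumptions:

```yaml
# clusters/<sovereign>/values/bp-loki.yaml — illustrative sketch only.
loki:
  deploymentMode: SimpleScalable
  loki:
    storage:
      type: s3
      s3:
        endpoint: http://seaweedfs-s3.openova-system.svc.cluster.local:8333
        s3ForcePathStyle: true
        # accessKeyId / secretAccessKey come from a Secret, never from git.
    schemaConfig:
      configs:
        - from: "2024-04-01"
          store: tsdb
          object_store: s3
          schema: v13
  singleBinary:
    replicas: 0
  read:
    replicas: 2
  write:
    replicas: 2
  backend:
    replicas: 2
  gateway:
    enabled: true
```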

View File

@ -0,0 +1,13 @@
apiVersion: catalyst.openova.io/v1
kind: Blueprint
metadata:
  name: mimir
  labels:
    catalyst.openova.io/section: pts-3-observability
spec:
  version: 1.0.0
  card:
    title: Mimir
    family: insights
    description: Horizontally scalable, highly available Prometheus-compatible metrics storage for the LGTM observability stack.
    docs: https://grafana.com/docs/mimir/latest/

View File

@ -0,0 +1 @@
*.yaml.bak

View File

@ -0,0 +1,29 @@
apiVersion: v2
name: bp-mimir
description: |
  Catalyst Blueprint umbrella chart for Grafana Mimir. Depends on the
  upstream `mimir-distributed` chart (grafana/helm-charts) as a Helm
  subchart so `helm dependency build` pulls the upstream payload into this
  artifact. Catalyst-curated values flow into the upstream subchart under
  the `mimir-distributed:` key in values.yaml.
  Mimir is the metrics storage tier of the LGTM stack. The upstream chart
  is microservice-mode by default; per-component replicas are scaled to 1
  for a solo Sovereign and zone-aware replication is disabled. Per-Sovereign
  overlays scale per-component replicas + flip zone-aware replication on
  for HA Sovereigns.
type: application
version: 1.0.0
appVersion: "3.0.4"
keywords: [catalyst, blueprint, mimir, observability, metrics, prometheus]
maintainers:
  - name: OpenOva Catalyst
    email: catalyst@openova.io
# Pinned to grafana/mimir-distributed 6.0.6 (appVersion 3.0.4) — current
# stable on 2026-04-29. Per docs/INVIOLABLE-PRINCIPLES.md #4 (never
# hardcode) the version is operator-bumpable via PR + Blueprint release.
dependencies:
  - name: mimir-distributed
    version: "6.0.6"
    repository: "https://grafana.github.io/helm-charts"

View File

@ -0,0 +1,106 @@
# Catalyst Blueprint umbrella metadata — the upstream chart is resolved as
# a Helm subchart via Chart.yaml `dependencies:`. Catalyst-curated values
# under the `mimir-distributed:` key flow into the upstream subchart
# unchanged.
#
# Per docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode), every operationally-
# meaningful value is configurable; cluster overlays in clusters/<sovereign>/
# may override any of these without rebuilding the Blueprint OCI artifact.
catalystBlueprint:
  upstream:
    chart: mimir-distributed
    version: "6.0.6"
    repo: "https://grafana.github.io/helm-charts"
# ─── Upstream chart values (subchart key: mimir-distributed) ─────────────
mimir-distributed:
  # Pin the global image tag — DO NOT use floating tags.
  image:
    repository: grafana/mimir
    tag: "3.0.4"
    pullPolicy: IfNotPresent
  # Per-component replicas — solo-Sovereign minimum is 1 each. Per-Sovereign
  # overlays scale these up and flip zoneAwareReplication on for HA.
  alertmanager:
    replicas: 1
    zoneAwareReplication:
      enabled: false
  distributor:
    replicas: 1
  ingester:
    replicas: 1
    zoneAwareReplication:
      enabled: false
  querier:
    replicas: 1
  query_frontend:
    replicas: 1
  query_scheduler:
    replicas: 1
  store_gateway:
    replicas: 1
    zoneAwareReplication:
      enabled: false
  compactor:
    replicas: 1
  ruler:
    replicas: 1
  overrides_exporter:
    replicas: 1
  # Gateway (nginx in front of Mimir) — keep enabled, scale to 1.
  gateway:
    enabled: true
    replicas: 1
  # Bundled MinIO — DEFAULT ON only as a stop-gap so the chart renders;
  # Catalyst routes object storage through SeaweedFS. Per-Sovereign
  # overlays MUST set `minio.enabled: false` AND wire S3 credentials
  # pointing at SeaweedFS (or cloud object storage) before production use.
  # The bundled MinIO is single-replica, ephemeral-friendly, and not for
  # production data.
  minio:
    enabled: true
  # Kafka (ingest-storage architecture) — DISABLED. The classic blocks-
  # storage architecture is the Catalyst default; Kafka ingest-storage is
  # an opt-in path that requires bp-strimzi to be reconciled first.
  kafka:
    enabled: false
  # MetaMonitoring (ServiceMonitor + Grafana dashboards) — DEFAULT FALSE
  # per docs/BLUEPRINT-AUTHORING.md §11.2.
  metaMonitoring:
    serviceMonitor:
      enabled: false
    dashboards:
      enabled: false
    grafanaAgent:
      installOperator: false
      logs:
        enabled: false
      metrics:
        enabled: false
  # Smoke + continuous-test pods — disabled in production.
  smoke_test:
    enabled: false
  continuous_test:
    enabled: false
  # rollout-operator subchart — keep enabled (it manages zone-aware rollouts
  # and is referenced by the upstream chart even with zone-awareness off).
  rollout_operator:
    enabled: true
    replicas: 1
  # Service account — chart manages its own.
  serviceAccount:
    create: true
# ─── Catalyst overlay values (consumed by templates/ in this chart) ──────
mimirOverlay:
  networkPolicy:
    enabled: false
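A per-Sovereign overlay replacing the stop-gap MinIO with SeaweedFS might use the upstream chart's `structuredConfig` passthrough into Mimir's own config — a sketch in which the endpoint reuses the SeaweedFS Service seen elsewhere in this batch and everything else is an assumption:

```yaml
# clusters/<sovereign>/values/bp-mimir.yaml — illustrative sketch only.
mimir-distributed:
  minio:
    enabled: false
  mimir:
    structuredConfig:
      common:
        storage:
          backend: s3
          s3:
            endpoint: seaweedfs-s3.openova-system.svc.cluster.local:8333
            insecure: true
            # access_key_id / secret_access_key injected from a Secret,
            # never committed to git.
```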

View File

@ -0,0 +1,13 @@
apiVersion: catalyst.openova.io/v1
kind: Blueprint
metadata:
  name: opentelemetry
  labels:
    catalyst.openova.io/section: pts-3-observability
spec:
  version: 1.0.0
  card:
    title: OpenTelemetry Collector
    family: insights
    description: Vendor-neutral telemetry collector (OTLP receiver/forwarder). Sibling to bp-alloy — Catalyst supports both; per-Sovereign overlays choose one.
    docs: https://opentelemetry.io/docs/collector/

View File

@ -0,0 +1 @@
*.yaml.bak

View File

@ -0,0 +1,32 @@
apiVersion: v2
name: bp-opentelemetry
description: |
  Catalyst Blueprint umbrella chart for the OpenTelemetry Collector.
  Depends on the upstream `opentelemetry-collector` chart
  (open-telemetry/helm-charts) as a Helm subchart so `helm dependency
  build` pulls the upstream payload into this artifact. Catalyst-curated
  values flow into the upstream subchart under the
  `opentelemetry-collector:` key in values.yaml.
  The OTel Collector is the vendor-neutral counterpart to bp-alloy;
  Catalyst supports both in the observability stack and per-Sovereign
  overlays choose one. Default deployment mode is `deployment` (a single
  collector pod handling OTLP traffic from in-cluster apps) — DaemonSet /
  StatefulSet are values toggles for node-level collection / WAL
  persistence respectively.
type: application
version: 1.0.0
appVersion: "0.150.1"
keywords: [catalyst, blueprint, opentelemetry, otlp, observability, telemetry, collector]
maintainers:
  - name: OpenOva Catalyst
    email: catalyst@openova.io
# Pinned to open-telemetry/opentelemetry-collector 0.152.0 (appVersion
# 0.150.1) — current stable on 2026-04-29. Per
# docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode) the version is
# operator-bumpable via PR + Blueprint release.
dependencies:
  - name: opentelemetry-collector
    version: "0.152.0"
    repository: "https://open-telemetry.github.io/opentelemetry-helm-charts"

View File

@@ -0,0 +1,104 @@
# Catalyst Blueprint umbrella metadata — the upstream chart is resolved as
# a Helm subchart via Chart.yaml `dependencies:`. Catalyst-curated values
# under the `opentelemetry-collector:` key flow into the upstream subchart
# unchanged.
#
# Per docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode), every operationally-
# meaningful value is configurable; cluster overlays in clusters/<sovereign>/
# may override any of these without rebuilding the Blueprint OCI artifact.
catalystBlueprint:
  upstream:
    chart: opentelemetry-collector
    version: "0.152.0"
    repo: "https://open-telemetry.github.io/opentelemetry-helm-charts"

# ─── Upstream chart values (subchart key: opentelemetry-collector) ───────
opentelemetry-collector:
  # Mode — required by the upstream chart. `deployment` is the solo-Sovereign
  # default (one collector pod, scaled to 1). Per-Sovereign overlays MAY set
  # `daemonset` for node-level collection or `statefulset` for WAL
  # persistence. The chart will refuse to render without `mode` set.
  mode: deployment
  # Pin upstream Collector image — DO NOT use floating tags. Use the
  # contrib distribution (`otel/opentelemetry-collector-contrib`) so that
  # exporters for Loki / Mimir / Tempo / Prometheus / Kafka are bundled
  # without requiring a custom build. The `command.name` MUST match the
  # binary inside the contrib image (`otelcol-contrib`).
  image:
    repository: otel/opentelemetry-collector-contrib
    tag: "0.150.1"
    pullPolicy: IfNotPresent
  command:
    name: otelcol-contrib
  # Solo-Sovereign minimum — single replica. Per-Sovereign overlays bump
  # via the upstream `replicaCount:` value once HA is sized.
  replicaCount: 1
  # Resources — modest defaults; the memory_limiter processor's defaults
  # (limit_percentage: 80, spike_limit_percentage: 25) reference these.
  resources:
    limits:
      cpu: 1
      memory: 512Mi
  # Presets — DEFAULT FALSE. Each preset adds opinionated pipelines
  # (filelog/hostmetrics/k8sattributes/etc.). Per-Sovereign overlays opt
  # into the ones they need; defaulting on would surprise operators with
  # cluster-scoped RBAC + log volume mounts.
  presets:
    logsCollection:
      enabled: false
    hostMetrics:
      enabled: false
    kubernetesAttributes:
      enabled: false
    kubeletMetrics:
      enabled: false
    kubernetesEvents:
      enabled: false
    clusterMetrics:
      enabled: false
  # Service — ClusterIP; ingress wired by per-Sovereign overlays.
  service:
    enabled: true
    type: ClusterIP
  # Ingress — DEFAULT FALSE.
  ingress:
    enabled: false
  # ServiceMonitor + PodMonitor + PrometheusRule — DEFAULT FALSE per
  # docs/BLUEPRINT-AUTHORING.md §11.2.
  serviceMonitor:
    enabled: false
  podMonitor:
    enabled: false
  prometheusRule:
    enabled: false
  # NetworkPolicy — DEFAULT FALSE.
  networkPolicy:
    enabled: false
  # PodDisruptionBudget — disabled at solo-Sovereign scale (1 replica).
  podDisruptionBudget:
    enabled: false
  # ServiceAccount + RBAC — chart manages its own.
  serviceAccount:
    create: true
  clusterRole:
    create: false
  # Default ports — keep upstream defaults (OTLP gRPC 4317 + HTTP 4318).
  # Per-Sovereign overlays disable jaeger / zipkin / prometheus receivers
  # via the `ports:` block when they're not needed.

# ─── Catalyst overlay values (consumed by templates/ in this chart) ──────
opentelemetryOverlay:
  networkPolicy:
    enabled: false
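As a sketch of what a per-Sovereign overlay might add on top of these defaults: enabling one preset and wiring the traces pipeline to Tempo through the upstream chart's `config:` merge. The Tempo Service address and namespace below are hypothetical placeholders, not values defined by this chart:

```yaml
opentelemetry-collector:
  presets:
    kubernetesAttributes:
      enabled: true
  config:
    exporters:
      otlp/tempo:
        # hypothetical in-cluster Tempo OTLP gRPC address
        endpoint: bp-tempo.observability.svc.cluster.local:4317
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          exporters: [otlp/tempo]
```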

View File

@@ -0,0 +1,13 @@
apiVersion: catalyst.openova.io/v1
kind: Blueprint
metadata:
  name: tempo
  labels:
    catalyst.openova.io/section: pts-3-observability
spec:
  version: 1.0.0
  card:
    title: Tempo
    family: insights
    description: Distributed tracing backend for the LGTM observability stack. Single-binary mode for solo Sovereign; tempo-distributed is a values toggle.
    docs: https://grafana.com/docs/tempo/latest/

View File

@@ -0,0 +1 @@
*.yaml.bak

View File

@@ -0,0 +1,32 @@
apiVersion: v2
name: bp-tempo
description: |
  Catalyst Blueprint umbrella chart for Grafana Tempo. Depends on the
  upstream `tempo` chart (grafana/helm-charts) — single-binary mode — as
  a Helm subchart so `helm dependency build` pulls the upstream payload
  into this artifact. Catalyst-curated values flow into the upstream
  subchart under the `tempo:` key in values.yaml.

  Default deployment shape is single-binary (one Tempo StatefulSet) —
  minimum for a solo Sovereign. The microservice-mode variant (upstream
  `tempo-distributed` chart) is a values toggle once a regional Sovereign
  needs HA trace ingestion; per-Sovereign overlays MAY swap the upstream
  dependency by republishing this chart with a different `dependencies:`
  entry.
type: application
version: 1.0.0
appVersion: "2.9.0"
keywords: [catalyst, blueprint, tempo, observability, traces]
maintainers:
  - name: OpenOva Catalyst
    email: catalyst@openova.io
# Pinned to grafana/tempo 1.24.4 (appVersion 2.9.0, single-binary) — current
# stable on 2026-04-29. Per docs/INVIOLABLE-PRINCIPLES.md #4 (never
# hardcode) the version is operator-bumpable via PR + Blueprint release.
# To switch to microservice mode, swap this entry for `tempo-distributed`
# and republish.
dependencies:
  - name: tempo
    version: "1.24.4"
    repository: "https://grafana.github.io/helm-charts"
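For reference, the microservice-mode swap described above would replace the `dependencies:` entry along these lines. The version shown is a placeholder, not a pinned release:

```yaml
dependencies:
  - name: tempo-distributed
    version: "<pin current stable at swap time>"
    repository: "https://grafana.github.io/helm-charts"
```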

View File

@@ -0,0 +1,80 @@
# Catalyst Blueprint umbrella metadata — the upstream chart is resolved as
# a Helm subchart via Chart.yaml `dependencies:`. Catalyst-curated values
# under the `tempo:` key flow into the upstream subchart unchanged.
#
# Per docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode), every operationally-
# meaningful value is configurable; cluster overlays in clusters/<sovereign>/
# may override any of these without rebuilding the Blueprint OCI artifact.
catalystBlueprint:
  upstream:
    chart: tempo
    version: "1.24.4"
    repo: "https://grafana.github.io/helm-charts"

# ─── Upstream chart values (subchart key: tempo) ─────────────────────────
tempo:
  # Single-binary StatefulSet — solo-Sovereign default.
  replicas: 1
  # Pin upstream Tempo image tag — DO NOT use floating tags.
  tempo:
    registry: docker.io
    repository: grafana/tempo
    tag: "2.9.0"
    pullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 1
        memory: 1Gi
    multitenancyEnabled: false
    reportingEnabled: false
    # Metrics generator — DEFAULT FALSE. Pulls in remote_write to Mimir
    # which must be reconciled first. Per-Sovereign overlays flip this on
    # once bp-mimir is up.
    metricsGenerator:
      enabled: false
    # Storage backend — local PVC default. Per-Sovereign overlays MUST
    # replace this with S3 (SeaweedFS or cloud) for HA Sovereigns.
    storage:
      trace:
        backend: local
        local:
          path: /var/tempo/traces
        wal:
          path: /var/tempo/wal
  # Persistence — required for trace retention.
  persistence:
    enabled: true
    size: 10Gi
    accessModes:
      - ReadWriteOnce
  # ServiceMonitor — DEFAULT FALSE per docs/BLUEPRINT-AUTHORING.md §11.2.
  serviceMonitor:
    enabled: false
  # Service — ClusterIP; ingress wired by per-Sovereign overlays.
  service:
    type: ClusterIP
  # NetworkPolicy — DEFAULT FALSE; the Catalyst-side NetworkPolicy template
  # in templates/ (when added) governs this for all bp-* charts uniformly.
  networkPolicy:
    enabled: false
  # tempoQuery (legacy Tempo Query container) — DISABLED. Grafana queries
  # Tempo directly via the trace API.
  tempoQuery:
    enabled: false

# ─── Catalyst overlay values (consumed by templates/ in this chart) ──────
tempoOverlay:
  networkPolicy:
    enabled: false
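A sketch of the per-Sovereign S3 override mentioned above, reusing the SeaweedFS S3 endpoint that appears in the bp-velero example elsewhere in this batch. The bucket name is hypothetical, and the key names follow Tempo's `storage.trace.s3` block (verify against the pinned Tempo version):

```yaml
tempo:
  tempo:
    storage:
      trace:
        backend: s3
        s3:
          bucket: tempo-traces  # hypothetical bucket name
          endpoint: seaweedfs-s3.openova-system.svc.cluster.local:8333
          insecure: true
          forcepathstyle: true
```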

View File

@@ -0,0 +1,13 @@
apiVersion: catalyst.openova.io/v1
kind: Blueprint
metadata:
  name: velero
  labels:
    catalyst.openova.io/section: pts-3-observability
spec:
  version: 1.0.0
  card:
    title: Velero
    family: insights
    description: Kubernetes-native backup and disaster recovery. Backups land in SeaweedFS (Catalyst's unified S3 layer), which transitions to a cloud archival backend.
    docs: https://velero.io/docs/

View File

@@ -0,0 +1 @@
*.yaml.bak

View File

@@ -0,0 +1,35 @@
apiVersion: v2
name: bp-velero
description: |
  Catalyst Blueprint umbrella chart for Velero. Depends on the upstream
  `velero` chart (vmware-tanzu/helm-charts) as a Helm subchart so
  `helm dependency build` pulls the upstream payload into this artifact.
  Catalyst-curated values flow into the upstream subchart under the
  `velero:` key in values.yaml.

  Velero is the per-host-cluster backup engine. Per
  docs/PLATFORM-TECH-STACK.md §3.5, ALL Velero output goes to the same
  single S3 endpoint — SeaweedFS (Catalyst's unified S3 encapsulation
  layer). SeaweedFS handles tiered storage: hot in-cluster for recent
  backups, cold to a cloud archival backend (Cloudflare R2 / Hetzner
  Object Storage / etc.) when objects age past the warm window.

  Per-Sovereign overlays MUST set
  `velero.configuration.backupStorageLocation` to point at SeaweedFS
  (provider: aws, s3ForcePathStyle: true, s3Url pointing at the
  SeaweedFS S3 service) and supply credentials via the `credentials`
  block.
type: application
version: 1.0.0
appVersion: "1.18.0"
keywords: [catalyst, blueprint, velero, backup, disaster-recovery]
maintainers:
  - name: OpenOva Catalyst
    email: catalyst@openova.io
# Pinned to vmware-tanzu/velero 12.0.1 (appVersion 1.18.0) — current
# stable on 2026-04-29. Per docs/INVIOLABLE-PRINCIPLES.md #4 (never
# hardcode) the version is operator-bumpable via PR + Blueprint release.
dependencies:
  - name: velero
    version: "12.0.1"
    repository: "https://vmware-tanzu.github.io/helm-charts"

View File

@@ -0,0 +1,142 @@
# Catalyst Blueprint umbrella metadata — the upstream chart is resolved as
# a Helm subchart via Chart.yaml `dependencies:`. Catalyst-curated values
# under the `velero:` key flow into the upstream subchart unchanged.
#
# Per docs/INVIOLABLE-PRINCIPLES.md #4 (never hardcode), every operationally-
# meaningful value is configurable; cluster overlays in clusters/<sovereign>/
# may override any of these without rebuilding the Blueprint OCI artifact.
catalystBlueprint:
  upstream:
    chart: velero
    version: "12.0.1"
    repo: "https://vmware-tanzu.github.io/helm-charts"

# ─── Upstream chart values (subchart key: velero) ────────────────────────
velero:
  # Pin upstream Velero image — DO NOT use floating tags.
  image:
    repository: docker.io/velero/velero
    tag: "v1.18.0"
    pullPolicy: IfNotPresent
  # kubectl image (used by the upgradeCRDs Job) — pin to a known-stable
  # registry.k8s.io/kubectl version. Per-Sovereign overlays MAY pin a
  # specific tag.
  kubectl:
    image:
      repository: registry.k8s.io/kubectl
  # Plugin init containers — REQUIRED for Velero to talk to ANY backup
  # backend. The AWS plugin (S3-compatible) is the Catalyst standard
  # because SeaweedFS exposes an S3 API. Per-Sovereign overlays append
  # additional plugin init containers if a CSI snapshotter / cloud-native
  # plugin is also needed.
  initContainers:
    - name: velero-plugin-for-aws
      image: velero/velero-plugin-for-aws:v1.14.0
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: /target
          name: plugins
  # Resources — modest defaults sized for a solo Sovereign.
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 1
      memory: 512Mi
  # Metrics — Prometheus scraping enabled by upstream defaults; the
  # ServiceMonitor / PodMonitor / PrometheusRule resources MUST default
  # OFF per docs/BLUEPRINT-AUTHORING.md §11.2 (Observability toggles
  # default false — kube-prometheus-stack CRDs may not exist on a fresh
  # Sovereign).
  metrics:
    enabled: true
    serviceMonitor:
      autodetect: false
      enabled: false
    nodeAgentPodMonitor:
      autodetect: false
      enabled: false
    prometheusRule:
      autodetect: false
      enabled: false
  # CRD lifecycle — upgrade in place; never auto-cleanup.
  upgradeCRDs: true
  cleanUpCRDs: false
  # BackupStorageLocation + VolumeSnapshotLocation — empty provider/bucket
  # in this umbrella. Per-Sovereign overlays MUST replace `configuration:`
  # with concrete values pointing at SeaweedFS:
  #
  # configuration:
  #   backupStorageLocation:
  #     - name: default
  #       provider: aws
  #       bucket: velero-backups
  #       default: true
  #       config:
  #         region: auto
  #         s3ForcePathStyle: "true"
  #         s3Url: http://seaweedfs-s3.openova-system.svc.cluster.local:8333
  #       credential:
  #         name: velero-seaweedfs-secret
  #         key: cloud
  configuration:
    backupStorageLocation:
      - name:
        provider: ""
        bucket: ""
        default: false
        accessMode: ReadWrite
        credential:
          name:
          key:
        config: {}
    volumeSnapshotLocation:
      - name:
        provider: ""
        credential:
          name:
          key:
        config: {}
  # Whether to create backupstoragelocation/volumesnapshotlocation CRs.
  # Both flipped FALSE here so the chart renders with the empty/placeholder
  # configuration above; per-Sovereign overlays flip these on after they
  # supply real config.
  backupsEnabled: false
  snapshotsEnabled: false
  # Node Agent (file-system backup / restic) — DEFAULT FALSE; per-Sovereign
  # overlays flip on if file-system backup is needed (CSI snapshots are
  # the preferred path).
  deployNodeAgent: false
  # Credentials — the chart auto-creates a Secret from `secretContents.cloud`
  # ONLY when a per-Sovereign overlay supplies that value. Default keeps
  # `useSecret: true` and `existingSecret: ""` so the Secret is created
  # but EMPTY at this layer — the overlay populates it.
  credentials:
    useSecret: true
    existingSecret: ""
    secretContents: {}
  # ServiceAccount + RBAC.
  serviceAccount:
    server:
      create: true
      name: velero
  rbac:
    create: true
    clusterAdministrator: true

# ─── Catalyst overlay values (consumed by templates/ in this chart) ──────
veleroOverlay:
  networkPolicy:
    enabled: false
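A sketch of the per-Sovereign credential wiring: the `cloud` payload is the AWS-style INI file the velero-plugin-for-aws expects, and the key values are placeholders for real SeaweedFS S3 credentials. Paired with a concrete `configuration:` block like the commented example in this values file, flipping `backupsEnabled` on lets the chart create the BackupStorageLocation:

```yaml
velero:
  credentials:
    secretContents:
      cloud: |
        [default]
        aws_access_key_id = <SeaweedFS access key ID>
        aws_secret_access_key = <SeaweedFS secret key>
  backupsEnabled: true
```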