docs(pass-9): role-in-Catalyst banners on grafana / harbor / falco / kyverno / sigstore / syft-grype
Pass 9 — six more component READMEs got Catalyst-role banners matching the rule of thumb in CLAUDE.md (every platform/&lt;x&gt;/README.md should state its role in Catalyst).

- grafana: observability stack on every host cluster; Catalyst's own self-monitoring + Application telemetry flows here.
- harbor: per-host-cluster container registry for Catalyst images, mirrored Blueprint OCI artifacts, customer images.
- falco: runtime security on every host cluster; feeds SIEM/SOAR.
- kyverno: policy engine on every host cluster; enforces Catalyst policy contracts (cosign on Blueprints, default-deny NetworkPolicies on Organization namespaces, priority-class injection).
- sigstore: cosign-signed Blueprint OCI artifacts + admission verification chain on every host cluster.
- syft-grype: SBOM generation in CI per Blueprint + runtime CVE scans.

Plus Kyverno priority-class clarification: prose around the `tenant-high` / `tenant-default` / `tenant-batch` priority class names now reads "Organization workloads" instead of "tenant workloads", with an explicit note that the priority class artifact names themselves stay as-is until a separate migration ticket renames them in deployed clusters (renaming PriorityClass objects requires recreate, not in-place rename).

VALIDATION-LOG: Pass 9 entry added.

Refs #37
parent 14ed84de41 · commit ea81c38e15
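For orientation before the diffs: the syft-grype banner summarized above pairs CI-time SBOM generation with runtime CVE scans. A minimal sketch of the CI half, assuming a GitLab-style pipeline; the job name and `$IMAGE` variable are placeholders, while the `syft`/`grype` invocations are standard CLI usage, not taken from this repo:

```yaml
# Hypothetical CI job illustrating "SBOM generation in CI per Blueprint
# + runtime CVE scans" (CI half only). Job name and $IMAGE are placeholders.
sbom-and-scan:
  script:
    # Generate an SPDX-format SBOM for the Blueprint image.
    - syft "$IMAGE" -o spdx-json > sbom.spdx.json
    # Scan the SBOM and fail the pipeline on High or Critical findings.
    - grype "sbom:./sbom.spdx.json" --fail-on high
```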
**VALIDATION-LOG**

```diff
@@ -63,6 +63,18 @@ ARCHITECTURE §10 had 3 phases; SOVEREIGN-PROVISIONING §3-§6 has 4 phases. Ali
 
 - ARCHITECTURE §3 topology diagram listed Crossplane, Flux, Harbor, grafana-stack INSIDE the Catalyst control-plane block. But §11 and PLATFORM-TECH-STACK §3 both classify these as per-host-cluster infrastructure (not Catalyst control plane). Topology diagram corrected; per-host-cluster infra now shown as a separate line referencing PLATFORM-TECH-STACK §3 for the full list. Also added the previously-missing `provisioning` row.
 
 - JetStream Account scoping was contradictory: ARCHITECTURE §5 said "Per-Org account: ws.{org}-{env_type}.>" (ambiguous), NAMING-CONVENTION §11.2 said "One JetStream Account scoped to ws.{org}-{env_type}.>" (per-Env), GLOSSARY+SECURITY+PLATFORM-TECH-STACK said per-Org. Reconciled to: one Account per Organization, subjects within use prefix `ws.{org}-{env_type}.>` for per-Environment partitioning. Fixed in ARCHITECTURE §5 and NAMING-CONVENTION §11.2.
 
+### Pass 9 — more component README banners + Kyverno priority-class clarification
+
+Added role-in-Catalyst banners to:
+
+- **grafana** — observability stack on every host cluster; Catalyst self-monitoring + Application telemetry pipeline.
+- **harbor** — per-host-cluster container registry for Catalyst images, mirrored Blueprint OCI artifacts, customer images.
+- **falco** — runtime security on every host cluster; feeds the SIEM/SOAR pipeline.
+- **kyverno** — policy engine on every host cluster; enforces the cosign signature requirement, default-deny NetworkPolicies on Organization namespaces, etc.
+- **sigstore** — signing + admission verification: cosign signs every Blueprint OCI artifact; Kyverno denies unsigned or wrong-issuer artifacts at admission.
+- **syft-grype** — SBOM generation in CI + runtime CVE scanning.
+
+Plus Kyverno priority-class clarification: the priority class names `tenant-high`, `tenant-default`, `tenant-batch` are legacy deployment artifacts. The prose around them now says "Organization workloads" instead of "tenant workloads", with an explicit note that the priority class names themselves stay as-is until a separate migration ticket renames them in deployed clusters.
+
 ### Pass 8 — component README role-in-Catalyst banners + dead-link fix
 
 Continued the drift sweep into more component READMEs.
```
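For orientation, here is one shape the "cosign signature requirement" contract from the kyverno and sigstore bullets could take as a Kyverno `verifyImages` rule. This is a sketch, not Catalyst's actual policy: the policy name, the `org-*` namespace glob, the Harbor registry path, and the keyless issuer/subject values are all assumptions; only the resource schema is standard Kyverno.

```yaml
# Illustrative only: names, registry path, and issuer/subject are assumptions,
# not Catalyst's deployed values.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-blueprints        # hypothetical policy name
spec:
  validationFailureAction: Enforce       # deny at admission rather than audit
  webhookTimeoutSeconds: 30
  rules:
    - name: verify-blueprint-cosign
      match:
        any:
          - resources:
              kinds: [Pod]
              namespaces: ["org-*"]      # hypothetical Organization namespace glob
      verifyImages:
        - imageReferences:
            - "harbor.example.internal/blueprints/*"   # hypothetical mirror path
          attestors:
            - entries:
                - keyless:
                    issuer: "https://issuer.example"     # hypothetical OIDC issuer
                    subject: "https://builder.example/*" # hypothetical signer identity
```

With `Enforce` set, an image that is unsigned or signed by a different issuer/subject fails verification and the Pod is rejected at admission, which is the deny behavior the sigstore bullet describes.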
**`platform/grafana/README.md`**

```diff
@@ -1,8 +1,8 @@
 # Grafana Stack
 
-LGTM observability stack for OpenOva platform.
+LGTM observability stack (Loki, Grafana, Tempo, Mimir + Alloy collector). Per-host-cluster infrastructure (see [`docs/PLATFORM-TECH-STACK.md`](../../docs/PLATFORM-TECH-STACK.md) §3 / observability layer in §2.3) — runs on every host cluster a Sovereign owns. Catalyst's own self-monitoring uses this stack on the management cluster; Application telemetry from per-Org vclusters also flows here unless an Org installs its own observability stack.
 
-**Status:** Accepted | **Updated:** 2026-01-17
+**Status:** Accepted | **Updated:** 2026-04-27
 
 ---
 
```
**`platform/kyverno/README.md`**

```diff
@@ -1,8 +1,8 @@
 # Kyverno
 
-Policy engine for OpenOva platform resilience, security, and operational excellence.
+Policy engine for admission control, mutation, and policy generation. Per-host-cluster infrastructure (see [`docs/PLATFORM-TECH-STACK.md`](../../docs/PLATFORM-TECH-STACK.md) §3.3) — runs on every host cluster Catalyst manages. Enforces Catalyst's policy contracts (cosign-required-on-Blueprints, default-deny NetworkPolicies on Organization namespaces, priority-class injection, etc.).
 
-**Status:** Accepted | **Updated:** 2026-02-09
+**Status:** Accepted | **Updated:** 2026-04-27
 
 ---
 
```
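The "default-deny NetworkPolicies on Organization namespaces" contract in this banner is plain Kubernetes. A minimal sketch of the generated object, with the namespace name assumed:

```yaml
# The canonical default-deny shape; "org-example" is a placeholder namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: org-example
spec:
  podSelector: {}                  # selects every pod in the namespace
  policyTypes: [Ingress, Egress]   # no rules listed, so all traffic is denied
```

In Kyverno terms this would typically be stamped out by a `generate` rule on namespace creation, which is the "policy generation" role the banner names.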
**`platform/kyverno/README.md`** (continued: tier table)

```diff
@@ -170,10 +170,12 @@ Namespace-level tier labels drive ResourceQuota and PriorityClass injection.
 | Tier Label Value | CPU Quota | Memory Quota | PriorityClass | Use Case |
 |------------------|-----------|--------------|---------------|----------|
 | `platform` | 16 cores | 32Gi | `platform-critical` (1000000) | OpenOva system components |
-| `high` | 8 cores | 16Gi | `tenant-high` (100000) | Production tenant workloads |
-| `default` | 4 cores | 8Gi | `tenant-default` (10000) | Development / staging |
+| `high` | 8 cores | 16Gi | `tenant-high` (100000) | Production Organization workloads |
+| `default` | 4 cores | 8Gi | `tenant-default` (10000) | Development / staging Organization workloads |
 | `batch` | 2 cores | 4Gi | `tenant-batch` (1000) | Background jobs, scale-to-zero |
 
+> The `tenant-*` priority class names are legacy deployment artifacts. They map to **Organization** workloads in current terminology. Renaming the priority class names themselves is tracked as a separate migration item — until then, the names remain as-is in deployed clusters.
+
 ---
 
 ## Interaction with Scaling Components
```
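The injection half of the table above could be a Kyverno mutate rule keyed off the tier label. A minimal sketch, assuming a hypothetical `tier` label key (the real key is not shown in this diff); the `tenant-high` name comes from the table:

```yaml
# Illustrative mutate rule: the label key and policy/rule names are assumptions;
# only the tenant-high PriorityClass name is taken from the table above.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-priority-class        # hypothetical policy name
spec:
  rules:
    - name: tier-high-to-tenant-high
      match:
        any:
          - resources:
              kinds: [Pod]
              namespaceSelector:
                matchLabels:
                  tier: high         # hypothetical label key for the "high" tier
      mutate:
        patchStrategicMerge:
          spec:
            priorityClassName: tenant-high
```

On the migration note in the blockquote: `metadata.name` is immutable on every Kubernetes object, PriorityClass included, so the eventual rename is a create-new, repoint-consumers, delete-old sequence rather than an edit. Running Pods are unaffected because the admission plugin copies the class's `value` into `pod.spec.priority` at creation time.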