Adds the 6 CompositeResourceDefinitions and matching Compositions that back the catalyst-api Day-2 CRUD endpoints. catalyst-api writes XRCs of these kinds; Crossplane materialises them into provider-hcloud (and a small number of provider-kubernetes) managed resources. Per docs/INVIOLABLE-PRINCIPLES.md #3, every cloud-side op flows through provider-hcloud — never bespoke hcloud-go calls or shell-outs to the hcloud CLI.

XRDs (canonical group: compose.openova.io/v1alpha1):

- RegionClaim → composes the Phase-0 quartet via provider-hcloud: Network + NetworkSubnet + Firewall + Server (cp1) + LoadBalancer + LoadBalancerNetwork + LoadBalancerService×2 + LoadBalancerTarget. Mirrors infra/hetzner/main.tf 1:1 so deletion of a RegionClaim cascades the whole slice.
- ClusterClaim → composes a provider-kubernetes Object that materialises a cluster-identity ConfigMap. The catalyst-environment-controller reads the CM to template per-server cloud-init.
- NodePoolClaim → composes up to 100 provider-hcloud Server resources. UPDATE flow: patching replicas n→m flips the per-index Required-policy gate so Crossplane creates/deletes Server CRs.
- LoadBalancerClaim → composes provider-hcloud LoadBalancer + LoadBalancerNetwork + up to 50 LoadBalancerService entries (per listener) + up to 50 LoadBalancerTarget entries. UPDATE: patch listeners[]/targets[] → composite controller adds/removes services/targets.
- PeeringClaim → composes 1 or 2 provider-hcloud Route resources (the bidirectional flag toggles the second one through a Required-policy gate).
- NodeActionClaim → composes a provider-kubernetes Object that creates a batch/v1 Job running kubectl cordon/drain (k8s-side op, not a cloud op, per the task spec). action=replace additionally composes a provider-hcloud Server for the replacement node.
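The per-index Required-policy gate used by NodePoolClaim (and PeeringClaim's bidirectional flag) can be sketched as a Composition entry like the one below. This is an illustrative fragment only: the provider-hcloud API group, resource name, and the gate field path are assumptions, not the shipped chart.

```yaml
# Hedged sketch — one per-index Server entry in the NodePoolClaim Composition.
# API group, names, and field paths are illustrative, not the real chart.
- name: pool-server-2
  base:
    apiVersion: server.hcloud.crossplane.io/v1alpha1   # assumed provider-hcloud group
    kind: Server
  patches:
    - type: FromCompositeFieldPath
      # Hypothetical helper field, populated only while index 2 < spec.parameters.replicas.
      fromFieldPath: status.poolGates[2]
      toFieldPath: metadata.annotations["pool.openova.io/gate"]
      policy:
        fromFieldPath: Required   # until the gate field exists, this Server is not composed
```

The design relies on standard Crossplane behaviour: a patch with `policy.fromFieldPath: Required` blocks creation of the composed resource while its source field is absent, so scaling replicas m→n makes the missing/present gate fields drive Server creation and deletion.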
UPDATE/DELETE summary:

- UPDATE: every mutable schema field is patched onto the underlying managed resource; Crossplane's composite controller drives the diff and provider-hcloud reconciles to the new state.
- DELETE: every composed resource has deletionPolicy: Delete, so a cascade delete of the composite tears down the whole resource graph in dependency-safe order (Crossplane retries until deps unblock).

New tests:

- tests/composition-validate.sh — 7 gates: helm renders cleanly, exactly 6 XRDs, ≥ 6 Compositions, all 6 expected claim kinds present, every rendered doc is valid YAML, every fixture references a real XRD, and (when KUBECONFIG + Crossplane CRDs are available) a server-side dry-run for every fixture.
- tests/fixtures/<kind>-sample.yaml — one XRC fixture per kind.

Version bump:

- platform/crossplane/chart/Chart.yaml 1.1.1 → 1.1.2
- platform/crossplane/blueprint.yaml 1.1.1 → 1.1.2
- clusters/_template/bootstrap-kit/04-crossplane.yaml → 1.1.2
- clusters/otech.omani.works/bootstrap-kit/04-crossplane.yaml → 1.1.2

Hard rules respected:

- provider-hcloud only for cloud ops (never hcloud-go, never the CLI).
- provider-kubernetes Object for k8s-side ops (never raw kubectl).
- No bespoke kubectl manifests for cloud resources.
- Frontend + catalyst-api Go code untouched (sibling-owned).
- Target state, no MVP framing — all 6 Compositions ship.

Co-authored-by: hatiyildiz <hatice.yildiz@openova.io>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
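One of the seven gates can be sketched as a small shell function over the helm-rendered manifests. The internals of the real tests/composition-validate.sh are an assumption; only the gate's contract ("exactly 6 XRDs") comes from the list above.

```shell
#!/usr/bin/env sh
# Hedged sketch of the "exactly 6 XRDs" gate; the shipped script may differ.

# Count CompositeResourceDefinition docs in rendered manifests read on stdin.
count_xrds() {
  grep -c '^kind: CompositeResourceDefinition$'
}

# Pass the rendered manifests as $1; succeed only when exactly 6 XRDs render.
check_xrd_count() {
  n=$(printf '%s\n' "$1" | count_xrds) || n=0
  [ "$n" -eq 6 ]
}
```

In the real script this would consume the output of `helm template` on the chart, e.g. `check_xrd_count "$(helm template platform/crossplane/chart)"`.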
One of the fixtures, the NodePoolClaim sample:

```yaml
# NodePoolClaim sample fixture.
apiVersion: compose.openova.io/v1alpha1
kind: NodePoolClaim
metadata:
  name: omantel-edge-pool
  namespace: crossplane-system
spec:
  parameters:
    name: omantel-edge
    clusterRef:
      name: omantel-cluster
      namespace: crossplane-system
    sku: cpx21
    replicas: 3
    role: worker
    region: fsn1
    image: ubuntu-24.04
    sshKeyName: catalyst-omantel-omani-works
    networkId: "1234567"
    firewallIds: ["7654321"]
    providerConfigRef:
      name: default-hcloud
```