openova/platform/langfuse
e3mrah 0cfd0defa9
fix(bp-langfuse): drop apostrophe from description to clear GHCR 500 (resolves #215) (#278)
Root cause: Helm's `helm push` collapses the chart `description` field
into a single-line OCI manifest annotation
`org.opencontainers.image.description`. The GHCR manifest-PUT validator
returns a deterministic 500 Internal Server Error when that annotation
is long AND contains an ASCII apostrophe. bp-langfuse 1.0.0 was the
only chart in the observability batch (PR #214) carrying both
characteristics, so it was the only one that failed to publish.

Fix: reword the affected sentence from "Langfuse's persistent state" to
"the Langfuse persistent state" — drops the apostrophe, preserves the
meaning, and crucially preserves every byte of the actual chart payload
(values, templates, all 350 entries of the upstream langfuse-1.5.28
subchart with its 4-level-deep Bitnami vendoring). No runtime
behavioural change; helm template renders the exact same 6 resources
across 490 lines.

The narrowing was done by progressively reducing the Chart.yaml from
the failing version to a passing version while pushing to a scratch
GHCR namespace, with the bp-langfuse repo deleted between attempts
(verified via `DELETE /orgs/openova-io/packages/container/bp-langfuse`
and re-querying). The trigger is reproducible: long description +
apostrophe → 500; long description without apostrophe → push succeeds;
short description with apostrophe → push succeeds.
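The three bisection outcomes can be captured as a predicate suitable for a pre-push check. This is a sketch only: the exact length cutoff was not isolated during the narrowing, so the threshold below is a hypothetical placeholder, and `triggers_ghcr_500` is not a function that exists anywhere in the repo.

```python
# Sketch of a pre-push guard for the empirically observed GHCR trigger:
# a long chart description combined with an ASCII apostrophe. The exact
# length cutoff was never isolated, so LONG_THRESHOLD is a hypothetical
# placeholder, not a measured value.
LONG_THRESHOLD = 100  # hypothetical; would need tuning against a scratch GHCR namespace

def triggers_ghcr_500(description: str) -> bool:
    """True if the description matches the long-plus-apostrophe pattern."""
    return len(description) > LONG_THRESHOLD and "'" in description

# The three reproduction cases from the bisection:
assert triggers_ghcr_500("x" * 150 + " Langfuse's persistent state")  # long + apostrophe -> 500
assert not triggers_ghcr_500("x" * 150)                               # long, no apostrophe -> push succeeds
assert not triggers_ghcr_500("Langfuse's state")                      # short + apostrophe -> push succeeds
```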

Added a multi-line WARNING comment immediately above `description:`
documenting the trigger so future authors do not reintroduce a
possessive form. Issue #215 captures the full reproduction.
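In Chart.yaml the guard might look like the following — a sketch; the actual comment wording and description text in the repo may differ:

```yaml
# WARNING: keep this description short of apostrophes. `helm push` copies it
# verbatim into the org.opencontainers.image.description OCI annotation, and
# GHCR's manifest-PUT validator returns a deterministic 500 when that
# annotation is long AND contains an ASCII apostrophe. Write "the Langfuse
# persistent state", never "Langfuse's persistent state". See issue #215.
description: >-
  Application Blueprint managing the Langfuse persistent state (hypothetical
  wording shown here, not the chart's real description)
```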

Closes #215

Co-authored-by: hatiyildiz <hatice.yildiz@openova.io>
2026-04-30 17:31:51 +04:00
chart fix(bp-langfuse): drop apostrophe from description to clear GHCR 500 (resolves #215) (#278) 2026-04-30 17:31:51 +04:00
blueprint.yaml feat(platform): observability stack umbrellas (grafana/loki/mimir/tempo/alloy/otel/langfuse/velero) (#214) 2026-04-29 22:11:04 +02:00
README.md docs(pass-12): role-in-Catalyst banners on 11 AI/ML Application Blueprints 2026-04-27 21:47:45 +02:00

LangFuse

LLM observability and analytics. Application Blueprint (see docs/PLATFORM-TECH-STACK.md §4.7). Traces every LLM call in bp-cortex — latency, tokens, cost, eval scores. Catalyst's general-purpose observability stack (Grafana/OTel) covers infrastructure; LangFuse covers the AI-specific dimensions (prompt/response, model drift, eval).

Category: AI Observability | Type: Application Blueprint


Overview

LangFuse provides tracing, evaluation, and analytics for LLM applications. It captures every LLM call with cost, latency, token usage, and evaluation scores, and complements Grafana (which handles infrastructure metrics) with AI-specific observability.

Key Features

  • LLM call tracing (input, output, cost, latency, tokens)
  • Prompt management and versioning
  • Evaluation scoring and datasets
  • User analytics and session tracking
  • Cost attribution per model/user/feature
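The cost-attribution feature amounts to rolling per-call token counts up by model, user, or feature. A minimal sketch of the idea follows; the per-1K-token prices are made-up placeholders (real pricing is model- and provider-specific), and none of these names come from the LangFuse codebase.

```python
# Sketch of per-model cost attribution from traced token usage.
# Prices are hypothetical placeholders, not real provider pricing.
PRICE_PER_1K = {  # (input, output) USD per 1K tokens -- hypothetical
    "gpt-4o": (0.005, 0.015),
    "claude-3-haiku": (0.00025, 0.00125),
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one traced LLM call."""
    in_price, out_price = PRICE_PER_1K[model]
    return input_tokens / 1000 * in_price + output_tokens / 1000 * out_price

def attribute_by_model(traces):
    """Aggregate traced calls into total cost per model."""
    totals = {}
    for t in traces:
        totals[t["model"]] = totals.get(t["model"], 0.0) + call_cost(
            t["model"], t["input_tokens"], t["output_tokens"]
        )
    return totals

traces = [
    {"model": "gpt-4o", "input_tokens": 1000, "output_tokens": 500},
    {"model": "gpt-4o", "input_tokens": 2000, "output_tokens": 1000},
]
# gpt-4o total: (1 + 2) * 0.005 + (0.5 + 1) * 0.015 = 0.0375 USD
```

The same roll-up generalizes to per-user or per-feature attribution by keying the aggregation on a different trace field.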

Integration

Component         Integration
LLM Gateway       Automatic trace capture
Grafana           Infrastructure metrics complement
CNPG              PostgreSQL backend for traces
NeMo Guardrails   Traces guardrail activations

Used By

  • OpenOva Cortex - LLM observability for enterprise AI

Deployment

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: langfuse
  namespace: flux-system
spec:
  interval: 10m
  path: ./platform/langfuse
  prune: true
  sourceRef:              # required by the Kustomization API; the source
    kind: GitRepository   # name here is assumed, not taken from the repo
    name: flux-system

Part of OpenOva