# vLLM

High-performance LLM inference engine with PagedAttention. Application Blueprint (see docs/PLATFORM-TECH-STACK.md §4.6). Default LLM serving runtime in bp-cortex (the composite AI Hub Blueprint).

**Status:** Accepted | **Updated:** 2026-04-27


## Overview

vLLM provides high-throughput LLM serving with efficient memory management via PagedAttention, and is the recommended runtime for LLM inference in OpenOva.

```mermaid
flowchart LR
    subgraph vLLM["vLLM Engine"]
        PagedAttn[PagedAttention]
        Scheduler[Continuous Batching]
        KVCache[KV Cache Management]
    end

    subgraph API["OpenAI-Compatible API"]
        Chat["/v1/chat/completions"]
        Completions["/v1/completions"]
        Models["/v1/models"]
    end

    Request[Request] --> API
    API --> vLLM
    vLLM --> GPU[GPU]
```

## Why vLLM?

| Feature | Benefit |
|---------|---------|
| PagedAttention | Up to 24x higher throughput than HuggingFace Transformers |
| Continuous batching | Requests join and leave the batch dynamically, keeping the GPU busy |
| OpenAI-compatible API | Drop-in replacement (see the sketch below) |
| Tensor parallelism | Multi-GPU support |
| Quantization | AWQ, GPTQ, INT8, FP8 support |
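
Because the API is OpenAI-compatible, the stock `openai` Python client works against a vLLM endpoint unchanged. A minimal sketch, assuming the in-cluster Service URL and model name used later in this README; the `api_key` value is a placeholder (vLLM only enforces it when started with `--api-key`):

```python
from openai import OpenAI

# Point the standard OpenAI client at the vLLM Service instead of api.openai.com.
client = OpenAI(
    base_url="http://vllm.ai-hub.svc:8000/v1",  # assumed in-cluster Service URL
    api_key="EMPTY",  # placeholder; only checked if the server sets --api-key
)

resp = client.chat.completions.create(
    model="qwen3-32b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```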

## Supported Models

| Model Family | Examples |
|--------------|----------|
| Qwen | Qwen2.5, Qwen3 (recommended) |
| Llama | Llama 3.1, Llama 3.2 |
| Mistral | Mistral, Mixtral |
| DeepSeek | DeepSeek-R1, DeepSeek-V3 |
| Others | Phi, Gemma, Yi, etc. |

## Configuration

### Deployment via KServe

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: qwen-32b
  namespace: ai-hub
spec:
  predictor:
    model:
      modelFormat:
        name: vllm
      runtime: vllm-runtime
      storageUri: pvc://model-cache/models/qwen3-32b-awq
      resources:  # resources belong on the model spec, not the predictor
        requests:
          nvidia.com/gpu: "2"
        limits:
          nvidia.com/gpu: "2"
```
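
Once applied, KServe reports readiness through the resource's status conditions. A minimal sketch using the official `kubernetes` Python client to check them (names match the manifest above; cluster access configuration is assumed):

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

# InferenceService is a custom resource, so read it via the dynamic objects API.
isvc = client.CustomObjectsApi().get_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="ai-hub",
    plural="inferenceservices",
    name="qwen-32b",
)
for cond in isvc.get("status", {}).get("conditions", []):
    print(cond.get("type"), cond.get("status"))  # e.g. Ready True
```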

### Standalone Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm
  namespace: ai-hub
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vllm
  template:
    metadata:
      labels:
        app: vllm  # must match the selector above
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest
          args:
            - --model=/models/qwen3-32b-awq
            - --tensor-parallel-size=2
            - --max-model-len=32768
            - --gpu-memory-utilization=0.9
            - --enable-prefix-caching
          ports:
            - containerPort: 8000
          resources:
            requests:
              nvidia.com/gpu: "2"
            limits:
              nvidia.com/gpu: "2"
          volumeMounts:
            - name: model-cache
              mountPath: /models
      volumes:
        - name: model-cache
          persistentVolumeClaim:
            claimName: model-cache
```
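
The curl examples below address this Deployment through a Service named `vllm` in the `ai-hub` namespace, which is not shown here. A quick Python sketch to verify the server is up once such a Service exists (the `/v1/models` endpoint lists the served model):

```python
import requests

# Assumes a Service "vllm" in namespace "ai-hub" exposing port 8000 (not shown above).
resp = requests.get("http://vllm.ai-hub.svc:8000/v1/models", timeout=5)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])  # defaults to the --model value, e.g. /models/qwen3-32b-awq
```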

### Key Parameters

| Parameter | Purpose | Example |
|-----------|---------|---------|
| `--model` | Model path or HuggingFace ID | `/models/qwen3-32b` |
| `--tensor-parallel-size` | Number of GPUs | `2` |
| `--max-model-len` | Maximum context length (tokens) | `32768` |
| `--gpu-memory-utilization` | Fraction of GPU memory vLLM may use | `0.9` |
| `--quantization` | Quantization method | `awq`, `gptq` |
| `--enable-prefix-caching` | Cache common prompt prefixes | (flag) |
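
These parameters interact: vLLM first loads the model weights, then gives whatever remains of the `--gpu-memory-utilization` budget to PagedAttention as KV cache. A back-of-envelope sketch for the qwen3-32b-awq example, assuming 2× 80 GiB GPUs; all numbers are illustrative, and runtime overhead (activations, CUDA context) is ignored:

```python
# Rough memory sizing for the example deployment above; figures are approximate.
PARAMS = 32e9              # ~32B parameters (Qwen3-32B)
BITS_PER_WEIGHT = 4        # AWQ 4-bit quantization
GPU_MEM_GIB = 80           # assumed 80 GiB per GPU
NUM_GPUS = 2               # --tensor-parallel-size=2
UTILIZATION = 0.9          # --gpu-memory-utilization=0.9

weights_gib = PARAMS * BITS_PER_WEIGHT / 8 / 2**30
budget_gib = GPU_MEM_GIB * NUM_GPUS * UTILIZATION
kv_cache_gib = budget_gib - weights_gib  # roughly what PagedAttention can allocate

print(f"weights        ~ {weights_gib:5.1f} GiB")   # ~14.9 GiB
print(f"memory budget  ~ {budget_gib:5.1f} GiB")    # 144.0 GiB
print(f"KV-cache space ~ {kv_cache_gib:5.1f} GiB")  # ~129.1 GiB
```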

## API Usage

### Chat Completions

```bash
curl http://vllm.ai-hub.svc:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-32b",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ],
    "stream": true
  }'
```

### With Thinking Mode (Qwen3)

```bash
curl http://vllm.ai-hub.svc:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-32b",
    "messages": [
      {"role": "user", "content": "Solve this step by step: ..."}
    ],
    "chat_template_kwargs": {"enable_thinking": true}
  }'
```
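
Over raw HTTP, `chat_template_kwargs` is a top-level field of the request body. When going through the OpenAI Python SDK, which rejects unknown keyword arguments, the same field is passed via `extra_body`. A sketch under the same Service and model assumptions as above:

```python
from openai import OpenAI

client = OpenAI(base_url="http://vllm.ai-hub.svc:8000/v1", api_key="EMPTY")

# extra_body merges vLLM-specific fields into the request JSON,
# since the SDK itself does not know about chat_template_kwargs.
resp = client.chat.completions.create(
    model="qwen3-32b",
    messages=[{"role": "user", "content": "Solve this step by step: ..."}],
    extra_body={"chat_template_kwargs": {"enable_thinking": True}},
)
print(resp.choices[0].message.content)
```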

## Multi-GPU Configuration

### Tensor Parallelism (Single Node)

```yaml
args:
  - --tensor-parallel-size=4  # split model across 4 GPUs
```

### Pipeline Parallelism (Multi-Node)

```yaml
args:
  - --pipeline-parallel-size=2  # split across 2 nodes
  - --tensor-parallel-size=4    # 4 GPUs per node
```

The multi-node example uses 8 GPUs in total: 2 pipeline stages × 4 tensor-parallel GPUs each.

## Quantization

| Method | Memory Reduction | Quality |
|--------|------------------|---------|
| AWQ | ~4x | Excellent |
| GPTQ | ~4x | Good |
| INT8 | ~2x | Very Good |
| FP8 | ~2x | Excellent |

```yaml
args:
  - --quantization=awq
  - --dtype=half
```

## Monitoring

| Metric | Query |
|--------|-------|
| Request latency | `vllm:e2e_request_latency_seconds` |
| Tokens/second | `rate(vllm:generation_tokens_total[1m])` |
| KV cache usage | `vllm:gpu_cache_usage_perc` |
| Queue length | `vllm:num_requests_waiting` |
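
These are Prometheus metrics exported by the OpenAI-compatible server on its `/metrics` endpoint (same port as the API). A quick sketch for spot-checking them without a Prometheus stack, under the same Service assumption as above:

```python
import requests

METRICS_URL = "http://vllm.ai-hub.svc:8000/metrics"  # assumed Service URL

# Print only vLLM metric samples; HELP/TYPE comment lines start with '#'.
for line in requests.get(METRICS_URL, timeout=5).text.splitlines():
    if line.startswith("vllm:"):
        print(line)
```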

## Consequences

**Positive:**

- Industry-leading inference throughput
- OpenAI-compatible API
- Excellent quantization support
- Multi-GPU scaling
- Active development

**Negative:**

- GPU required
- Memory-intensive for large models
- Some models not yet supported

Part of OpenOva