
Overview

core-ai provides observability support through its middleware system. Observability middleware hooks into every model operation to capture traces, usage metrics, and error details without changing your application code. You wrap your model with an observability middleware, and all calls through that model are automatically traced.
import { wrapChatModel, generate } from '@core-ai/core-ai';
import { createOtelMiddleware } from '@core-ai/opentelemetry';

// Wrap your base chat model (created elsewhere) so every call through it is traced.
const tracedModel = wrapChatModel({
  model,
  middleware: createOtelMiddleware(),
});

const result = await generate({
  model: tracedModel,
  messages: [{ role: 'user', content: 'Hello!' }],
});

Available integrations

OpenTelemetry

Export traces to any OpenTelemetry-compatible backend (Jaeger, Grafana, Datadog, etc.)
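
For spans to leave the process, an exporter must be registered. A minimal sketch using the standard OpenTelemetry JS SDK, assuming the middleware records spans through the global OpenTelemetry tracer provider (the package names and OTLP endpoint below come from OpenTelemetry itself, not from core-ai):

import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// Register a global tracer provider with an OTLP exporter; any
// OTLP-compatible backend (Jaeger, Grafana, Datadog agent) can receive the traces.
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: 'http://localhost:4318/v1/traces', // default OTLP/HTTP endpoint
  }),
});
sdk.start();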

Langfuse

Track generations, token usage, and costs in Langfuse
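
A minimal sketch of wiring this up; the @core-ai/langfuse package name and the createLangfuseMiddleware factory are assumptions modeled on the OpenTelemetry integration above, not confirmed API:

import { wrapChatModel } from '@core-ai/core-ai';
// Hypothetical package and factory name, mirroring the OpenTelemetry integration.
import { createLangfuseMiddleware } from '@core-ai/langfuse';

const tracedModel = wrapChatModel({
  model,
  // Hypothetical: Langfuse clients typically read credentials from LANGFUSE_* env vars.
  middleware: createLangfuseMiddleware(),
});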

How it works

Observability integrations are standard middleware applied via wrapChatModel, wrapEmbeddingModel, or wrapImageModel. Each factory function returns a middleware object that hooks into model operations to record spans or observations. You can combine observability middleware with other middleware. The order in the array controls execution order — place observability middleware first to capture the full duration of the call including any other middleware processing.
import { wrapChatModel } from '@core-ai/core-ai';
import { createOtelMiddleware } from '@core-ai/opentelemetry';

// Observability first, so its span covers the retry and validation
// middleware as well (both assumed to be defined elsewhere).
const model = wrapChatModel({
  model: baseModel,
  middleware: [createOtelMiddleware(), retryMiddleware, validationMiddleware],
});
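
The same pattern applies to the other wrappers. A sketch assuming wrapEmbeddingModel takes the same { model, middleware } options as wrapChatModel (baseEmbeddingModel is a placeholder for your embedding model instance):

import { wrapEmbeddingModel } from '@core-ai/core-ai';
import { createOtelMiddleware } from '@core-ai/opentelemetry';

// Assumed to share the options shape of wrapChatModel.
const tracedEmbeddings = wrapEmbeddingModel({
  model: baseEmbeddingModel, // placeholder: your embedding model instance
  middleware: createOtelMiddleware(),
});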