## Overview

`@core-ai/langfuse` provides middleware that records model operations as Langfuse observations. Generations, embeddings, and image operations are tracked with model parameters, token usage, and costs in the Langfuse dashboard.
## Installation

```bash
npm install @core-ai/langfuse @langfuse/otel @langfuse/tracing
```

`@langfuse/otel` and `@langfuse/tracing` are peer dependencies. You need a Langfuse project configured with the appropriate environment variables (`LANGFUSE_PUBLIC_KEY`, `LANGFUSE_SECRET_KEY`, and optionally `LANGFUSE_BASEURL`).
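For example (placeholder values; whether `LANGFUSE_BASEURL` defaults to the Langfuse cloud endpoint when unset is worth confirming for your deployment):

```bash
# Placeholder values — substitute your own project keys.
export LANGFUSE_PUBLIC_KEY="pk-lf-your-public-key"
export LANGFUSE_SECRET_KEY="sk-lf-your-secret-key"
# Optional: typically only needed for self-hosted or region-specific deployments.
export LANGFUSE_BASEURL="https://cloud.langfuse.com"
```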
## Usage

### Chat models
```ts
import { wrapChatModel, generate } from '@core-ai/core-ai';
import { createLangfuseMiddleware } from '@core-ai/langfuse';

const tracedModel = wrapChatModel({
  model,
  middleware: createLangfuseMiddleware(),
});

const result = await generate({
  model: tracedModel,
  messages: [{ role: 'user', content: 'Explain quantum computing.' }],
});
```
All four chat operations (`generate`, `stream`, `generateObject`, `streamObject`) are tracked. For streaming operations, the observation stays open until the stream completes, so the final token usage is captured.
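As a rough sketch of that streaming behavior (illustrative only, not the package's internals — the `trackedStream` helper and `Usage` shape are invented here), the observation-closing callback fires only once the consumer has drained the stream:

```typescript
// Illustrative only: shows how an observation can be kept open until a
// stream is fully consumed. Names and shapes are invented for this sketch.
type Usage = { outputChunks: number };

async function* trackedStream(
  source: AsyncIterable<string>,
  endObservation: (usage: Usage) => void,
): AsyncGenerator<string> {
  let outputChunks = 0;
  for await (const chunk of source) {
    outputChunks += 1; // accumulate usage as chunks arrive
    yield chunk;       // forward the chunk to the consumer unchanged
  }
  // Only reached once the consumer has drained the stream:
  endObservation({ outputChunks });
}
```

The key design point is that usage is accumulated as a side effect of forwarding chunks, so the consumer sees an unmodified stream.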
### Embedding models
```ts
import { wrapEmbeddingModel, embed } from '@core-ai/core-ai';
import { createLangfuseEmbeddingMiddleware } from '@core-ai/langfuse';

const tracedModel = wrapEmbeddingModel({
  model: embeddingModel,
  middleware: createLangfuseEmbeddingMiddleware(),
});

const result = await embed({
  model: tracedModel,
  input: 'Sample text for embedding',
});
```
### Image models
```ts
import { wrapImageModel, generateImage } from '@core-ai/core-ai';
import { createLangfuseImageMiddleware } from '@core-ai/langfuse';

const tracedModel = wrapImageModel({
  model: imageModel,
  middleware: createLangfuseImageMiddleware(),
});

const result = await generateImage({
  model: tracedModel,
  prompt: 'A mountain landscape at sunset',
});
```
## Options

All three factory functions accept an optional `LangfuseMiddlewareOptions` object:

```ts
type LangfuseMiddlewareOptions = {
  recordContent?: boolean;
};
```
### recordContent

When `true`, input messages and output content are recorded on the Langfuse observation. Defaults to `false` to avoid sending sensitive data.

```ts
const middleware = createLangfuseMiddleware({ recordContent: true });
```
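Conceptually, the opt-in gate works like the following sketch (not the package source — `buildObservation` and the `Observation` shape are invented for illustration):

```typescript
// Illustrative sketch of the recordContent gate; not the actual middleware code.
type Observation = { model: string; input?: unknown; output?: unknown };

function buildObservation(
  model: string,
  input: unknown,
  output: unknown,
  options: { recordContent?: boolean } = {},
): Observation {
  const obs: Observation = { model };
  if (options.recordContent === true) {
    obs.input = input;   // message content attached only on explicit opt-in
    obs.output = output;
  }
  return obs; // default: no message content leaves the process
}
```

With this shape, omitting the option yields an observation that carries model and usage data but no message content.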
### Metadata forwarding

The `metadata` field on `generate`, `embed`, and `generateImage` options is forwarded to the Langfuse observation. Use it to attach custom data to your traces.
```ts
const result = await generate({
  model: tracedModel,
  messages: [{ role: 'user', content: 'Hello!' }],
  metadata: {
    userId: 'user-123',
    feature: 'onboarding',
  },
});
```
## Tracked attributes

Each Langfuse observation includes:

| Attribute | Description |
|---|---|
| `model` | Model ID |
| `modelParameters` | Temperature, max tokens, top-p (when set) |
| `input` | Input messages (when `recordContent` is true) |
| `output` | Response content (when `recordContent` is true) |
| `usageDetails` | Input tokens, output tokens, cache tokens, reasoning tokens |
| `metadata` | Custom metadata from the options |
| `level` | Set to `ERROR` on failures |
| `statusMessage` | Error message on failures |