Documentation Index
Fetch the complete documentation index at: https://docs.core-ai.dev/llms.txt
Use this file to discover all available pages before exploring further.
Overview
core-ai exports helpers for schema conversion, assistant message construction, stream creation, model middleware wrapping, and provider-specific metadata access.
zodSchemaToJsonSchema()
Convert a Zod schema to JSON Schema using Zod 4’s native conversion APIs.
import { z } from 'zod';
import { zodSchemaToJsonSchema } from '@core-ai/core-ai';
const weatherSchema = z.object({
  city: z.string(),
  temperatureC: z.number(),
});
const jsonSchema = zodSchemaToJsonSchema(weatherSchema);
console.log(jsonSchema);
Use this helper when you need a JSON Schema outside the built-in model and tool helpers.
resultToMessage()
Convert a GenerateResult into an AssistantMessage so you can continue a conversation with the assistant’s full response parts.
import { generate, resultToMessage } from '@core-ai/core-ai';
const result = await generate({
  model,
  messages: [{ role: 'user', content: 'Solve this step by step.' }],
  reasoning: { effort: 'high' },
});
const assistantTurn = resultToMessage(result);
Options
type ResultToMessageOptions = {
  includeReasoning?: boolean;
};
Set includeReasoning: false if you want to drop reasoning parts when reusing the message.
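Dropping reasoning parts amounts to filtering the result's parts array. A minimal, self-contained sketch of that assumed behavior, using simplified stand-in types rather than the library's real ones:

```typescript
// Simplified stand-ins for the library's part and message types (assumptions).
type Part =
  | { type: 'text'; text: string }
  | { type: 'reasoning'; text: string };

// Hypothetical re-implementation of resultToMessage's filtering option.
function toAssistantMessage(parts: Part[], includeReasoning = true) {
  const content = includeReasoning
    ? parts
    : parts.filter((part) => part.type !== 'reasoning');
  return { role: 'assistant' as const, content };
}

const sampleParts: Part[] = [
  { type: 'reasoning', text: 'First, break the problem down...' },
  { type: 'text', text: 'The answer is 42.' },
];
const withoutReasoning = toAssistantMessage(sampleParts, false);
// withoutReasoning.content holds only the text part.
```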
assistantMessage()
Create a simple assistant message from plain text.
import { assistantMessage } from '@core-ai/core-ai';
const greeting = assistantMessage('Hello, how can I help?');
This is equivalent to constructing an assistant message with a single text part.
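In other words, assuming a part-based message shape like the one below (a sketch, not the library's exact types), assistantMessage(text) behaves like:

```typescript
// Assumed message shape: an assistant role plus an array of text parts.
type AssistantTextMessage = {
  role: 'assistant';
  content: { type: 'text'; text: string }[];
};

// Hypothetical equivalent of assistantMessage(text) (assumption).
function makeAssistantMessage(text: string): AssistantTextMessage {
  return { role: 'assistant', content: [{ type: 'text', text }] };
}

const greetingMessage = makeAssistantMessage('Hello, how can I help?');
```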
getProviderMetadata()
Read provider-specific metadata from reasoning parts in a type-safe way.
import { generate, getProviderMetadata } from '@core-ai/core-ai';
import type { OpenAIReasoningMetadata } from '@core-ai/openai';
const result = await generate({
  model,
  messages: [{ role: 'user', content: 'Think carefully before answering.' }],
  reasoning: { effort: 'high' },
});
for (const part of result.parts) {
  if (part.type !== 'reasoning') continue;
  const metadata = getProviderMetadata<OpenAIReasoningMetadata>(
    part.providerMetadata,
    'openai'
  );
  console.log(metadata?.encryptedContent);
}
The top-level key matches the provider name, such as 'anthropic', 'google', or 'openai'.
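The lookup itself is simple: assuming providerMetadata is a record keyed by provider name, as the sentence above suggests, the helper plausibly reduces to an indexed read plus a type cast. A hedged sketch with stand-in types:

```typescript
// Assumed shape of providerMetadata (not the library's exact type).
type ProviderMetadataRecord = Record<string, Record<string, unknown>> | undefined;

// Hypothetical equivalent of getProviderMetadata<T>(metadata, provider).
function readProviderMetadata<T>(
  metadata: ProviderMetadataRecord,
  provider: string
): T | undefined {
  return metadata?.[provider] as T | undefined;
}

const openaiMeta = readProviderMetadata<{ encryptedContent: string }>(
  { openai: { encryptedContent: 'abc123' } },
  'openai'
);
// openaiMeta?.encryptedContent === 'abc123'
```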
createChatStream()
Create a ChatStream from an async iterable of StreamEvents. This is useful when building custom providers or testing.
import { createChatStream } from '@core-ai/core-ai';
import type { StreamEvent } from '@core-ai/core-ai';
const events: StreamEvent[] = [
  { type: 'text-delta', text: 'Hello' },
  {
    type: 'finish',
    finishReason: 'stop',
    usage: {
      inputTokens: 5,
      outputTokens: 1,
      inputTokenDetails: { cacheReadTokens: 0, cacheWriteTokens: 0 },
      outputTokenDetails: {},
    },
  },
];
async function* emitEvents() {
  for (const event of events) yield event;
}
const chatStream = createChatStream(emitEvents());
The source parameter accepts an AsyncIterable<StreamEvent> or a () => Promise<AsyncIterable<StreamEvent>> factory function. You can pass an optional { signal: AbortSignal } options object.
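To see the event flow end to end without the library, here is a self-contained sketch that drives the same event shapes through a consumer loop; the types are local stand-ins, not core-ai's real StreamEvent:

```typescript
// Local stand-in for StreamEvent (assumption, mirroring the snippet above).
type DemoStreamEvent =
  | { type: 'text-delta'; text: string }
  | { type: 'finish'; finishReason: string };

async function* emitDemoEvents(events: DemoStreamEvent[]) {
  for (const event of events) yield event;
}

// Accumulate text deltas the way a ChatStream consumer would.
async function collectText(source: AsyncIterable<DemoStreamEvent>): Promise<string> {
  let out = '';
  for await (const event of source) {
    if (event.type === 'text-delta') out += event.text;
  }
  return out;
}

const collected = collectText(emitDemoEvents([
  { type: 'text-delta', text: 'Hel' },
  { type: 'text-delta', text: 'lo' },
  { type: 'finish', finishReason: 'stop' },
]));
```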
createObjectStream()
Create an ObjectStream from an async iterable of ObjectStreamEvents.
import { createObjectStream } from '@core-ai/core-ai';
// `asyncIterable` is any AsyncIterable<ObjectStreamEvent>
const objectStream = createObjectStream(asyncIterable);
Like createChatStream, this accepts an optional { signal: AbortSignal } options object.
wrapChatModel()
Wrap a ChatModel with middleware to add cross-cutting behavior like logging, validation, or tracing.
import { wrapChatModel } from '@core-ai/core-ai';
import type { ChatModelMiddleware } from '@core-ai/core-ai';
const wrappedModel = wrapChatModel({
  model,
  middleware: myMiddleware, // single middleware or array
});
Returns a new ChatModel with the same interface. See the Middleware concept page for details on writing and composing middleware.
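For intuition, this kind of middleware composition generally reduces to wrapping a model object in decorators. A minimal sketch of the pattern with made-up shapes (not core-ai's actual ChatModel or ChatModelMiddleware types):

```typescript
// Hypothetical minimal model and middleware shapes (assumptions).
type DemoModel = { generate(prompt: string): string };
type DemoMiddleware = (next: DemoModel) => DemoModel;

// A logging middleware: forwards the call and logs both sides.
const logging: DemoMiddleware = (next) => ({
  generate(prompt) {
    console.log('request:', prompt);
    const output = next.generate(prompt);
    console.log('response:', output);
    return output;
  },
});

function wrapModel(
  model: DemoModel,
  middleware: DemoMiddleware | DemoMiddleware[]
): DemoModel {
  const list = Array.isArray(middleware) ? middleware : [middleware];
  // Apply right-to-left so the first middleware ends up outermost.
  return list.reduceRight((wrapped, mw) => mw(wrapped), model);
}

const baseModel: DemoModel = { generate: (prompt) => prompt.toUpperCase() };
const wrappedDemo = wrapModel(baseModel, logging);
// wrappedDemo.generate('hi') logs the request/response and returns 'HI'
```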
wrapEmbeddingModel()
Wrap an EmbeddingModel with middleware.
import { wrapEmbeddingModel } from '@core-ai/core-ai';
const wrappedModel = wrapEmbeddingModel({
  model: embeddingModel,
  middleware: myMiddleware,
});
wrapImageModel()
Wrap an ImageModel with middleware.
import { wrapImageModel } from '@core-ai/core-ai';
const wrappedModel = wrapImageModel({
  model: imageModel,
  middleware: myMiddleware,
});
stripModelDateSuffix()
Remove a trailing -YYYYMMDD date suffix from a model ID. This is used internally for model capability lookups.
import { stripModelDateSuffix } from '@core-ai/core-ai';
stripModelDateSuffix('gpt-5-mini-20250101'); // 'gpt-5-mini'
stripModelDateSuffix('gpt-5-mini'); // 'gpt-5-mini'
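This behavior is straightforward to emulate; a plausible implementation (an assumption, not the library's actual source) is a single anchored regex:

```typescript
// Drop a trailing -YYYYMMDD date suffix, if present.
function stripDateSuffix(modelId: string): string {
  return modelId.replace(/-\d{8}$/, '');
}

stripDateSuffix('gpt-5-mini-20250101'); // 'gpt-5-mini'
stripDateSuffix('gpt-5-mini'); // 'gpt-5-mini' (unchanged)
```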
asObject()
Safely cast an unknown value to Record<string, unknown>. Returns an empty object if the input is not an object.
import { asObject } from '@core-ai/core-ai';
const obj = asObject(someValue);
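A plausible implementation matching that contract (an assumption; how the real helper treats arrays and null is not documented, so this sketch returns {} for both):

```typescript
// Narrow an unknown value to a plain object, falling back to {}.
function toPlainObject(value: unknown): Record<string, unknown> {
  if (typeof value === 'object' && value !== null && !Array.isArray(value)) {
    return value as Record<string, unknown>;
  }
  return {};
}

toPlainObject({ a: 1 }); // { a: 1 }
toPlainObject('nope'); // {}
```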
safeParseJsonObject()
Safely parse a JSON string into an object. Returns undefined if parsing fails or the result is not an object.
import { safeParseJsonObject } from '@core-ai/core-ai';
const result = safeParseJsonObject('{"key": "value"}');
// { key: 'value' }
const invalid = safeParseJsonObject('not json');
// undefined
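The documented contract can be sketched as JSON.parse wrapped in a try/catch plus an object check (an assumption about the internals, not the actual source):

```typescript
// Parse JSON and keep the result only if it is a plain object.
function parseJsonObject(text: string): Record<string, unknown> | undefined {
  try {
    const parsed: unknown = JSON.parse(text);
    return typeof parsed === 'object' && parsed !== null && !Array.isArray(parsed)
      ? (parsed as Record<string, unknown>)
      : undefined;
  } catch {
    return undefined;
  }
}

parseJsonObject('{"key": "value"}'); // { key: 'value' }
parseJsonObject('not json'); // undefined
parseJsonObject('42'); // undefined (parses, but not an object)
```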