Overview

core-ai supports three types of models, each designed for specific tasks:
  • Chat Models: Generate text, hold multi-turn conversations, call tools, and produce structured output
  • Embedding Models: Convert text into vector representations for semantic search
  • Image Models: Generate images from text prompts

Chat Models

Chat models are the most versatile, supporting text generation, conversations, tool calling, and structured output.

Interface

type ChatModel = {
  readonly provider: string;
  readonly modelId: string;
  generate(options: GenerateOptions): Promise<GenerateResult>;
  stream(options: GenerateOptions): Promise<ChatStream>;
  generateObject<TSchema extends z.ZodType>(
    options: GenerateObjectOptions<TSchema>
  ): Promise<GenerateObjectResult<TSchema>>;
  streamObject<TSchema extends z.ZodType>(
    options: StreamObjectOptions<TSchema>
  ): Promise<ObjectStream<TSchema>>;
};

Basic Text Generation

import { createOpenAI } from '@core-ai/openai';
import { generate } from '@core-ai/core-ai';

const openai = createOpenAI();
const model = openai.chatModel('gpt-5-mini');

const result = await generate({
  model,
  messages: [
    { role: 'user', content: 'Explain quantum computing in one sentence.' }
  ],
});

console.log(result.content);
// "Quantum computing uses quantum bits that can exist in multiple states..."

Streaming Responses

import { stream } from '@core-ai/core-ai';

const response = await stream({
  model,
  messages: [
    { role: 'user', content: 'Write a short story.' }
  ],
});

for await (const event of response) {
  if (event.type === 'text-delta') {
    process.stdout.write(event.text);
  }
}

Structured Output

Generate type-safe structured data using Zod schemas:

import { z } from 'zod';
import { generateObject } from '@core-ai/core-ai';

const schema = z.object({
  name: z.string(),
  age: z.number(),
  hobbies: z.array(z.string()),
});

const result = await generateObject({
  model,
  messages: [
    { role: 'user', content: 'Generate a random person profile.' }
  ],
  schema,
  schemaName: 'Person',
});

console.log(result.object);
// { name: "Alice Smith", age: 28, hobbies: ["reading", "hiking"] }

Tool Calling

Extend model capabilities with function tools:

import { defineTool } from '@core-ai/core-ai';
import { z } from 'zod';

const tools = {
  getWeather: defineTool({
    name: 'getWeather',
    description: 'Get current weather for a location',
    parameters: z.object({
      location: z.string().describe('City name'),
      unit: z.enum(['celsius', 'fahrenheit']).optional(),
    }),
  }),
};

const result = await generate({
  model,
  messages: [
    { role: 'user', content: 'What\'s the weather in Paris?' }
  ],
  tools,
});

if (result.toolCalls.length > 0) {
  console.log(result.toolCalls[0]);
  // { id: "call_123", name: "getWeather", arguments: { location: "Paris" } }
}
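The section above shows how tool calls come back but not how they get executed. One common pattern, sketched here as an assumption (the ToolCall shape mirrors the example output above; the getWeather stub and executeToolCall helper are hypothetical, not part of core-ai), is to dispatch each call to a local implementation and send the result back to the model in a follow-up request:

```typescript
// Shape of a tool call as shown in result.toolCalls above.
type ToolCall = {
  id: string;
  name: string;
  arguments: Record<string, unknown>;
};

// Local implementations keyed by tool name. getWeather is stubbed here;
// a real app would call a weather API.
const implementations: Record<string, (args: any) => Promise<unknown>> = {
  getWeather: async ({ location, unit = 'celsius' }) => ({
    location,
    unit,
    temperature: 18, // stubbed value
  }),
};

// Dispatch a tool call to its implementation. The returned value would
// then be sent back to the model as a tool message in the next request.
async function executeToolCall(call: ToolCall): Promise<unknown> {
  const impl = implementations[call.name];
  if (!impl) throw new Error(`Unknown tool: ${call.name}`);
  return impl(call.arguments);
}

executeToolCall({ id: 'call_123', name: 'getWeather', arguments: { location: 'Paris' } })
  .then((output) => console.log(output));
// { location: 'Paris', unit: 'celsius', temperature: 18 }
```

Keeping the dispatch table separate from the tool definitions lets the same definitions be shared between requests while the implementations stay testable on their own.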

Generate Result

generate() returns a GenerateResult with parts, content, reasoning, toolCalls, finishReason, and usage.

Stream Events

stream() returns a replayable ChatStream that emits reasoning, text, tool-call, and finish events while also exposing .result and .events. See the types reference for the full GenerateResult, StreamEvent, and FinishReason type definitions.

Embedding Models

Embedding models convert text into numerical vectors for semantic similarity and search.

Interface

type EmbeddingModel = {
  readonly provider: string;
  readonly modelId: string;
  embed(options: EmbedOptions): Promise<EmbedResult>;
};

Basic Usage

import { embed } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';

const openai = createOpenAI();
const model = openai.embeddingModel('text-embedding-3-small');

const result = await embed({
  model,
  input: 'The quick brown fox jumps over the lazy dog',
});

console.log(result.embeddings[0].length);
// 1536 (dimensions)

Batch Embedding

const result = await embed({
  model,
  input: [
    'First document',
    'Second document',
    'Third document',
  ],
});

console.log(result.embeddings.length);
// 3
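Once you have a batch of vectors, semantic search is just a similarity ranking over them. This is a minimal sketch of that step — cosineSimilarity and rank are hypothetical helpers (not part of core-ai), and the 3-dimensional vectors stand in for real embeddings:

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return document indices ordered from most to least similar to the query.
function rank(query: number[], docs: number[][]): number[] {
  return docs
    .map((doc, index) => ({ index, score: cosineSimilarity(query, doc) }))
    .sort((x, y) => y.score - x.score)
    .map((entry) => entry.index);
}

// Toy 3-dimensional vectors standing in for real embeddings.
console.log(rank([1, 0, 0], [[0, 1, 0], [1, 0.1, 0], [0.5, 0.5, 0]]));
// [ 1, 2, 0 ]
```

In practice you would embed the query with the same model as the documents and precompute the document vectors once.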

Custom Dimensions

const result = await embed({
  model,
  input: 'Sample text',
  dimensions: 256, // Reduce dimensions for faster search
});

Embed Result

type EmbedResult = {
  embeddings: number[][]; // Array of embedding vectors
  usage?: EmbeddingUsage; // Optional token usage (provider-dependent)
};

type EmbeddingUsage = {
  inputTokens: number;
};

Image Models

Image models generate images from text descriptions.

Interface

type ImageModel = {
  readonly provider: string;
  readonly modelId: string;
  generate(options: ImageGenerateOptions): Promise<ImageGenerateResult>;
};

Basic Usage

import { generateImage } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';

const openai = createOpenAI();
const model = openai.imageModel('gpt-image-1');

const result = await generateImage({
  model,
  prompt: 'A futuristic city at sunset with flying cars',
});

console.log(result.images[0]);
// { base64: "...", revisedPrompt: "..." }

Generate Options

type ImageGenerateOptions = {
  prompt: string; // Text description of desired image
  n?: number; // Number of images to generate
  size?: string; // Image size (e.g., "1024x1024")
  providerOptions?: ImageProviderOptions; // Provider-specific options
};

Multiple Images

const result = await generateImage({
  model,
  prompt: 'Abstract art with geometric shapes',
  n: 4, // Generate 4 variations
  size: '512x512',
});

console.log(result.images.length);
// 4
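Base64 results like the ones above are easy to persist. This is a sketch under the GeneratedImage shape documented below — saveImages is a hypothetical helper, not part of core-ai:

```typescript
import { writeFileSync } from 'node:fs';

// Mirrors the GeneratedImage type documented in this section.
type GeneratedImage = { base64?: string; url?: string; revisedPrompt?: string };

// Write each base64-encoded image to disk and return the filenames written.
// Images that only carry a url are skipped here (those you would download).
function saveImages(images: GeneratedImage[], prefix = 'image'): string[] {
  const written: string[] = [];
  images.forEach((image, i) => {
    if (!image.base64) return;
    const filename = `${prefix}-${i}.png`;
    writeFileSync(filename, Buffer.from(image.base64, 'base64'));
    written.push(filename);
  });
  return written;
}
```

For example, saveImages(result.images, 'abstract') would write abstract-0.png through abstract-3.png for the four variations requested above, assuming the provider returned base64 data.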

Image Result

type ImageGenerateResult = {
  images: GeneratedImage[];
};

type GeneratedImage = {
  base64?: string; // Base64-encoded image data
  url?: string; // URL to hosted image
  revisedPrompt?: string; // Provider-revised prompt
};

Different providers may return images as URLs, base64 data, or both. Check the provider documentation for specific behavior.

Model Properties

All models expose two readonly properties:

const model = openai.chatModel('gpt-5-mini');

console.log(model.provider); // "openai"
console.log(model.modelId); // "gpt-5-mini"

These properties are useful for logging, debugging, and tracking which models are used in your application.
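For instance, the two properties combine into a stable identifier for log lines and metrics. modelTag is a hypothetical helper, shown only to illustrate the idea:

```typescript
// Any object with the two readonly properties all models expose.
type ModelLike = { readonly provider: string; readonly modelId: string };

// Format a stable identifier like "openai:gpt-5-mini" for logs and metrics.
function modelTag(model: ModelLike): string {
  return `${model.provider}:${model.modelId}`;
}

console.log(modelTag({ provider: 'openai', modelId: 'gpt-5-mini' }));
// openai:gpt-5-mini
```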

Next Steps