Overview

The generate() function produces a single, complete response from a chat model. It is the core function for non-streaming text generation, with support for tool calls, reasoning, and structured conversations.

Function signature

export async function generate(
    params: GenerateParams
): Promise<GenerateResult>

export type GenerateParams = GenerateOptions & {
    model: ChatModel;
};

Parameters

model
ChatModel
required
The chat model instance to use for generation.
messages
Message[]
required
Array of messages in the conversation. Must not be empty.
temperature
number
Sampling temperature (0-2). Higher values make output more random.
maxTokens
number
Maximum number of tokens to generate.
topP
number
Nucleus sampling parameter (0-1).
reasoning
ReasoningConfig
Configuration for extended thinking/reasoning capabilities.
tools
ToolSet
Object mapping tool names to tool definitions. Enables the model to call functions.
toolChoice
ToolChoice
Controls how the model uses tools:
  • 'auto' - Model decides whether to use tools
  • 'none' - Model won’t use tools
  • 'required' - Model must use a tool
  • { type: 'tool', toolName: string } - Force specific tool
providerOptions
GenerateProviderOptions
Provider-specific options, namespaced by provider name (e.g. { openai: { user: '...' } }).
signal
AbortSignal
AbortSignal for cancelling the request.
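The toolChoice and signal options compose with the other parameters. As a sketch, the call below forces the model to invoke one specific tool and cancels the request on a timeout; model and weatherTool are assumed to be defined as in the tool example further down, and the 10-second limit is an arbitrary choice:

import { generate } from '@core-ai/core-ai';

// Abort the request if no response arrives within 10 seconds (arbitrary limit).
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 10_000);

try {
  const result = await generate({
    model,
    messages: [{ role: 'user', content: "What's the weather in Paris?" }],
    tools: { get_weather: weatherTool },
    // Force a call to get_weather instead of letting the model decide ('auto').
    toolChoice: { type: 'tool', toolName: 'get_weather' },
    signal: controller.signal,
  });
} finally {
  clearTimeout(timer);
}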

Return value

Returns a Promise<GenerateResult> with the following properties:
parts
AssistantContentPart[]
Array of content parts in the response (text, reasoning, tool calls).
content
string | null
Concatenated text content from all text parts. null if no text was generated.
reasoning
string | null
Concatenated reasoning content. null if no reasoning was generated.
toolCalls
ToolCall[]
Array of tool calls made by the model.
finishReason
FinishReason
Why generation stopped: 'stop', 'length', 'tool-calls', 'content-filter', or 'unknown'.
usage
ChatUsage
Token usage statistics including input/output tokens and cache details.
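As an illustration of consuming this shape, here is a small helper that turns a result into a one-line log entry. The usage field names inputTokens and outputTokens are assumptions for the sketch, since the exact ChatUsage properties are not listed on this page:

```typescript
type FinishReason = 'stop' | 'length' | 'tool-calls' | 'content-filter' | 'unknown';

// Subset of GenerateResult used by the helper; usage field names are assumed.
interface ResultSummaryInput {
  content: string | null;
  finishReason: FinishReason;
  usage: { inputTokens: number; outputTokens: number };
}

// Build a log line from the result fields documented above.
function summarize(result: ResultSummaryInput): string {
  const text = result.content ?? '(no text)';
  return `${result.finishReason}: ${text} ` +
    `(${result.usage.inputTokens} in / ${result.usage.outputTokens} out)`;
}
```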

Examples

Basic text generation

import { generate } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';

const openai = createOpenAI();
const model = openai.chatModel('gpt-5-mini');

const result = await generate({
  model,
  messages: [
    { role: 'user', content: 'What is the capital of France?' }
  ]
});

console.log(result.content);

With configuration

const result = await generate({
  model,
  messages: [
    { role: 'user', content: 'Write a creative story' }
  ],
  temperature: 1.5,
  maxTokens: 500,
});

With tools

import { generate, defineTool } from '@core-ai/core-ai';
import { z } from 'zod';

const weatherTool = defineTool({
  name: 'get_weather',
  description: 'Get weather for a location',
  parameters: z.object({
    location: z.string()
  })
});

const result = await generate({
  model,
  messages: [
    { role: 'user', content: 'What\'s the weather in Paris?' }
  ],
  tools: {
    get_weather: weatherTool
  }
});

if (result.toolCalls.length > 0) {
  console.log('Tool called:', result.toolCalls[0].name);
  console.log('Arguments:', result.toolCalls[0].arguments);
}
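Once tool calls come back, a typical next step is routing each one to a local handler. This sketch assumes the ToolCall shape shown above (a name plus parsed arguments); the handler map and its return values are hypothetical:

```typescript
// Minimal shape of a returned tool call, matching the result.toolCalls entries above.
type ToolCallLike = { name: string; arguments: Record<string, unknown> };

// Hypothetical local handlers, keyed by tool name.
const handlers: Record<string, (args: Record<string, unknown>) => string> = {
  get_weather: (args) => `Sunny in ${args.location}`,
};

// Route a tool call to its handler, failing loudly on unknown tools.
function dispatch(call: ToolCallLike): string {
  const handler = handlers[call.name];
  if (!handler) throw new Error(`No handler for tool: ${call.name}`);
  return handler(call.arguments);
}

console.log(dispatch({ name: 'get_weather', arguments: { location: 'Paris' } }));
```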

Multi-turn conversation

const result = await generate({
  model,
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
    { role: 'assistant', parts: [{ type: 'text', text: 'Hi! How can I help?' }] },
    { role: 'user', content: 'Tell me a joke' }
  ]
});

Error handling

Throws ValidationError if:
  • Messages array is empty
May also throw:
  • ProviderError if the provider returns an error during generation

import { ValidationError } from '@core-ai/core-ai';

try {
  const result = await generate({
    model,
    messages: []
  });
} catch (error) {
  if (error instanceof ValidationError) {
    console.error('Invalid parameters:', error.message);
  } else {
    throw error; // provider or unexpected errors propagate to the caller
  }
}