Overview

The generate() function produces a single response from a chat model. It is the core function for synchronous text generation, with support for tool calls, reasoning, and structured conversations.

Function Signature

export async function generate(
    params: GenerateParams
): Promise<GenerateResult>

export type GenerateParams = GenerateOptions & {
    model: ChatModel;
};

Parameters

model (ChatModel, required)
The chat model instance to use for generation.

messages (Message[], required)
Array of messages in the conversation. Must not be empty.

reasoning (ReasoningConfig, optional)
Configuration for extended thinking/reasoning capabilities.

tools (ToolSet, optional)
Object mapping tool names to tool definitions. Enables the model to call functions.

toolChoice (ToolChoice, optional)
Controls how the model uses tools:
  • 'auto' - Model decides whether to use tools
  • 'none' - Model won’t use tools
  • 'required' - Model must use a tool
  • { type: 'tool', toolName: string } - Force a specific tool

config (ModelConfig, optional)
Model configuration parameters, such as temperature and maxTokens.

providerOptions (Record<string, unknown>, optional)
Provider-specific options passed through to the underlying model.

signal (AbortSignal, optional)
AbortSignal for cancelling an in-flight request.

Return Value

Returns a Promise<GenerateResult> with the following properties:
parts (AssistantContentPart[])
Array of content parts in the response (text, reasoning, tool calls).

content (string | null)
Concatenated text content from all text parts; null if no text was generated.

reasoning (string | null)
Concatenated reasoning content; null if no reasoning was generated.

toolCalls (ToolCall[])
Array of tool calls made by the model.

finishReason (FinishReason)
Why generation stopped: 'stop', 'length', 'tool-calls', 'content-filter', or 'unknown'.

usage (ChatUsage)
Token usage statistics, including input/output tokens and cache details.

Examples

Basic Text Generation

import { generate } from '@coreai/core';
import { openai } from '@coreai/openai';

const result = await generate({
  model: openai('gpt-4'),
  messages: [
    { role: 'user', content: 'What is the capital of France?' }
  ]
});

console.log(result.content); // "Paris"

With Configuration

const result = await generate({
  model: openai('gpt-4'),
  messages: [
    { role: 'user', content: 'Write a creative story' }
  ],
  config: {
    temperature: 1.5,
    maxTokens: 500
  }
});

With Tools

import { defineTool } from '@coreai/core';
import { z } from 'zod';

const weatherTool = defineTool({
  name: 'get_weather',
  description: 'Get weather for a location',
  parameters: z.object({
    location: z.string()
  })
});

const result = await generate({
  model: openai('gpt-4'),
  messages: [
    { role: 'user', content: 'What\'s the weather in Paris?' }
  ],
  tools: {
    get_weather: weatherTool
  }
});

if (result.toolCalls.length > 0) {
  console.log('Tool called:', result.toolCalls[0].name);
  console.log('Arguments:', result.toolCalls[0].arguments);
}

Multi-turn Conversation

const result = await generate({
  model: openai('gpt-4'),
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
    { role: 'assistant', parts: [{ type: 'text', text: 'Hi! How can I help?' }] },
    { role: 'user', content: 'Tell me a joke' }
  ]
});

Error Handling

Throws LLMError if:
  • Messages array is empty
  • Model encounters an error during generation

import { generate, LLMError } from '@coreai/core'; // assuming LLMError is exported here

try {
  const result = await generate({
    model: openai('gpt-4'),
    messages: [] // empty messages array throws LLMError
  });
} catch (error) {
  if (error instanceof LLMError) {
    console.error('Generation failed:', error.message);
  }
}

Source Location

~/workspace/source/packages/core-ai/src/generate.ts:12