The generate() function provides synchronous chat completion, returning a complete response from the language model.
Basic Usage
Generate a simple chat completion with a single user message.
Using Different Providers
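Whatever the provider, the basic call shape stays the same. The sketch below is hedged: core-ai's real generate() signature is an assumption, so a local synchronous stand-in replaces the actual import (the real call may well be Promise-based).

```typescript
// Sketch only: core-ai's real generate() signature is an assumption,
// so a local synchronous stand-in replaces the actual import.
type Message = { role: "system" | "user" | "assistant"; content: string };
type GenerateResult = { text: string; finishReason: string };

// Stand-in for core-ai's generate(): echoes the last message back.
function generate(opts: { model: string; messages: Message[] }): GenerateResult {
  const last = opts.messages[opts.messages.length - 1];
  return { text: `You asked: ${last.content}`, finishReason: "stop" };
}

const result = generate({
  model: "gpt-4o", // model name is illustrative
  messages: [{ role: "user", content: "What is the capital of France?" }],
});
console.log(result.text);
```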
core-ai supports multiple providers with the same API:
- OpenAI
- Anthropic
- Google
- Mistral
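Because the API is shared, switching providers changes only how the model is constructed, not the call itself. In this sketch, createOpenAI is named elsewhere in this guide, but createAnthropic and the model names are assumptions, and all the functions below are local stand-ins:

```typescript
// Sketch only: createOpenAI appears in this guide; createAnthropic is
// assumed by analogy, and both factories here are local stand-ins.
type Model = { provider: string; name: string };
const createOpenAI = (name: string): Model => ({ provider: "openai", name });
const createAnthropic = (name: string): Model => ({ provider: "anthropic", name });

// Stand-in generate(): the point is that only `model` changes per provider.
function generate(opts: { model: Model; messages: { role: string; content: string }[] }) {
  return { text: `[${opts.model.provider}] reply` };
}

const messages = [{ role: "user", content: "Hello!" }];
const fromOpenAI = generate({ model: createOpenAI("gpt-4o"), messages });
const fromAnthropic = generate({ model: createAnthropic("claude-sonnet"), messages });
```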
Configuration Options
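As a sketch of where each option lives (option names follow this section, but the exact shape of core-ai's options object is an assumption):

```typescript
// Sketch only: option names follow this section, but the exact shape of
// core-ai's options object is an assumption.
type GenerateOptions = {
  model: string;
  messages: { role: string; content: string }[];
  temperature?: number; // top-level
  maxTokens?: number;   // top-level
  topP?: number;        // top-level
  providerOptions?: {
    // provider-specific, e.g. available with createOpenAICompat
    stopSequences?: string[];
    frequencyPenalty?: number;
    presencePenalty?: number;
  };
};

const options: GenerateOptions = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a haiku about autumn." }],
  temperature: 0.7,
  maxTokens: 200,
  topP: 0.9,
  providerOptions: {
    stopSequences: ["\n\n"],
    frequencyPenalty: 0.5,
  },
};
```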
Customize model behavior with configuration parameters. temperature, maxTokens, and topP are top-level options, while stopSequences, frequencyPenalty, and presencePenalty are provider-specific and passed via providerOptions. Options like stopSequences and frequencyPenalty are available with createOpenAICompat (Chat Completions API); the default createOpenAI (Responses API) supports a different set of options. See Configuration for details.
Response Structure
The generate() function returns a GenerateResult object:
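A sketch of what that object might contain (GenerateResult is this guide's name; the individual fields below are assumptions modeled on common SDK result objects):

```typescript
// Sketch only: GenerateResult is named by this guide; the individual
// fields below are assumptions modeled on common SDK result objects.
type GenerateResult = {
  text: string; // the assistant's reply
  finishReason: "stop" | "length" | "content_filter" | "tool_calls";
  usage: { promptTokens: number; completionTokens: number; totalTokens: number };
};

const result: GenerateResult = {
  text: "The capital of France is Paris.",
  finishReason: "stop",
  usage: { promptTokens: 14, completionTokens: 8, totalTokens: 22 },
};
```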
Understanding Token Usage
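The usage field on a result lets you track spend per request. As a sketch (the usage field names are assumptions, and the per-token prices are made up for illustration):

```typescript
// Sketch only: the usage field names (promptTokens, completionTokens,
// totalTokens) are assumptions, and the per-token prices are made up.
type Usage = { promptTokens: number; completionTokens: number; totalTokens: number };

// Estimate spend from reported usage (prices per 1K tokens, illustrative).
function estimateCostUSD(usage: Usage, inputPer1K = 0.005, outputPer1K = 0.015): number {
  return (usage.promptTokens / 1000) * inputPer1K +
         (usage.completionTokens / 1000) * outputPer1K;
}

const usage: Usage = { promptTokens: 1000, completionTokens: 2000, totalTokens: 3000 };
const cost = estimateCostUSD(usage);
```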
Multi-Turn Conversations
Build conversations by including previous messages. Use resultToMessage() to convert a GenerateResult into an AssistantMessage:
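For example (resultToMessage() is named above; it is assumed here to wrap the reply text as an assistant message, stubbed locally for illustration):

```typescript
// Sketch only: resultToMessage() is named above; it is assumed to wrap
// the reply text as an assistant message, stubbed locally here.
type Message = { role: "system" | "user" | "assistant"; content: string };
type GenerateResult = { text: string };

// Stand-in for core-ai's resultToMessage()
function resultToMessage(result: GenerateResult): Message {
  return { role: "assistant", content: result.text };
}

const history: Message[] = [
  { role: "user", content: "Recommend a science-fiction novel." },
];

// Suppose generate(...) returned this:
const firstReply: GenerateResult = { text: "You might enjoy 'A Fire Upon the Deep'." };

// Append the assistant turn, then the follow-up, and call generate() again.
history.push(resultToMessage(firstReply));
history.push({ role: "user", content: "What is it about?" });
```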
Error Handling
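Provider calls can fail for transient reasons such as rate limits or network errors. A sketch of a guarded call; since core-ai's error classes aren't documented here, a local stand-in that throws a plain Error simulates the failure:

```typescript
// Sketch only: core-ai's error classes aren't documented here, so the
// local stand-in simply throws a plain Error to simulate a failure.
function generate(opts: { model: string; messages: { role: string; content: string }[] }): { text: string } {
  throw new Error("rate limit exceeded");
}

let reply: string;
try {
  reply = generate({ model: "gpt-4o", messages: [{ role: "user", content: "Hi" }] }).text;
} catch (err) {
  // Log the failure and degrade to a fallback instead of crashing.
  console.error("generation failed:", err);
  reply = "Sorry, something went wrong. Please try again.";
}
```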
Handle errors gracefully and provide fallbacks when a provider call fails.
Best Practices
Use system messages for consistent behavior
System messages set the assistant’s behavior and context:
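For example, pinning a system message at the start of the conversation so it applies to every subsequent turn (message roles follow this guide):

```typescript
// A system message pinned at the start of the conversation applies to
// every subsequent turn. (Message roles follow this guide.)
type Message = { role: "system" | "user" | "assistant"; content: string };

const messages: Message[] = [
  {
    role: "system",
    content: "You are a concise support agent. Answer in at most two sentences.",
  },
  { role: "user", content: "How do I reset my password?" },
];
```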
Set appropriate token limits
Control costs and response length with maxTokens:
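A sketch of sizing the limit to the task (maxTokens placement follows this guide's top-level options; the model name and values are illustrative):

```typescript
// Sketch only: values and model name are illustrative; maxTokens placement
// follows this guide's top-level options.
const summaryOptions = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "Summarize this article in one paragraph." }],
  maxTokens: 150, // short task: tight cap bounds cost and length
};

const draftOptions = {
  ...summaryOptions,
  messages: [{ role: "user", content: "Draft a detailed project proposal." }],
  maxTokens: 2000, // longer task: leave headroom so output isn't truncated
};
```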
Handle finish reasons appropriately
Check why generation stopped:
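A sketch of branching on the finish reason (these specific finishReason values are assumptions modeled on common provider APIs):

```typescript
// Sketch only: these finishReason values are assumptions modeled on
// common provider APIs ("stop", "length", "content_filter").
type GenerateResult = { text: string; finishReason: string };

function describeFinish(result: GenerateResult): string {
  switch (result.finishReason) {
    case "stop":
      return "completed normally";
    case "length":
      return "truncated at maxTokens; consider raising the limit";
    case "content_filter":
      return "blocked by the provider's content filter";
    default:
      return `stopped: ${result.finishReason}`;
  }
}

const truncated: GenerateResult = { text: "...", finishReason: "length" };
const note = describeFinish(truncated);
```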
Next Steps
- Streaming: Stream responses in real time for better UX
- Tool Calling: Let models use tools and functions
- Multi-Modal: Work with images and files
- Structured Outputs: Get type-safe JSON responses