The `generate()` function provides synchronous chat completion, returning a complete response from the language model.
Basic Usage
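The code sample for this section appears to be missing from the page. As a sketch, a basic call might look like the following; the `generate` signature, message shape, and result fields are assumptions inferred from this page, not a confirmed API, and a local stub stands in for the real library import.

```typescript
// Hypothetical shapes inferred from this page; the real Core AI API may differ.
type Message = { role: "system" | "user" | "assistant"; content: string };

interface GenerateResult {
  text: string;
  finishReason: "stop" | "length" | "error";
  usage: { promptTokens: number; completionTokens: number; totalTokens: number };
}

// Local stub standing in for the real library import. The page describes
// generate() as synchronous, so the stub returns a result directly.
function generate(options: { model: string; messages: Message[] }): GenerateResult {
  const promptTokens = options.messages.length * 8; // crude stand-in for tokenizing
  return {
    text: "Hello! How can I help you today?",
    finishReason: "stop",
    usage: { promptTokens, completionTokens: 8, totalTokens: promptTokens + 8 },
  };
}

const result = generate({
  model: "gpt-4o", // model identifier; exact naming is provider-specific
  messages: [{ role: "user", content: "Say hello." }],
});

console.log(result.text);
```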
Generate a simple chat completion with a single call.
Using Different Providers
Core AI supports multiple providers with the same API:
- OpenAI
- Anthropic
- Google
- Mistral
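The page does not show how a provider is selected. One common convention is a provider-prefixed model identifier, sketched below; these identifier strings are illustrative assumptions, not confirmed Core AI names.

```typescript
// Illustrative only: assumes provider-prefixed model identifiers.
// The identifiers Core AI actually accepts may be spelled differently.
const exampleModels: Record<string, string> = {
  openai: "openai/gpt-4o",
  anthropic: "anthropic/claude-3-5-sonnet",
  google: "google/gemini-1.5-pro",
  mistral: "mistral/mistral-large",
};

// The call shape stays the same across providers; only `model` changes.
for (const [provider, model] of Object.entries(exampleModels)) {
  console.log(`${provider}: ${model}`);
}
```

Switching providers then means changing one string rather than rewriting the call.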
Configuration Options
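The configuration sample itself seems to have been lost. The sketch below shows what such options might look like; apart from `maxTokens`, which this page names, the parameter names are common LLM settings assumed for illustration.

```typescript
// Hypothetical option shape. Only `maxTokens` is named on this page; the
// other parameters are common LLM settings assumed for illustration.
interface GenerateOptions {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  maxTokens?: number;   // cap on tokens the model may generate
  temperature?: number; // higher values produce more random output (assumed name)
  topP?: number;        // nucleus-sampling cutoff (assumed name)
}

const options: GenerateOptions = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "Summarize this in one sentence." }],
  maxTokens: 256,
  temperature: 0.2, // low temperature keeps summaries near-deterministic
};

console.log(options.maxTokens);
```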
Configuration parameters let you customize model behavior.
Response Structure
The `generate()` function returns a `GenerateResult` object:
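The structure listing appears to be missing; the sketch below reconstructs a plausible shape from the fields this page discusses (the reply text, token usage, and the finish reason). Exact field names are assumptions.

```typescript
// Plausible GenerateResult shape inferred from this page's sections on
// token usage and finish reasons; exact field names are assumptions.
interface GenerateResult {
  text: string;                              // the assistant's reply
  finishReason: "stop" | "length" | "error"; // why generation ended (assumed values)
  usage: {
    promptTokens: number;     // tokens in the input messages
    completionTokens: number; // tokens generated in the reply
    totalTokens: number;      // promptTokens + completionTokens
  };
}

const sample: GenerateResult = {
  text: "Paris is the capital of France.",
  finishReason: "stop",
  usage: { promptTokens: 12, completionTokens: 8, totalTokens: 20 },
};

console.log(sample.usage.totalTokens);
```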
Understanding Token Usage
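This section's body appears to have been lost in extraction. As a sketch, the usage counts are what cost accounting is built on; the field names and the per-token rates below are placeholders, not real prices.

```typescript
// Assumed usage shape: prompt, completion, and total token counts.
const usage = { promptTokens: 120, completionTokens: 45, totalTokens: 165 };

// Providers typically price prompt and completion tokens separately.
// These rates are placeholders, not real prices.
const promptRatePer1K = 0.005;
const completionRatePer1K = 0.015;

const estimatedCost =
  (usage.promptTokens / 1000) * promptRatePer1K +
  (usage.completionTokens / 1000) * completionRatePer1K;

console.log(estimatedCost.toFixed(6));
```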
Multi-Turn Conversations
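The conversation sample appears to be missing; the pattern it likely showed is appending each assistant reply to the message list before the next call. The message shape below is an assumption.

```typescript
// Assumed message shape; roles follow the common system/user/assistant split.
type Message = { role: "system" | "user" | "assistant"; content: string };

// Running history: the model only sees what you send, so every call must
// include the earlier turns.
const messages: Message[] = [
  { role: "user", content: "What is the capital of France?" },
];

// After a call returns, append the assistant's reply to the history...
messages.push({ role: "assistant", content: "The capital of France is Paris." });

// ...then append the follow-up question and pass `messages` to generate() again.
messages.push({ role: "user", content: "What is its population?" });

console.log(messages.length);
```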
Build a conversation by including the previous messages in each call.
Error Handling
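The page does not show which errors `generate()` can throw. The sketch below assumes it throws on failure, using a hypothetical `CoreAIError` with an HTTP-style status code; the stub always fails so the catch path is exercised.

```typescript
// Hypothetical error class; the error types Core AI actually throws are
// not documented on this page.
class CoreAIError extends Error {
  constructor(message: string, public status: number) {
    super(message);
    Object.setPrototypeOf(this, CoreAIError.prototype); // keep instanceof reliable
  }
}

// Stub standing in for a generate() call that hits a rate limit.
function generate(options: { model: string; messages: object[] }): { text: string } {
  throw new CoreAIError("rate limit exceeded", 429);
}

let reply: string;
try {
  reply = generate({ model: "gpt-4o", messages: [] }).text;
} catch (err) {
  if (err instanceof CoreAIError && err.status === 429) {
    // Rate limited: back off and retry, or surface a friendly message.
    reply = "The service is busy; please try again shortly.";
  } else {
    throw err; // unexpected errors should still propagate
  }
}

console.log(reply);
```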
Handle errors gracefully rather than letting failures go uncaught.
Best Practices
Use system messages for consistent behavior
System messages set the assistant’s behavior and context:
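For example (the message shape is an assumption):

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

// The system message pins down tone and scope once, instead of repeating
// the same instructions in every user message.
const messages: Message[] = [
  {
    role: "system",
    content: "You are a concise support assistant. Answer in two sentences or fewer.",
  },
  { role: "user", content: "How do I reset my password?" },
];

console.log(messages[0].role);
```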
Set appropriate token limits
Control costs and response length with `maxTokens`.
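A sketch of the interaction between the cap and the finish reason; apart from `maxTokens`, which the page names, option and field names are assumptions, and the stub's truncation check merely stands in for real token counting.

```typescript
// Stub illustrating how a low maxTokens cap can cut a reply short and be
// reported via the finish reason. Names other than maxTokens are assumed.
function generate(options: { model: string; messages: object[]; maxTokens: number }): {
  text: string;
  finishReason: "stop" | "length";
} {
  const full = "This reply would normally run much longer than the cap allows.";
  const truncated = options.maxTokens < 20; // stand-in for real token counting
  return {
    text: truncated ? full.slice(0, 30) : full,
    finishReason: truncated ? "length" : "stop",
  };
}

const result = generate({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Explain HTTP." }],
  maxTokens: 10, // a low cap bounds cost but may truncate the reply
});

console.log(result.finishReason);
```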
Handle finish reasons appropriately
Check why generation stopped:
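The page does not list the possible finish-reason values; the sketch below assumes a small common set ("stop", "length", "content_filter").

```typescript
// Assumed finish-reason values; the set Core AI actually reports may differ.
type FinishReason = "stop" | "length" | "content_filter";

function describeFinish(reason: FinishReason): string {
  switch (reason) {
    case "stop":
      return "The model finished naturally.";
    case "length":
      return "The maxTokens limit was hit; the reply may be truncated.";
    case "content_filter":
      return "The output was blocked by a content filter.";
  }
}

console.log(describeFinish("length"));
```

Treating "length" as a signal to retry with a higher cap, and "content_filter" as a hard stop, is a reasonable default policy.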