Overview
The OpenAI provider gives you access to GPT-5 models, o-series reasoning models, embeddings, and image generation. By default, `createOpenAI` uses the Responses API. For the Chat Completions API, use `createOpenAICompat` from `@core-ai/openai/compat`.
Installation
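The install command is not shown here; assuming the package is published under the `@core-ai/openai` name implied by the compat import path above, installation would look like:

```shell
# Assumed package name, inferred from the @core-ai/openai/compat import path
npm install @core-ai/openai
```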
createOpenAI()
Create an OpenAI provider instance using the Responses API.
Options
- Your OpenAI API key. Defaults to the `OPENAI_API_KEY` environment variable.
- Custom base URL for API requests. Useful for proxies or OpenAI-compatible APIs.
- Provide your own configured OpenAI client instance.
Returns
`OpenAIProvider` with methods `chatModel()`, `embeddingModel()`, and `imageModel()`.
createOpenAICompat()
Create an OpenAI provider instance using the Chat Completions API. It accepts the same options as `createOpenAI` and returns `OpenAICompatProvider` with the same model factory methods.
Use `createOpenAICompat` when you need Chat Completions API compatibility, for example with third-party OpenAI-compatible endpoints.
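A minimal sketch of pointing the compat provider at a third-party OpenAI-compatible endpoint. The `baseURL` and `apiKey` option names are assumptions based on the option descriptions above; check the library's types for the exact names:

```typescript
import { createOpenAICompat } from "@core-ai/openai/compat";

// Option names (baseURL, apiKey) are assumed from the option descriptions above.
const provider = createOpenAICompat({
  baseURL: "https://my-proxy.example.com/v1", // any OpenAI-compatible endpoint
  apiKey: process.env.MY_PROXY_API_KEY,
});

const model = provider.chatModel("gpt-5.2");
```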
Provider methods
chatModel()
embeddingModel()
imageModel()
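The three factory methods can be sketched as follows, using model IDs from the tables below. The zero-argument `createOpenAI()` call (falling back to `OPENAI_API_KEY`) follows the option descriptions above; the root import path is an assumption:

```typescript
import { createOpenAI } from "@core-ai/openai";

// Reads OPENAI_API_KEY from the environment by default.
const openai = createOpenAI();

const chat = openai.chatModel("gpt-5.2");                        // chat / reasoning model
const embeddings = openai.embeddingModel("text-embedding-3-small");
const images = openai.imageModel("gpt-image-1");
```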
Supported models
Chat models
GPT-5 Series
- gpt-5.4 - Latest flagship model with max reasoning effort
- gpt-5.4-pro - Most advanced model with enhanced reasoning
- gpt-5.2 - Flagship with reasoning control
- gpt-5.2-codex - Optimized for code generation
- gpt-5.2-pro - Enhanced reasoning capabilities
- gpt-5.1 - Previous generation flagship
- gpt-5 - Balanced performance and cost
- gpt-5-mini - Fast and efficient
- gpt-5-nano - Lightweight model
o-Series (Reasoning Models)
- o4-mini - Latest compact reasoning model
- o3 - Advanced reasoning capabilities
- o3-mini - Efficient reasoning model
- o1 - First-generation reasoning model
- o1-mini - Compact reasoning model (no effort control)
Any valid OpenAI chat model ID is accepted. The models above are the ones with explicit capability handling in core-ai.
Embedding models
- text-embedding-3-large - 3072 dimensions, highest quality
- text-embedding-3-small - 1536 dimensions, faster and cheaper
- text-embedding-ada-002 - Legacy embedding model
Image models
- gpt-image-1 - Image generation model used throughout the docs examples
Examples
Basic chat
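A hedged sketch of a basic chat call. The `generate` function is named in the related core-ai docs, but its import path and exact argument shape are assumptions here:

```typescript
import { createOpenAI } from "@core-ai/openai";
// Hypothetical import path for the core generate function.
import { generate } from "@core-ai/core";

const openai = createOpenAI();

const result = await generate({
  model: openai.chatModel("gpt-5"),
  prompt: "Explain the difference between the Responses and Chat Completions APIs.",
});

console.log(result.text); // result shape is an assumption
```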
Reasoning with effort control
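A sketch of effort control on a reasoning-capable model. The supported effort levels per model family are listed in the table under "Reasoning support"; the option shape (`reasoning: { effort }`) and the `generate` import path are assumptions:

```typescript
import { createOpenAI } from "@core-ai/openai";
import { generate } from "@core-ai/core"; // hypothetical import path

const openai = createOpenAI();

const result = await generate({
  model: openai.chatModel("gpt-5.2"),
  prompt: "Prove that the square root of 2 is irrational.",
  // gpt-5.2 supports low | medium | high | max;
  // the option name and shape here are assumptions.
  reasoning: { effort: "high" },
});
```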
Embeddings
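A sketch of embedding a batch of strings. Only `generate` and `stream` are named in these docs, so the `embed` helper, its import path, and its result shape are all hypothetical:

```typescript
import { createOpenAI } from "@core-ai/openai";
// `embed` is a hypothetical helper name; check the core-ai function docs.
import { embed } from "@core-ai/core";

const openai = createOpenAI();

const { embeddings } = await embed({
  model: openai.embeddingModel("text-embedding-3-small"), // 1536 dimensions
  values: ["hello world", "goodbye world"],
});
```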
Image generation
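A sketch of image generation with `gpt-image-1`. The `generateImage` helper and its import path are hypothetical; the docs only name the `imageModel()` factory:

```typescript
import { createOpenAI } from "@core-ai/openai";
// `generateImage` is a hypothetical helper name.
import { generateImage } from "@core-ai/core";

const openai = createOpenAI();

const { image } = await generateImage({
  model: openai.imageModel("gpt-image-1"),
  prompt: "A watercolor painting of a lighthouse at dusk",
});
```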
Custom base URL
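A sketch of routing requests through a proxy or OpenAI-compatible endpoint. The `baseURL` and `apiKey` option names are assumed from the option descriptions above:

```typescript
import { createOpenAI } from "@core-ai/openai";

// Option names are assumed from the option descriptions above.
const provider = createOpenAI({
  baseURL: "https://openai-proxy.internal.example.com/v1",
  apiKey: process.env.PROXY_API_KEY,
});
```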
Reasoning support
Reasoning support depends on the selected model family:

| Models | Supported effort levels |
|---|---|
| gpt-5.4, gpt-5.4-pro, gpt-5.2, gpt-5.2-codex, gpt-5.2-pro | low, medium, high, max |
| gpt-5.1 | low, medium, high |
| gpt-5, gpt-5-mini, gpt-5-nano | minimal, low, medium, high |
| o3, o3-mini, o4-mini, o1 | low, medium, high |
| o1-mini | No effort control |
Reasoning metadata
When reasoning is enabled on the Responses API, core-ai automatically requests encrypted reasoning content and exposes it through provider metadata.
Provider-specific options
Options are namespaced under `openai` in `providerOptions` and validated with Zod schemas.
Generate options (Responses API)
`store`, `serviceTier` (`'auto' | 'default' | 'flex' | 'scale' | 'priority'`), `include`, `parallelToolCalls`, `user`.
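Passing Responses-specific options through `providerOptions.openai` can be sketched as follows, using the option names listed above; the surrounding `generate` call shape and import path are assumptions:

```typescript
import { createOpenAI } from "@core-ai/openai";
import { generate } from "@core-ai/core"; // hypothetical import path

const openai = createOpenAI();

await generate({
  model: openai.chatModel("gpt-5.2"),
  prompt: "Summarize this thread.",
  providerOptions: {
    openai: {
      store: true,          // opt back in to storage
      serviceTier: "flex",
      parallelToolCalls: false,
      user: "user-1234",
    },
  },
});
```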
Responses requests default to `store: false`. If reasoning is enabled, core-ai also ensures `reasoning.encrypted_content` is included automatically.
Generate options (Chat Completions API)
When using `createOpenAICompat`, the available options differ:
Chat Completions uses `reasoning_effort` instead of the Responses API reasoning payload shape. The compat options do not support the `include` field.
Embed options
Image options
`background`, `moderation`, `outputCompression`, `outputFormat`, `quality`, `responseFormat`, `style`, `user`.
Error handling
Related
Anthropic Provider
Claude models with extended thinking
Google GenAI Provider
Gemini models with multimodal capabilities
core-ai Functions
Learn about generate, stream, and more