
Overview

The OpenAI provider gives you access to GPT-5 models, o-series reasoning models, embeddings, and image generation. By default, createOpenAI uses the Responses API. For the Chat Completions API, use createOpenAICompat from @core-ai/openai/compat.

Installation

npm install @core-ai/openai

createOpenAI()

Create an OpenAI provider instance using the Responses API.
import { createOpenAI } from '@core-ai/openai';

const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

Options

apiKey
string
Your OpenAI API key. Defaults to the OPENAI_API_KEY environment variable.
baseURL
string
Custom base URL for API requests. Useful for proxies or OpenAI-compatible APIs.
client
OpenAI
Provide your own configured OpenAI client instance.

Returns

OpenAIProvider with methods chatModel(), embeddingModel(), and imageModel().
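Conceptually, the returned provider is a set of model factories. The sketch below is illustrative only; the interface names are assumptions based on the usage shown in this page, not core-ai's actual type declarations:

```typescript
// Illustrative sketch of the provider surface — these interface
// names are assumptions, not core-ai's real exports.
interface ChatModel { readonly modelId: string }
interface EmbeddingModel { readonly modelId: string }
interface ImageModel { readonly modelId: string }

interface OpenAIProviderShape {
  chatModel(modelId: string): ChatModel;
  embeddingModel(modelId: string): EmbeddingModel;
  imageModel(modelId: string): ImageModel;
}

// A stub implementation, just to show how the factory methods behave:
// each takes a model ID and returns a model handle bound to it.
const stubProvider: OpenAIProviderShape = {
  chatModel: (modelId) => ({ modelId }),
  embeddingModel: (modelId) => ({ modelId }),
  imageModel: (modelId) => ({ modelId }),
};

console.log(stubProvider.chatModel('gpt-5-mini').modelId); // 'gpt-5-mini'
```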

createOpenAICompat()

Create an OpenAI provider instance using the Chat Completions API.
import { createOpenAICompat } from '@core-ai/openai/compat';

const openai = createOpenAICompat({
  apiKey: process.env.OPENAI_API_KEY,
});
Same options as createOpenAI. Returns OpenAICompatProvider with the same model factory methods. Use createOpenAICompat when you need Chat Completions API compatibility, for example with third-party OpenAI-compatible endpoints.

Provider methods

chatModel()

const model = openai.chatModel('gpt-5-mini');

embeddingModel()

const embeddings = openai.embeddingModel('text-embedding-3-large');

imageModel()

const imageGen = openai.imageModel('gpt-image-1');

Supported models

Chat models

  • gpt-5.4 - Latest flagship model with max reasoning effort
  • gpt-5.4-pro - Most advanced model with enhanced reasoning
  • gpt-5.2 - Flagship with reasoning control
  • gpt-5.2-codex - Optimized for code generation
  • gpt-5.2-pro - Enhanced reasoning capabilities
  • gpt-5.1 - Previous generation flagship
  • gpt-5 - Balanced performance and cost
  • gpt-5-mini - Fast and efficient
  • gpt-5-nano - Lightweight model
  • o4-mini - Latest compact reasoning model
  • o3 - Advanced reasoning capabilities
  • o3-mini - Efficient reasoning model
  • o1 - First-generation reasoning model
  • o1-mini - Compact reasoning model (no effort control)
Any valid OpenAI chat model ID is accepted. The models above are the ones with explicit capability handling in core-ai.

Embedding models

  • text-embedding-3-large - 3072 dimensions, highest quality
  • text-embedding-3-small - 1536 dimensions, faster and cheaper
  • text-embedding-ada-002 - Legacy embedding model

Image models

  • gpt-image-1 - Image generation model used throughout the docs examples

Examples

Basic chat

import { createOpenAI } from '@core-ai/openai';
import { generate } from '@core-ai/core-ai';

const openai = createOpenAI();

const result = await generate({
  model: openai.chatModel('gpt-5-mini'),
  messages: [
    { role: 'user', content: 'Explain quantum computing in simple terms' },
  ],
});

console.log(result.content);

Reasoning with effort control

const result = await generate({
  model: openai.chatModel('gpt-5.4'),
  messages: [
    { role: 'user', content: 'Solve this complex mathematical proof...' },
  ],
  reasoning: {
    effort: 'max',
  },
});

Embeddings

import { embed } from '@core-ai/core-ai';

const result = await embed({
  model: openai.embeddingModel('text-embedding-3-large'),
  input: 'Search query text',
});

console.log(result.embeddings[0]);
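Embedding vectors are typically compared with cosine similarity. A minimal, library-independent helper (the math here is standard and nothing core-ai-specific):

```typescript
// Cosine similarity between two equal-length embedding vectors.
// Returns a value in [-1, 1]; closer to 1 means more similar.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Vectors must have the same length');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```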

Image generation

import { generateImage } from '@core-ai/core-ai';

const result = await generateImage({
  model: openai.imageModel('gpt-image-1'),
  prompt: 'A futuristic city at sunset',
  size: '1024x1024',
});

console.log(result.images);

Custom base URL

const openai = createOpenAI({
  apiKey: 'your-api-key',
  baseURL: 'https://your-proxy.com/v1',
});

Reasoning support

Reasoning support depends on the selected model family:
| Models | Supported effort levels |
| --- | --- |
| gpt-5.4, gpt-5.4-pro, gpt-5.2, gpt-5.2-codex, gpt-5.2-pro | low, medium, high, max |
| gpt-5.1 | low, medium, high |
| gpt-5, gpt-5-mini, gpt-5-nano | minimal, low, medium, high |
| o3, o3-mini, o4-mini, o1 | low, medium, high |
| o1-mini | No effort control |
GPT-5 family models throw a ProviderError if you set temperature or topP while reasoning is enabled.
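The table above can be expressed as a small lookup, useful for checking an effort level client-side before sending a request. This is an illustrative sketch mirroring the table; core-ai performs its own validation internally:

```typescript
type Effort = 'minimal' | 'low' | 'medium' | 'high' | 'max';

// Supported reasoning effort levels per model, mirroring the table above.
const supportedEfforts: Record<string, Effort[]> = {
  'gpt-5.4': ['low', 'medium', 'high', 'max'],
  'gpt-5.4-pro': ['low', 'medium', 'high', 'max'],
  'gpt-5.2': ['low', 'medium', 'high', 'max'],
  'gpt-5.2-codex': ['low', 'medium', 'high', 'max'],
  'gpt-5.2-pro': ['low', 'medium', 'high', 'max'],
  'gpt-5.1': ['low', 'medium', 'high'],
  'gpt-5': ['minimal', 'low', 'medium', 'high'],
  'gpt-5-mini': ['minimal', 'low', 'medium', 'high'],
  'gpt-5-nano': ['minimal', 'low', 'medium', 'high'],
  'o3': ['low', 'medium', 'high'],
  'o3-mini': ['low', 'medium', 'high'],
  'o4-mini': ['low', 'medium', 'high'],
  'o1': ['low', 'medium', 'high'],
  'o1-mini': [], // no effort control
};

function supportsEffort(model: string, effort: Effort): boolean {
  return supportedEfforts[model]?.includes(effort) ?? false;
}

console.log(supportsEffort('gpt-5.4', 'max')); // true
console.log(supportsEffort('gpt-5.1', 'max')); // false
```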

Reasoning metadata

When reasoning is enabled on the Responses API, core-ai automatically requests encrypted reasoning content and exposes it through provider metadata.
import { generate, getProviderMetadata } from '@core-ai/core-ai';
import type { OpenAIReasoningMetadata } from '@core-ai/openai';

const result = await generate({
  model: openai.chatModel('gpt-5.4'),
  messages: [{ role: 'user', content: 'Think carefully before answering.' }],
  reasoning: { effort: 'high' },
});

for (const part of result.parts) {
  if (part.type !== 'reasoning') continue;

  const metadata = getProviderMetadata<OpenAIReasoningMetadata>(
    part.providerMetadata,
    'openai'
  );

  console.log(metadata?.encryptedContent);
}

Provider-specific options

Options are namespaced under openai in providerOptions and validated with Zod schemas.

Generate options (Responses API)

import { generate } from '@core-ai/core-ai';

const result = await generate({
  model: openai.chatModel('gpt-5-mini'),
  messages: [{ role: 'user', content: 'Hello' }],
  providerOptions: {
    openai: {
      store: true,
      serviceTier: 'auto',
      parallelToolCalls: true,
      user: 'user-123',
    },
  },
});
Available fields: store, serviceTier ('auto' | 'default' | 'flex' | 'scale' | 'priority'), include, parallelToolCalls, user.
Responses requests default to store: false. If reasoning is enabled, core-ai also ensures reasoning.encrypted_content is included automatically.

Generate options (Chat Completions API)

When using createOpenAICompat, the available options differ:
import { createOpenAICompat } from '@core-ai/openai/compat';
import { generate } from '@core-ai/core-ai';

const openai = createOpenAICompat();

const result = await generate({
  model: openai.chatModel('gpt-5-mini'),
  messages: [{ role: 'user', content: 'Hello' }],
  providerOptions: {
    openai: {
      store: true,
      serviceTier: 'auto',
      parallelToolCalls: true,
      stopSequences: ['\n\n'],
      frequencyPenalty: 0.5,
      presencePenalty: 0.3,
      seed: 42,
      user: 'user-123',
    },
  },
});
Chat Completions uses reasoning_effort instead of the Responses API reasoning payload shape. The compat options do not support the include field.
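To make the difference concrete, the reasoning portion of the two request bodies can be sketched as follows. This is a simplified illustration of the wire shapes, not core-ai internals:

```typescript
// Responses API: reasoning is a nested object with an effort field.
const responsesBody = {
  model: 'gpt-5-mini',
  reasoning: { effort: 'high' },
};

// Chat Completions API: a flat reasoning_effort field instead.
const chatCompletionsBody = {
  model: 'gpt-5-mini',
  reasoning_effort: 'high',
};

console.log(JSON.stringify(responsesBody.reasoning)); // {"effort":"high"}
console.log(chatCompletionsBody.reasoning_effort); // high
```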

Embed options

const result = await embed({
  model: openai.embeddingModel('text-embedding-3-small'),
  input: 'text to embed',
  providerOptions: {
    openai: {
      encodingFormat: 'float',
      user: 'user-123',
    },
  },
});

Image options

const result = await generateImage({
  model: openai.imageModel('gpt-image-1'),
  prompt: 'A cat',
  providerOptions: {
    openai: {
      quality: 'high',
      background: 'auto',
      outputFormat: 'png',
    },
  },
});
Available fields: background, moderation, outputCompression, outputFormat, quality, responseFormat, style, user.

Error handling

import { ProviderError } from '@core-ai/core-ai';

try {
  const result = await generate({
    model: openai.chatModel('gpt-5-mini'),
    messages: [{ role: 'user', content: 'Hello!' }],
  });
} catch (error) {
  if (error instanceof ProviderError) {
    console.error('OpenAI API error:', error.message);
    console.error('Status:', error.statusCode);
  }
}
