
Quickstart

Get up and running with Core AI in just a few minutes. This guide shows you how to create a simple chat completion using an OpenAI GPT model.

Prerequisites

Before you begin, make sure you have:

- A recent version of Node.js installed
- An OpenAI API key
- The @core-ai/core-ai and @core-ai/openai packages installed in your project

Create your first chat completion

Step 1: Set up your project

Create a new file called chat.ts in your project:
touch chat.ts
Make sure you have your API key set as an environment variable:
export OPENAI_API_KEY="your-api-key-here"
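Since the later steps read the key from process.env, it helps to fail fast when the variable is missing. A small guard you might add at the top of chat.ts (the helper name and error message are suggestions, not part of Core AI):

```typescript
// Return the API key, or throw early with a clear message if it is missing.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error('OPENAI_API_KEY is not set; export it before running.');
  }
  return key;
}

// In chat.ts you would call: requireApiKey(process.env)
console.log(requireApiKey({ OPENAI_API_KEY: 'sk-demo' })); // "sk-demo"
```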
Step 2: Import dependencies

Add the necessary imports to your chat.ts file:
chat.ts
import { generate } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';
The generate function handles chat completions, while createOpenAI initializes the OpenAI provider.
Step 3: Initialize the provider and model

Create an OpenAI provider instance and select a chat model:
chat.ts
const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const model = openai.chatModel('gpt-5-mini');
You can use any OpenAI model ID, such as gpt-5-mini, gpt-5, or o3-mini.
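If you want to switch models without editing code, one common pattern is to read the model ID from an environment variable with a fallback. A minimal sketch (the OPENAI_MODEL variable name is just a convention here, not something Core AI reads itself):

```typescript
// Resolve the model ID from the environment, defaulting to gpt-5-mini.
// OPENAI_MODEL is a hypothetical variable name chosen for this example.
function resolveModelId(env: Record<string, string | undefined>): string {
  return env.OPENAI_MODEL ?? 'gpt-5-mini';
}

console.log(resolveModelId({}));                        // "gpt-5-mini" (default)
console.log(resolveModelId({ OPENAI_MODEL: 'gpt-5' })); // "gpt-5" (override)
```

You would then pass the result to openai.chatModel(resolveModelId(process.env)).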
Step 4: Generate a response

Call the generate function with your model and messages:
chat.ts
const result = await generate({
  model,
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing in one sentence.' },
  ],
});

console.log(result.content);
console.log('Usage:', result.usage);
Step 5: Run your code

Execute your script using tsx:
npx tsx chat.ts
You should see the AI’s response printed to the console along with token usage statistics.

Complete example

Here’s the full working example:
chat.ts
import { generate } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const model = openai.chatModel('gpt-5-mini');

const result = await generate({
  model,
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing in one sentence.' },
  ],
});

console.log('Response:', result.content);
console.log('Usage:', result.usage);
// Output:
// Response: Quantum computing uses quantum mechanical phenomena...
// Usage: { inputTokens: 25, outputTokens: 18, totalTokens: 43 }
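The usage object shown above can drive simple cost accounting. This sketch multiplies the token counts by per-million-token prices; the rates below are placeholders, not real OpenAI pricing:

```typescript
// Shape of result.usage as printed in the example above.
type Usage = { inputTokens: number; outputTokens: number; totalTokens: number };

// Placeholder prices per 1M tokens -- substitute your model's actual rates.
const INPUT_USD_PER_M = 0.25;
const OUTPUT_USD_PER_M = 2.0;

function estimateCostUSD(usage: Usage): number {
  return (
    (usage.inputTokens / 1_000_000) * INPUT_USD_PER_M +
    (usage.outputTokens / 1_000_000) * OUTPUT_USD_PER_M
  );
}

const usage: Usage = { inputTokens: 25, outputTokens: 18, totalTokens: 43 };
console.log(estimateCostUSD(usage).toFixed(8)); // "0.00004225"
```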

Try streaming

Core AI makes streaming responses just as easy. Here’s how to stream text as it’s generated:
streaming.ts
import { stream } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const model = openai.chatModel('gpt-5-mini');

const result = await stream({
  model,
  messages: [
    { role: 'user', content: 'Write a short haiku about TypeScript.' },
  ],
});

// Stream each text chunk as it arrives
for await (const event of result) {
  if (event.type === 'text-delta') {
    process.stdout.write(event.text);
  }
}

// Get the complete response with metadata
const response = await result.toResponse();
console.log('\nFinish reason:', response.finishReason);
console.log('Usage:', response.usage);
The toResponse() method aggregates the stream into a complete response object. You can call it after iterating through the stream.
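The per-event loop above generalizes to a small helper that collects text deltas into one string. This sketch mocks the event stream with an async generator; the event shape mirrors the text-delta events shown above, but is an assumption about Core AI's actual types:

```typescript
// Event shape mirroring the 'text-delta' events in the streaming example (assumed).
type StreamEvent = { type: 'text-delta'; text: string } | { type: 'finish' };

// Accumulate every text delta from an event stream into a single string.
async function collectText(events: AsyncIterable<StreamEvent>): Promise<string> {
  let text = '';
  for await (const event of events) {
    if (event.type === 'text-delta') text += event.text;
  }
  return text;
}

// Mock stream standing in for the result of stream({ ... }) above.
async function* mockStream(): AsyncGenerator<StreamEvent> {
  yield { type: 'text-delta', text: 'Types flow like ' };
  yield { type: 'text-delta', text: 'rivers.' };
  yield { type: 'finish' };
}

collectText(mockStream()).then((text) => console.log(text)); // "Types flow like rivers."
```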

Switch providers

One of Core AI’s key features is provider portability. Switch from OpenAI to Anthropic by changing only the provider import and the model setup:
import { generate } from '@core-ai/core-ai';
import { createAnthropic } from '@core-ai/anthropic';

const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const model = anthropic.chatModel('claude-sonnet-4-5');

const result = await generate({
  model,
  messages: [{ role: 'user', content: 'Hello!' }],
});
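The portability works because both providers return values satisfying the same chat model interface, so everything downstream of chatModel is unchanged. This sketch demonstrates the idea with plain mock objects; the interface shape and model IDs are assumptions for illustration, not Core AI's real types:

```typescript
// Minimal stand-in for Core AI's chat model type (assumed shape).
interface ChatModel {
  provider: string;
  modelId: string;
}

// Mock factories mirroring the role of the OpenAI and Anthropic providers.
const mockOpenAI = {
  chatModel: (id: string): ChatModel => ({ provider: 'openai', modelId: id }),
};
const mockAnthropic = {
  chatModel: (id: string): ChatModel => ({ provider: 'anthropic', modelId: id }),
};

// Downstream code depends only on ChatModel, so the swap stays local.
function pickModel(name: 'openai' | 'anthropic'): ChatModel {
  return name === 'openai'
    ? mockOpenAI.chatModel('gpt-5-mini')
    : mockAnthropic.chatModel('claude-sonnet-4-5');
}

console.log(pickModel('anthropic').provider); // "anthropic"
```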

Next steps

Now that you have a working chat completion, explore more advanced features.