## Overview
Core AI supports three types of models, each designed for specific tasks:
- Chat Models: Generate text responses, hold multi-turn conversations, and call tools
- Embedding Models: Convert text into vector representations for semantic search
- Image Models: Generate images from text prompts
## Chat Models
Chat models are the most versatile, supporting text generation, conversations, tool calling, and structured output.
### Interface

```ts
type ChatModel = {
  readonly provider: string;
  readonly modelId: string;

  generate(options: GenerateOptions): Promise<GenerateResult>;
  stream(options: GenerateOptions): Promise<StreamResult>;

  generateObject<TSchema extends z.ZodType>(
    options: GenerateObjectOptions<TSchema>
  ): Promise<GenerateObjectResult<TSchema>>;

  streamObject<TSchema extends z.ZodType>(
    options: StreamObjectOptions<TSchema>
  ): Promise<StreamObjectResult<TSchema>>;
};
```
### Basic Text Generation

```ts
import { createOpenAI } from '@core-ai/openai';
import { generate } from '@core-ai/core-ai';

const openai = createOpenAI();
const model = openai.chatModel('gpt-4-turbo');

const result = await generate({
  model,
  messages: [
    { role: 'user', content: 'Explain quantum computing in one sentence.' }
  ],
});

console.log(result.content);
// "Quantum computing uses quantum bits that can exist in multiple states..."
```
### Streaming Responses

```ts
import { stream } from '@core-ai/core-ai';

const response = await stream({
  model,
  messages: [
    { role: 'user', content: 'Write a short story.' }
  ],
});

for await (const event of response) {
  if (event.type === 'text-delta') {
    process.stdout.write(event.text);
  }
}
```
### Structured Output
Generate type-safe structured data using Zod schemas:
```ts
import { z } from 'zod';
import { generateObject } from '@core-ai/core-ai';

const schema = z.object({
  name: z.string(),
  age: z.number(),
  hobbies: z.array(z.string()),
});

const result = await generateObject({
  model,
  messages: [
    { role: 'user', content: 'Generate a random person profile.' }
  ],
  schema,
  schemaName: 'Person',
});

console.log(result.object);
// { name: "Alice Smith", age: 28, hobbies: ["reading", "hiking"] }
```
### Tool Calling

Extend model capabilities with function tools:
```ts
import { defineTool } from '@core-ai/core-ai';
import { z } from 'zod';

const tools = {
  getWeather: defineTool({
    name: 'getWeather',
    description: 'Get current weather for a location',
    parameters: z.object({
      location: z.string().describe('City name'),
      unit: z.enum(['celsius', 'fahrenheit']).optional(),
    }),
  }),
};

const result = await generate({
  model,
  messages: [
    { role: 'user', content: 'What\'s the weather in Paris?' }
  ],
  tools,
});

if (result.toolCalls.length > 0) {
  console.log(result.toolCalls[0]);
  // { id: "call_123", name: "getWeather", arguments: { location: "Paris" } }
}
```
### Generate Result

```ts
type GenerateResult = {
  parts: AssistantContentPart[];  // All content parts (text, reasoning, tool calls)
  content: string | null;         // Concatenated text content
  reasoning: string | null;       // Extended thinking/reasoning if available
  toolCalls: ToolCall[];          // Tool calls requested by the model
  finishReason: FinishReason;     // Why generation stopped
  usage: ChatUsage;               // Token usage information
};

type FinishReason =
  | 'stop'            // Natural completion
  | 'length'          // Hit token limit
  | 'tool-calls'      // Requested tool calls
  | 'content-filter'  // Blocked by content filter
  | 'unknown';
```
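Each finish reason typically calls for a different follow-up. A sketch of a handler, with the `FinishReason` union copied from the type above; the suggested actions are illustrative guidance, not Core AI behavior:

```typescript
// Copied from the GenerateResult types above.
type FinishReason = 'stop' | 'length' | 'tool-calls' | 'content-filter' | 'unknown';

// Map each reason to a suggested next step (illustrative only).
function describeFinish(reason: FinishReason): string {
  switch (reason) {
    case 'stop':           return 'completed normally';
    case 'length':         return 'truncated: raise the token limit or continue the generation';
    case 'tool-calls':     return 'paused: execute the requested tools and send results back';
    case 'content-filter': return 'blocked: adjust the prompt or provider safety settings';
    case 'unknown':        return 'provider did not report a reason';
  }
}
```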
### Stream Events

```ts
type StreamEvent =
  | { type: 'reasoning-start' }
  | { type: 'reasoning-delta'; text: string }
  | { type: 'reasoning-end' }
  | { type: 'text-delta'; text: string }
  | { type: 'tool-call-start'; toolCallId: string; toolName: string }
  | { type: 'tool-call-delta'; toolCallId: string; argumentsDelta: string }
  | { type: 'tool-call-end'; toolCall: ToolCall }
  | { type: 'finish'; finishReason: FinishReason; usage: ChatUsage };
```
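A consumer usually folds these events into a final result. A sketch that accumulates text, reasoning, and completed tool calls from an event sequence; the `ToolCall` and `ChatUsage` stubs are minimal assumptions made so the snippet is self-contained:

```typescript
// Minimal stubs (assumptions for this sketch).
type ToolCall = { id: string; name: string; arguments: Record<string, unknown> };
type ChatUsage = Record<string, number>;

// Event union copied from the StreamEvent type above.
type StreamEvent =
  | { type: 'reasoning-start' }
  | { type: 'reasoning-delta'; text: string }
  | { type: 'reasoning-end' }
  | { type: 'text-delta'; text: string }
  | { type: 'tool-call-start'; toolCallId: string; toolName: string }
  | { type: 'tool-call-delta'; toolCallId: string; argumentsDelta: string }
  | { type: 'tool-call-end'; toolCall: ToolCall }
  | { type: 'finish'; finishReason: string; usage: ChatUsage };

// Fold a finished event sequence into the pieces a caller cares about.
function collect(events: StreamEvent[]) {
  let text = '';
  let reasoning = '';
  const toolCalls: ToolCall[] = [];
  for (const event of events) {
    if (event.type === 'text-delta') text += event.text;
    else if (event.type === 'reasoning-delta') reasoning += event.text;
    else if (event.type === 'tool-call-end') toolCalls.push(event.toolCall);
  }
  return { text, reasoning, toolCalls };
}
```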
## Embedding Models
Embedding models convert text into numerical vectors for semantic similarity and search.
### Interface

```ts
type EmbeddingModel = {
  readonly provider: string;
  readonly modelId: string;
  embed(options: EmbedOptions): Promise<EmbedResult>;
};
```
### Basic Usage

```ts
import { embed } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';

const openai = createOpenAI();
const model = openai.embeddingModel('text-embedding-3-small');

const result = await embed({
  model,
  input: 'The quick brown fox jumps over the lazy dog',
});

console.log(result.embeddings[0].length);
// 1536 (dimensions)
```
### Batch Embedding

```ts
const result = await embed({
  model,
  input: [
    'First document',
    'Second document',
    'Third document',
  ],
});

console.log(result.embeddings.length);
// 3
```
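Batched embeddings like these are the building block for semantic search: rank documents by vector similarity against a query embedding. A small sketch in plain TypeScript (no Core AI calls; `cosineSimilarity` and `rank` are illustrative helpers, not library API):

```typescript
// Cosine similarity: dot product of the vectors divided by the product
// of their magnitudes. 1 means identical direction, 0 means orthogonal.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return document indices ordered from most to least similar.
function rank(query: number[], docs: number[][]): number[] {
  return docs
    .map((doc, index) => ({ index, score: cosineSimilarity(query, doc) }))
    .sort((x, y) => y.score - x.score)
    .map((entry) => entry.index);
}
```

In practice you would embed the query with the same model as the documents and feed both through `rank`.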
### Custom Dimensions

```ts
const result = await embed({
  model,
  input: 'Sample text',
  dimensions: 256, // Reduce dimensions for faster search
});
```
### Embed Result

```ts
type EmbedResult = {
  embeddings: number[][];  // Array of embedding vectors
  usage?: EmbeddingUsage;  // Optional token usage (provider-dependent)
};

type EmbeddingUsage = {
  inputTokens: number;
};
```
## Image Models
Image models generate images from text descriptions.
### Interface

```ts
type ImageModel = {
  readonly provider: string;
  readonly modelId: string;
  generate(options: ImageGenerateOptions): Promise<ImageGenerateResult>;
};
```
### Basic Usage

```ts
import { generateImage } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';

const openai = createOpenAI();
const model = openai.imageModel('dall-e-3');

const result = await generateImage({
  model,
  prompt: 'A futuristic city at sunset with flying cars',
});

console.log(result.images[0].url);
// "https://..."
```
### Generate Options

```ts
type ImageGenerateOptions = {
  prompt: string;    // Text description of desired image
  n?: number;        // Number of images to generate
  size?: string;     // Image size (e.g., "1024x1024")
  providerOptions?: Record<string, unknown>; // Provider-specific options
};
```
### Multiple Images

```ts
const result = await generateImage({
  model,
  prompt: 'Abstract art with geometric shapes',
  n: 4, // Generate 4 variations
  size: '512x512',
});

console.log(result.images.length);
// 4
```
### Image Result

```ts
type ImageGenerateResult = {
  images: GeneratedImage[];
};

type GeneratedImage = {
  base64?: string;        // Base64-encoded image data
  url?: string;           // URL to hosted image
  revisedPrompt?: string; // Provider-revised prompt (e.g., DALL-E)
};
```
Different providers may return images as URLs, base64 data, or both. Check the provider documentation for specific behavior.
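A sketch of normalizing both return shapes to raw bytes, assuming a Node.js environment (`node:fs/promises`, global `fetch` from Node 18+); the `imageBytes` and `saveImage` helpers are illustrative, not part of Core AI:

```typescript
import { writeFile } from 'node:fs/promises';

// Shape copied from the GeneratedImage type above.
type GeneratedImage = {
  base64?: string;
  url?: string;
  revisedPrompt?: string;
};

// Decode inline base64 data directly, or download the hosted URL.
async function imageBytes(image: GeneratedImage): Promise<Uint8Array> {
  if (image.base64) return Buffer.from(image.base64, 'base64');
  if (image.url) return new Uint8Array(await (await fetch(image.url)).arrayBuffer());
  throw new Error('Image has neither base64 data nor a URL');
}

async function saveImage(image: GeneratedImage, path: string): Promise<void> {
  await writeFile(path, await imageBytes(image));
}
```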
## Model Properties
All models expose two readonly properties:
```ts
const model = openai.chatModel('gpt-4-turbo');

console.log(model.provider); // "openai"
console.log(model.modelId);  // "gpt-4-turbo"
```
These properties are useful for logging, debugging, and tracking which models are used in your application.
## Next Steps