This page provides an overview of all available examples in the Core AI repository. Each example is a complete, runnable TypeScript file that demonstrates specific features and use cases.

Running Examples

All examples are located in the examples/ directory of the repository.

Prerequisites

  • Node.js 18+
  • Dependencies installed from the repository root:
npm install

Environment Variables

Create a .env file at the repository root:
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GOOGLE_API_KEY=your_google_api_key
MISTRAL_API_KEY=your_mistral_api_key
You only need API keys for the providers you plan to use. Each example specifies which key it requires.
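Examples read these keys from process.env at startup. As a minimal sketch of that setup step (the helper name requireKey is illustrative and not part of the repository):

```typescript
// requireKey.ts — hypothetical helper, not part of the repository.
// Fails fast with a clear message when the key an example needs is missing.
export function requireKey(
  name: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing ${name}. Add it to the .env file at the repository root.`);
  }
  return value;
}

// Usage inside an example:
// const apiKey = requireKey("OPENAI_API_KEY");
```

Checking the key before the first API call turns a confusing provider error into an actionable one.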

Run an Example

From the repository root:
npx tsx examples/01-chat-completion.ts

Provider Examples

Core AI supports multiple AI providers with the same unified API.

Example Categories

1. Getting Started

Start with these examples if you’re new to Core AI:
  • 01-chat-completion.ts - Your first chat completion
  • 02-streaming.ts - Real-time streaming responses
  • 07-error-handling.ts - Proper error handling patterns
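A common error-handling pattern for API calls is retry with exponential backoff. The sketch below shows the general shape of such a helper; it is a generic illustration under assumed names, not the actual code in 07-error-handling.ts:

```typescript
// Retry a failing async operation with exponential backoff.
// Delays between attempts double each time: base, 2x base, 4x base, …
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Wrapping a model call in withRetry(() => client.chat(...)) absorbs transient rate-limit or network failures without retrying forever.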

2. Advanced Capabilities

Explore powerful features:
  • 03-tool-calling.ts - Enable models to call functions
  • 05-embeddings.ts - Generate vector embeddings
  • 06-image-generation.ts - Create images from text
  • 04-multi-modal.ts - Process text and images together
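Tool calling generally works in two halves: the application describes functions to the model, and when the model responds with a tool name plus JSON arguments, the application dispatches that call to a local implementation. The sketch below illustrates the dispatch half with made-up tool names; it is not the repository's actual tool-calling API:

```typescript
// Minimal tool-dispatch sketch: route a model's tool-call request
// (tool name + JSON-encoded arguments) to a local function.
type ToolImpl = (args: Record<string, unknown>) => unknown;

const tools: Record<string, ToolImpl> = {
  // Illustrative tools only.
  getWeather: ({ city }) => `Sunny in ${String(city)}`,
  add: ({ a, b }) => Number(a) + Number(b),
};

function dispatchToolCall(name: string, rawArgs: string): unknown {
  const impl = tools[name];
  if (!impl) throw new Error(`Unknown tool: ${name}`);
  return impl(JSON.parse(rawArgs));
}
```

The result of dispatchToolCall is then sent back to the model so it can incorporate the tool's answer into its reply.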

3. Structured Data

Work with typed, validated outputs:
  • 11-generate-object.ts - Generate structured JSON
  • 12-stream-object.ts - Stream structured data
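Behind structured generation there is always a validation step: the model is asked for JSON matching a shape, and the application parses and checks the result before using it. The sketch below shows that step with a hypothetical Recipe type; it is a generic illustration, not the repository's generate-object API:

```typescript
// Validate model output against an expected shape before using it.
interface Recipe {
  name: string;
  servings: number;
}

function parseRecipe(json: string): Recipe {
  const data = JSON.parse(json) as Partial<Recipe>;
  if (typeof data.name !== "string" || typeof data.servings !== "number") {
    throw new Error("Model output does not match the Recipe schema");
  }
  return { name: data.name, servings: data.servings };
}
```

Validating at the boundary keeps malformed model output from propagating into typed application code.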

4. Multi-Provider

Experience true provider portability:
  • 08-anthropic-provider.ts - Claude models
  • 09-google-genai-provider.ts - Gemini models
  • 10-mistral-provider.ts - Mistral models
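Provider portability means the call site stays the same while the provider is selected by name. One piece of that is mapping each provider to the environment variable it needs, mirroring the keys listed above. The lookup helper below is illustrative, not the repository's API:

```typescript
// Map provider names to the environment variable each one requires.
const providerKeys: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  google: "GOOGLE_API_KEY",
  mistral: "MISTRAL_API_KEY",
};

function keyFor(provider: string): string {
  const envVar = providerKeys[provider];
  if (!envVar) throw new Error(`Unsupported provider: ${provider}`);
  return envVar;
}
```

With a table like this, switching an example from Claude to Gemini is a one-word change plus the matching key in .env.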

Next Steps