# core-ai

## Docs

- [defineTool()](https://docs.core-ai.dev/api/core/define-tool.md): Define tools that language models can call
- [embed()](https://docs.core-ai.dev/api/core/embed.md): Generate embeddings for text using embedding models
- [Errors](https://docs.core-ai.dev/api/core/errors.md): Error classes for handling core-ai and provider failures
- [generate()](https://docs.core-ai.dev/api/core/generate.md): Generate a single response from a chat model
- [generateImage()](https://docs.core-ai.dev/api/core/generate-image.md): Generate images from text prompts using image models
- [generateObject()](https://docs.core-ai.dev/api/core/generate-object.md): Generate structured objects validated against Zod schemas
- [stream()](https://docs.core-ai.dev/api/core/stream.md): Stream responses from a chat model in real-time
- [streamObject()](https://docs.core-ai.dev/api/core/stream-object.md): Stream structured objects validated against Zod schemas in real-time
- [Types](https://docs.core-ai.dev/api/core/types.md): Core type definitions for messages, models, and usage tracking
- [Utilities](https://docs.core-ai.dev/api/core/utilities.md): Helper functions for schemas, messages, streams, middleware wrapping, and provider metadata
- [Anthropic Provider](https://docs.core-ai.dev/api/providers/anthropic.md): Create and configure the Anthropic provider for Claude models with extended thinking
- [Google GenAI Provider](https://docs.core-ai.dev/api/providers/google-genai.md): Create and configure the Google GenAI provider for Gemini models with multimodal capabilities
- [Mistral Provider](https://docs.core-ai.dev/api/providers/mistral.md): Create and configure the Mistral AI provider for chat and embeddings
- [OpenAI Provider](https://docs.core-ai.dev/api/providers/openai.md): Create and configure the OpenAI provider for chat, embeddings, and image generation
- [Releases](https://docs.core-ai.dev/changelog.md): Release history for all core-ai packages
- [Configuration](https://docs.core-ai.dev/concepts/configuration.md): Generation options for controlling temperature, tokens, and model behavior
- [Error Handling](https://docs.core-ai.dev/concepts/error-handling.md): CoreAIError, ProviderError, and structured output error types in core-ai
- [Messages](https://docs.core-ai.dev/concepts/messages.md): Understanding message types, content parts, and multi-modal inputs in core-ai
- [Middleware](https://docs.core-ai.dev/concepts/middleware.md): Extend model behavior with composable hooks for logging, validation, retries, and more
- [Models](https://docs.core-ai.dev/concepts/models.md): Understanding chat models, embedding models, and image models in core-ai
- [Providers](https://docs.core-ai.dev/concepts/providers.md): Learn how providers work in core-ai and how to use OpenAI, Anthropic, Google GenAI, and Mistral
- [Examples Overview](https://docs.core-ai.dev/examples/overview.md): Explore practical examples demonstrating core-ai features and capabilities
- [Chat Completion](https://docs.core-ai.dev/guides/chat-completion.md): Generate text responses using the generate() function
- [Embeddings](https://docs.core-ai.dev/guides/embeddings.md): Generate vector embeddings using embed() for semantic search and similarity
- [Image Generation](https://docs.core-ai.dev/guides/image-generation.md): Generate images using generateImage() with AI image models
- [Multi-Modal](https://docs.core-ai.dev/guides/multi-modal.md): Work with images, files, and multi-part messages in chat conversations
- [Streaming](https://docs.core-ai.dev/guides/streaming.md): Stream responses in real-time using stream() with async iteration
- [Structured Outputs](https://docs.core-ai.dev/guides/structured-outputs.md): Generate type-safe JSON with generateObject() and streamObject() using Zod schemas
- [Tool Calling](https://docs.core-ai.dev/guides/tool-calling.md): Let models use external tools and functions with defineTool()
- [Installation](https://docs.core-ai.dev/installation.md): Install core-ai with your preferred package manager
- [Introduction](https://docs.core-ai.dev/introduction.md): A type-safe abstraction layer over LLM provider SDKs for TypeScript
- [Langfuse](https://docs.core-ai.dev/observability/langfuse.md): Track model generations and usage with Langfuse observability
- [OpenTelemetry](https://docs.core-ai.dev/observability/opentelemetry.md): Trace model operations with OpenTelemetry spans
- [Observability](https://docs.core-ai.dev/observability/overview.md): Trace and monitor model operations with OpenTelemetry or Langfuse
- [Quickstart](https://docs.core-ai.dev/quickstart.md): Build your first AI chat completion in 2 minutes