Overview
The Mistral provider gives you access to Mistral AI’s models for chat completion and embeddings, optimized for European languages and multilingual tasks.
Installation
createMistral()
Create a Mistral provider instance.
Options
Your Mistral API key. Defaults to the MISTRAL_API_KEY environment variable.
Custom base URL for API requests. Useful for proxies or self-hosted deployments.
Provide your own configured Mistral client instance.
Returns
MistralProvider - Provider instance with methods to create models.
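A minimal setup sketch. The import path is hypothetical (this reference doesn't name the package), and the `apiKey` option name is an assumption:

```typescript
// Hypothetical package name -- substitute the one you installed.
import { createMistral } from "mistral-provider";

// With no options, the provider reads MISTRAL_API_KEY from the environment.
const mistral = createMistral();

// Or configure explicitly (option name `apiKey` assumed):
const mistralExplicit = createMistral({ apiKey: process.env.MISTRAL_API_KEY });
```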
Provider Methods
chatModel()
Create a chat model instance.
Model identifier. See Supported Models below.
embeddingModel()
Create an embedding model instance.
Embedding model identifier.
Supported Models
Chat Models
Large Models
Flagship models for complex tasks.
- mistral-large-2 - Latest flagship model
- mistral-large - Previous generation flagship
Medium Models
Balanced performance and efficiency.
- mistral-medium - Strong performance at lower cost
Small Models
Fast and efficient for simpler tasks.
- mistral-small - Quick responses
- mistral-tiny - Ultra-fast, lightweight
Specialized Models
Purpose-built for specific use cases.
- codestral - Optimized for code generation
- mixtral-8x7b - Mixture of experts architecture
- mixtral-8x22b - Larger mixture of experts
Embedding Models
- mistral-embed - High-quality text embeddings
Capabilities
| Feature | Support |
|---|---|
| Chat Completion | ✓ |
| Streaming | ✓ |
| Function Calling | ✓ |
| Vision | Limited |
| Reasoning Effort | ✗ |
| Embeddings | ✓ |
| Image Generation | ✗ |
Mistral models do not support explicit reasoning-effort control the way some OpenAI and Anthropic models do.
Examples
Basic Chat
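A minimal sketch, assuming a hypothetical package name and a `generate()`/`messages` surface on the model instance (neither is specified in this reference):

```typescript
import { createMistral } from "mistral-provider"; // hypothetical package name

const mistral = createMistral();
const model = mistral.chatModel("mistral-large-2");

// generate() and the { messages } shape are assumptions about the model surface.
const result = await model.generate({
  messages: [{ role: "user", content: "Summarize the Treaty of Rome in two sentences." }],
});
console.log(result.text);
```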
Multilingual Chat
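The same sketch with a French prompt; no special configuration is needed for non-English input (package name and `generate()` surface assumed, as above):

```typescript
import { createMistral } from "mistral-provider"; // hypothetical package name

const mistral = createMistral();
const model = mistral.chatModel("mistral-large-2");

const result = await model.generate({
  messages: [
    // "Answer in French: what is the capital of Switzerland?"
    { role: "user", content: "Réponds en français : quelle est la capitale de la Suisse ?" },
  ],
});
console.log(result.text);
```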
Streaming
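A streaming sketch. The `stream()` method and the `{ delta }` chunk shape are assumptions about the model surface:

```typescript
import { createMistral } from "mistral-provider"; // hypothetical package name

const mistral = createMistral();
const model = mistral.chatModel("mistral-large-2");

// Print tokens as they arrive instead of waiting for the full response.
for await (const chunk of model.stream({
  messages: [{ role: "user", content: "Write a short essay about the Rhine." }],
})) {
  process.stdout.write(chunk.delta);
}
```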
Code Generation
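Using the code-optimized model; same assumed surface as the sketches above:

```typescript
import { createMistral } from "mistral-provider"; // hypothetical package name

const mistral = createMistral();
const model = mistral.chatModel("codestral"); // optimized for code generation

const result = await model.generate({
  messages: [
    { role: "user", content: "Write a TypeScript function that deduplicates an array while preserving order." },
  ],
});
console.log(result.text);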
Function Calling
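A sketch of the application side of function calling: a registry of local tools and a dispatcher for calls the model requests. The `ToolCall` shape and the commented `result.toolCalls` access are assumptions, not the provider's verbatim API:

```typescript
// Shape of a tool call as assumed here -- check your provider's actual types.
type ToolCall = { name: string; arguments: Record<string, unknown> };

// Local tool implementations, keyed by the name exposed to the model.
const tools: Record<string, (args: Record<string, unknown>) => string> = {
  get_weather: (args) => `Sunny in ${String(args.city)}`,
};

// Route a model-requested call to the matching local function.
function dispatch(call: ToolCall): string {
  const fn = tools[call.name];
  if (!fn) throw new Error(`Unknown tool: ${call.name}`);
  return fn(call.arguments);
}

// In practice the model returns the calls, e.g. (surface assumed):
//   const result = await model.generate({ messages, tools: toolSchemas });
//   for (const call of result.toolCalls) {
//     messages.push({ role: "tool", content: dispatch(call) });
//   }
console.log(dispatch({ name: "get_weather", arguments: { city: "Paris" } })); // prints "Sunny in Paris"
```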
Embeddings
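An embedding sketch; `embed()` returning one vector per input string is an assumption about the model surface:

```typescript
import { createMistral } from "mistral-provider"; // hypothetical package name

const mistral = createMistral();
const embedder = mistral.embeddingModel("mistral-embed");

const vector = await embedder.embed("The quick brown fox jumps over the lazy dog.");
console.log(vector.length); // embedding dimensionality
```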
Batch Embeddings
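When embedding many documents, send them in fixed-size batches rather than one request per document. The batching helper below is self-contained; the batch size of 32 and the `embed()` call in the comment are assumptions:

```typescript
// Split a list into batches of at most `size` items.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Usage sketch (provider surface assumed):
//   const embedder = mistral.embeddingModel("mistral-embed");
//   const vectors: number[][] = [];
//   for (const batch of chunk(documents, 32)) {
//     vectors.push(...(await embedder.embed(batch)));
//   }
console.log(chunk(["a", "b", "c", "d", "e"], 2)); // five items -> batches of 2, 2, and 1
```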
Custom Base URL
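Pointing the provider at a proxy or self-hosted deployment; the `baseURL` option name and the URL shown are assumptions:

```typescript
import { createMistral } from "mistral-provider"; // hypothetical package name

const mistral = createMistral({
  baseURL: "https://mistral-proxy.internal.example.com/v1", // example URL
});
const model = mistral.chatModel("mistral-small");
```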
Conversation
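A multi-turn sketch: keep the full message history and append each turn before the next call (package name and `generate()` surface assumed):

```typescript
import { createMistral } from "mistral-provider"; // hypothetical package name

const mistral = createMistral();
const model = mistral.chatModel("mistral-medium");

const messages = [{ role: "user", content: "Name three French rivers." }];

const first = await model.generate({ messages });
messages.push({ role: "assistant", content: first.text });
messages.push({ role: "user", content: "Which of them is the longest?" });

const second = await model.generate({ messages });
console.log(second.text);
```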
Error Handling
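Transient failures such as rate limits are usually worth retrying with exponential backoff. The helper below is self-contained and generic; the error shapes the provider throws are not specified in this reference:

```typescript
// Retry an async operation with exponential backoff: delays of
// baseDelayMs, 2x, 4x, ... between attempts; rethrows the last error.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage sketch (model surface assumed):
//   const result = await withRetry(() => model.generate({ messages }));
```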
Best Practices
Model Selection
- mistral-large-2 - Complex reasoning, analysis, creative writing
- mistral-medium - General purpose, good balance
- mistral-small - Simple queries, high throughput
- codestral - Code generation, technical documentation
- mixtral-8x22b - When you need the best quality
Multilingual Usage
- Mistral models have strong support for European languages
- Particularly good for French, German, Spanish, Italian
- Works well for code-switching between languages
Performance Optimization
- Use smaller models for simple tasks to reduce latency and cost
- Enable streaming for long-form content
- Batch embeddings when processing multiple documents
Model Comparison
| Model | Parameters | Best For | Speed |
|---|---|---|---|
| mistral-large-2 | Large | Complex reasoning | Slower |
| mistral-medium | Medium | General purpose | Medium |
| mistral-small | Small | Simple queries | Fast |
| codestral | Specialized | Code generation | Medium |
| mixtral-8x22b | 141B (MoE) | Highest quality | Slower |
| mixtral-8x7b | 46.7B (MoE) | Good balance | Medium |
Use Cases
Code Generation
Use Codestral for implementing functions, debugging, and technical documentation.
Multilingual Support
Leverage strong European language support for international applications.
Semantic Search
Use embeddings for document search and similarity matching.
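Similarity matching typically scores embedding vectors with cosine similarity. A self-contained sketch (the vectors would come from mistral-embed in practice):

```typescript
// Cosine similarity between two equal-length vectors: 1 for identical
// direction, 0 for orthogonal, -1 for opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // → 1
console.log(cosineSimilarity([1, 0], [0, 1])); // → 0
```

To rank documents for a query, embed the query, score it against each document vector, and sort descending by score.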
Content Generation
Generate articles, summaries, and creative content with Large models.