Overview

The Mistral provider gives you access to Mistral AI’s models for chat completion and embeddings, optimized for European languages and multilingual tasks.

Installation

npm install @core-ai/mistral

createMistral()

Create a Mistral provider instance.
import { createMistral } from '@core-ai/mistral';

const mistral = createMistral({
  apiKey: process.env.MISTRAL_API_KEY
});

Options

  • apiKey (string): Your Mistral API key. Defaults to the MISTRAL_API_KEY environment variable.
  • baseURL (string): Custom base URL for API requests. Useful for proxies or self-hosted deployments.
  • client (Mistral): Provide your own configured Mistral client instance.

Returns

MistralProvider - Provider instance with methods to create models.

Provider Methods

chatModel()

Create a chat model instance.
const model = mistral.chatModel('mistral-large-2');
  • modelId (string, required): Model identifier. See Supported Models below.

embeddingModel()

Create an embedding model instance.
const embeddings = mistral.embeddingModel('mistral-embed');
  • modelId (string, required): Embedding model identifier.

Supported Models

Chat Models

Flagship models for complex tasks.
  • mistral-large-2 - Latest flagship model
  • mistral-large - Previous generation flagship
Balanced performance and efficiency.
  • mistral-medium - Strong performance at lower cost
Fast and efficient for simpler tasks.
  • mistral-small - Quick responses
  • mistral-tiny - Ultra-fast, lightweight
Purpose-built for specific use cases.
  • codestral - Optimized for code generation
  • mixtral-8x7b - Mixture of experts architecture
  • mixtral-8x22b - Larger mixture of experts

Embedding Models

  • mistral-embed - High-quality text embeddings

Capabilities

Feature             Support
Chat Completion     Yes
Streaming           Yes
Function Calling    Yes
Vision              Limited
Reasoning Effort    No
Embeddings          Yes
Image Generation    No

Mistral models do not support explicit reasoning effort control like OpenAI or Anthropic.

Examples

Basic Chat

import { createMistral } from '@core-ai/mistral';
import { generateText } from '@core-ai/core-ai';

const mistral = createMistral({
  apiKey: process.env.MISTRAL_API_KEY
});

const result = await generateText({
  model: mistral.chatModel('mistral-large-2'),
  prompt: 'Explain the concept of recursion'
});

console.log(result.text);

Multilingual Chat

// Mistral models excel at multilingual tasks
const result = await generateText({
  model: mistral.chatModel('mistral-large-2'),
  prompt: 'Expliquez la théorie de la relativité en français'
});

console.log(result.text);

Streaming

import { streamText } from '@core-ai/core-ai';

const stream = await streamText({
  model: mistral.chatModel('mistral-medium'),
  prompt: 'Write a short story about artificial intelligence'
});

for await (const chunk of stream) {
  if (chunk.type === 'text') {
    process.stdout.write(chunk.text);
  }
}

Code Generation

// Use Codestral for code tasks
const result = await generateText({
  model: mistral.chatModel('codestral'),
  prompt: `
    Write a Python function to calculate the Fibonacci sequence
    up to n terms using dynamic programming.
  `
});

console.log(result.text);

Function Calling

import { generateText, tool } from '@core-ai/core-ai';
import { z } from 'zod';

const result = await generateText({
  model: mistral.chatModel('mistral-large-2'),
  prompt: 'What is 25 multiplied by 4?',
  tools: {
    calculator: tool({
      description: 'Perform mathematical calculations',
      parameters: z.object({
        operation: z.enum(['add', 'subtract', 'multiply', 'divide']),
        a: z.number(),
        b: z.number()
      }),
      execute: async ({ operation, a, b }) => {
        const ops = {
          add: a + b,
          subtract: a - b,
          multiply: a * b,
          divide: a / b
        };
        return { result: ops[operation] };
      }
    })
  }
});

Embeddings

import { embed } from '@core-ai/core-ai';

const result = await embed({
  model: mistral.embeddingModel('mistral-embed'),
  value: 'Search query for semantic similarity'
});

console.log(result.embedding); // number[]
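Once you have embedding vectors, similarity between them is typically measured with cosine similarity. A minimal sketch; the function name is illustrative and not part of @core-ai/core-ai:

```typescript
// Cosine similarity between two embedding vectors.
// Assumes both vectors have the same length, as embeddings
// produced by the same model do.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Ranking documents by cosineSimilarity(queryEmbedding, docEmbedding) gives a simple semantic search over embedded text.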

Batch Embeddings

import { embedMany } from '@core-ai/core-ai';

const documents = [
  'First document',
  'Second document',
  'Third document'
];

const results = await embedMany({
  model: mistral.embeddingModel('mistral-embed'),
  values: documents
});

for (const result of results) {
  console.log(result.embedding);
}
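For large document collections, it can help to split the input into fixed-size batches before calling embedMany. A small helper sketch; the batch size used here is an illustrative choice, not a documented Mistral limit:

```typescript
// Split a list of items into fixed-size batches.
// Useful for embedding large document sets in chunks.
function toBatches<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

Each batch can then be passed to embedMany in turn, keeping individual requests small.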

Custom Base URL

// Use with self-hosted or proxy endpoints
const mistral = createMistral({
  apiKey: 'your-api-key',
  baseURL: 'https://your-proxy.com/v1'
});

Conversation

const result = await generateText({
  model: mistral.chatModel('mistral-large-2'),
  messages: [
    {
      role: 'user',
      content: 'What is machine learning?'
    },
    {
      role: 'assistant',
      content: 'Machine learning is a subset of AI...'
    },
    {
      role: 'user',
      content: 'Can you give me an example?'
    }
  ]
});

Error Handling

import { APIError } from '@core-ai/core-ai';

try {
  const result = await generateText({
    model: mistral.chatModel('mistral-large-2'),
    prompt: 'Hello!'
  });
} catch (error) {
  if (error instanceof APIError) {
    console.error('Mistral API error:', error.message);
    console.error('Status:', error.statusCode);
  }
}
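Transient failures such as rate limits or network errors can often be retried. A generic exponential-backoff sketch, independent of the Mistral provider; the retry count and delays are illustrative defaults:

```typescript
// Retry an async call with exponential backoff.
// Delays grow as baseDelayMs, 2x, 4x, ... between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxRetries) {
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Wrapping a generateText call in withRetry(() => generateText({ ... })) retries it on failure before giving up.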

Best Practices

Choosing a model:
  • mistral-large-2 - Complex reasoning, analysis, creative writing
  • mistral-medium - General purpose, good balance
  • mistral-small - Simple queries, high throughput
  • codestral - Code generation, technical documentation
  • mixtral-8x22b - When you need the best quality
Multilingual strengths:
  • Mistral models have strong support for European languages
  • Particularly good for French, German, Spanish, Italian
  • Works well for code-switching between languages
Performance:
  • Use smaller models for simple tasks to reduce latency and cost
  • Enable streaming for long-form content
  • Batch embeddings when processing multiple documents

Model Comparison

Model            Parameters    Best For             Speed
mistral-large-2  Large         Complex reasoning    Slower
mistral-medium   Medium        General purpose      Medium
mistral-small    Small         Simple queries       Fast
codestral        Specialized   Code generation      Medium
mixtral-8x22b    141B (MoE)    Highest quality      Slower
mixtral-8x7b     46.7B (MoE)   Good balance         Medium

Use Cases

Code Generation

Use Codestral for implementing functions, debugging, and technical documentation.

Multilingual Support

Leverage strong European language support for international applications.

Semantic Search

Use embeddings for document search and similarity matching.

Content Generation

Generate articles, summaries, and creative content with Large models.