Overview
The Mistral provider gives you access to Mistral AI’s models for chat completion and embeddings, optimized for European languages and multilingual tasks.
Installation
npm install @core-ai/mistral
createMistral()
Create a Mistral provider instance.
import { createMistral } from '@core-ai/mistral';

const mistral = createMistral({
  apiKey: process.env.MISTRAL_API_KEY,
});
Options
apiKey - Your Mistral API key. Defaults to the MISTRAL_API_KEY environment variable.
baseURL - Custom base URL for API requests.
client - Provide your own configured Mistral client instance.
Returns
MistralProvider with methods chatModel() and embeddingModel().
Supported models
Chat models
mistral-large-2512 - Latest flagship model (Mistral Large 3)
mistral-large-2407 - Previous generation flagship
mistral-medium-2508 - Strong performance at lower cost (Mistral Medium 3.1)
magistral-medium-latest - Reasoning-capable Magistral model
mistral-small-2506 - Balanced small model
mistral-small - Quick responses
mistral-tiny - Ultra-fast, lightweight
codestral - Optimized for code generation
mixtral-8x7b - Mixture of experts architecture
mixtral-8x22b - Larger mixture of experts
Embedding models
mistral-embed - High-quality text embeddings
Capabilities
Feature | Support
Chat Completion | Yes
Streaming | Yes
Function Calling | Yes
Vision | Yes
Reasoning | Output only
Embeddings | Yes
Image Generation | No
Mistral can return reasoning content from thinking-capable models, but it does not map reasoning.effort into the request. The reasoning option is accepted as a no-op.
Examples
Basic chat
import { createMistral } from '@core-ai/mistral';
import { generate } from '@core-ai/core-ai';

const mistral = createMistral();

const result = await generate({
  model: mistral.chatModel('mistral-large-2512'),
  messages: [
    { role: 'user', content: 'Explain the concept of recursion' },
  ],
});

console.log(result.content);
Streaming
import { stream } from '@core-ai/core-ai';

const chatStream = await stream({
  model: mistral.chatModel('mistral-medium-2508'),
  messages: [
    { role: 'user', content: 'Write a short story about artificial intelligence' },
  ],
});

for await (const event of chatStream) {
  if (event.type === 'text-delta') {
    process.stdout.write(event.text);
  }
}
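If you need the full response text rather than incremental output, the deltas can be accumulated into one string. A minimal sketch, assuming events shaped like those above; the local demo generator stands in for a real stream and is not part of the package:

```typescript
// Event shape assumed from the streaming example above; other event types are ignored.
type StreamEvent = { type: 'text-delta'; text: string } | { type: 'finish' };

// Accumulate all text-delta events into a single string.
async function collectText(events: AsyncIterable<StreamEvent>): Promise<string> {
  let out = '';
  for await (const event of events) {
    if (event.type === 'text-delta') {
      out += event.text;
    }
  }
  return out;
}

// Local stand-in for a real chat stream:
async function* demoStream(): AsyncGenerator<StreamEvent> {
  yield { type: 'text-delta', text: 'Hello, ' };
  yield { type: 'text-delta', text: 'world' };
  yield { type: 'finish' };
}

collectText(demoStream()).then((text) => console.log(text)); // "Hello, world"
```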
Reasoning output
const result = await generate({
  model: mistral.chatModel('magistral-medium-latest'),
  messages: [
    { role: 'user', content: 'Solve this step by step: if 3x + 7 = 22, what is x?' },
  ],
  reasoning: {
    effort: 'high',
  },
});

console.log(result.reasoning);
reasoning.effort is not sent to the Mistral API. Thinking-capable models decide their own reasoning behavior and the adapter extracts the resulting reasoning parts.
Function calling
import { generate, defineTool } from '@core-ai/core-ai';
import { z } from 'zod';

const result = await generate({
  model: mistral.chatModel('mistral-large-2512'),
  messages: [
    { role: 'user', content: 'What is 25 multiplied by 4?' },
  ],
  tools: {
    calculator: defineTool({
      name: 'calculator',
      description: 'Perform mathematical calculations',
      parameters: z.object({
        operation: z.enum(['add', 'subtract', 'multiply', 'divide']),
        a: z.number(),
        b: z.number(),
      }),
    }),
  },
});
Embeddings
import { embed } from '@core-ai/core-ai';

const result = await embed({
  model: mistral.embeddingModel('mistral-embed'),
  input: 'Search query for semantic similarity',
});

console.log(result.embeddings[0]);
Batch embeddings
const documents = [
  'First document',
  'Second document',
  'Third document',
];

const result = await embed({
  model: mistral.embeddingModel('mistral-embed'),
  input: documents,
});

for (const [i, embedding] of result.embeddings.entries()) {
  console.log(`Document ${i}: ${embedding.length} dimensions`);
}
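Embedding vectors are typically compared with cosine similarity for semantic search. The helper below is not part of @core-ai/mistral; it is a self-contained sketch you could apply to the vectors in result.embeddings:

```typescript
// Cosine similarity between two equal-length vectors: dot(a, b) / (|a| * |b|).
// Returns 1 for identical direction, 0 for orthogonal, -1 for opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1 (identical direction)
console.log(cosineSimilarity([1, 0], [0, 1])); // 0 (orthogonal)
```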
Code generation
const result = await generate({
  model: mistral.chatModel('codestral'),
  messages: [
    {
      role: 'user',
      content: 'Write a Python function to calculate the Fibonacci sequence using dynamic programming.',
    },
  ],
});

console.log(result.content);
Provider-specific options
Options are namespaced under mistral in providerOptions:
Generate options
const result = await generate({
  model: mistral.chatModel('mistral-large-2512'),
  messages: [{ role: 'user', content: 'Hello' }],
  providerOptions: {
    mistral: {
      stopSequences: ['\n\n'],
      frequencyPenalty: 0.5,
      presencePenalty: 0.3,
      randomSeed: 42,
      parallelToolCalls: true,
      safePrompt: true,
    },
  },
});
Available fields: stopSequences, frequencyPenalty, presencePenalty, randomSeed, parallelToolCalls, promptMode, safePrompt.
Use parallelToolCalls to enable or disable parallel tool execution when using multiple tools.
Embed options
const result = await embed({
  model: mistral.embeddingModel('mistral-embed'),
  input: 'text',
  providerOptions: {
    mistral: {
      encodingFormat: 'float',
    },
  },
});
Available fields: outputDtype ('float' | 'int8' | 'uint8' | 'binary' | 'ubinary'), encodingFormat ('float' | 'base64'), metadata.
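With encodingFormat: 'base64', embeddings arrive as a base64 string rather than a number array. Below is a sketch of decoding one in Node, under the assumption that the payload is packed as little-endian float32 values (the packing details are an assumption, not confirmed by this page):

```typescript
// Decode a base64 string into numbers, assuming little-endian float32 packing.
function decodeBase64Embedding(b64: string): number[] {
  const buf = Buffer.from(b64, 'base64');
  const out: number[] = [];
  for (let i = 0; i + 4 <= buf.length; i += 4) {
    out.push(buf.readFloatLE(i));
  }
  return out;
}

// Round-trip demo with a locally encoded vector. Values are chosen to be exactly
// representable as float32; Float32Array uses host byte order, which is
// little-endian on typical platforms.
const encoded = Buffer.from(new Float32Array([0.25, -1.5, 3]).buffer).toString('base64');
console.log(decodeBase64Embedding(encoded)); // [ 0.25, -1.5, 3 ]
```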
Error handling
import { ProviderError } from '@core-ai/core-ai';

try {
  const result = await generate({
    model: mistral.chatModel('mistral-large-2512'),
    messages: [{ role: 'user', content: 'Hello!' }],
  });
} catch (error) {
  if (error instanceof ProviderError) {
    console.error('Mistral API error:', error.message);
    console.error('Status:', error.statusCode);
  }
}
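Transient failures such as rate limits or 5xx responses are often worth retrying. The generic backoff wrapper below is a sketch, not part of the package; real code would inspect error.statusCode first and only retry transient errors:

```typescript
// Retry an async operation with exponential backoff: 250ms, 500ms, 1s, ...
// Real code should only retry transient errors (e.g. HTTP 429 or 5xx).
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 250ms * 2^attempt before the next try.
      await new Promise((resolve) => setTimeout(resolve, 250 * 2 ** attempt));
    }
  }
  throw lastError;
}

// Usage sketch: const result = await withRetry(() => generate({ /* ... */ }));
```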
Related
OpenAI Provider - GPT models with reasoning effort control
Anthropic Provider - Claude models with extended thinking
Embeddings Guide - Learn how to use embeddings