## Overview

The Google GenAI provider gives you access to Gemini models with advanced multimodal capabilities, embeddings, and image generation through Imagen.
## Installation

```bash
npm install @core-ai/google-genai
```
## createGoogleGenAI()

Create a Google GenAI provider instance.

```typescript
import { createGoogleGenAI } from '@core-ai/google-genai';

const google = createGoogleGenAI({
  apiKey: process.env.GOOGLE_API_KEY,
});
```
### Options

- `apiKey` - Your Google AI API key. Pass it explicitly or set the `GOOGLE_API_KEY` environment variable.
- API version to use (e.g., `'v1'`, `'v1beta'`).
- Custom base URL for API requests.
- Provide your own configured Google GenAI client instance.
### Returns

A `GoogleGenAIProvider` with the methods `chatModel()`, `embeddingModel()`, and `imageModel()`.
## Supported models

### Chat models

**Gemini 3.x (thinking level)**

- `gemini-3.1-pro` - Most capable multimodal model
- `gemini-3.1-flash-lite-preview` - Cost-efficient model with thinking level control
- `gemini-3-pro` - Previous Gemini 3 generation

**Gemini 2.5 (thinking budget)**

- `gemini-2.5-pro` - High capability, budget-based thinking
- `gemini-2.5-flash` - Fast with optional thinking
- `gemini-2.5-flash-lite` - Lightweight with optional thinking

### Embedding models

- `text-embedding-004` - Latest text embedding model

### Image models

- `imagen-3.0` - Latest image generation model
## Examples

### Basic chat

```typescript
import { createGoogleGenAI } from '@core-ai/google-genai';
import { generate } from '@core-ai/core-ai';

const google = createGoogleGenAI();

const result = await generate({
  model: google.chatModel('gemini-3.1-pro'),
  messages: [
    { role: 'user', content: 'Explain machine learning' },
  ],
});

console.log(result.content);
```
### Reasoning

```typescript
const result = await generate({
  model: google.chatModel('gemini-3.1-pro'),
  messages: [
    { role: 'user', content: 'Analyze this complex scenario...' },
  ],
  reasoning: {
    effort: 'high',
  },
});
```
### Vision

```typescript
const result = await generate({
  model: google.chatModel('gemini-3.1-pro'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image' },
        {
          type: 'image',
          source: {
            type: 'url',
            url: 'https://example.com/photo.jpg',
          },
        },
      ],
    },
  ],
});
```
### Embeddings

```typescript
import { embed } from '@core-ai/core-ai';

const result = await embed({
  model: google.embeddingModel('text-embedding-004'),
  input: 'Text to embed',
});

console.log(result.embeddings[0]);
```
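Embedding vectors are usually compared with cosine similarity. The helper below is a self-contained sketch (it does not depend on the provider package, and `cosineSimilarity` is an illustrative name, not part of the API) showing how two vectors such as those in `result.embeddings` might be compared:

```typescript
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|). Returns a value in [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error('Vectors must have the same dimensionality');
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing the same way score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [2, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Scores closer to 1 indicate more semantically similar inputs.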
### Image generation

```typescript
import { generateImage } from '@core-ai/core-ai';

const result = await generateImage({
  model: google.imageModel('imagen-3.0'),
  prompt: 'A serene mountain landscape at dawn',
  n: 1,
});

console.log(result.images);
```
### Streaming

```typescript
import { stream } from '@core-ai/core-ai';

const chatStream = await stream({
  model: google.chatModel('gemini-2.5-flash'),
  messages: [
    { role: 'user', content: 'Write a story about...' },
  ],
});

for await (const event of chatStream) {
  if (event.type === 'text-delta') {
    process.stdout.write(event.text);
  }
}
```
### Tool calling

```typescript
import { generate, defineTool } from '@core-ai/core-ai';
import { z } from 'zod';

const result = await generate({
  model: google.chatModel('gemini-3.1-pro'),
  messages: [
    { role: 'user', content: 'Calculate the area of a circle with radius 5' },
  ],
  tools: {
    calculateArea: defineTool({
      name: 'calculateArea',
      description: 'Calculate circle area',
      parameters: z.object({
        radius: z.number(),
      }),
    }),
  },
});
```
## Thinking modes

### Thinking level (Gemini 3.x)

Gemini 3.x models use HIGH/LOW thinking control. Effort levels map as follows: `minimal`, `low`, and `medium` map to `LOW`; `high` and `max` map to `HIGH`. Gemini 3 models cannot disable thinking completely.
### Thinking budget (Gemini 2.5)

Effort levels map to token budgets: `minimal` -> 1,024; `low` -> 4,096; `medium` -> 16,384; `high` and `max` -> 32,768. `gemini-2.5-flash` and `gemini-2.5-flash-lite` can skip thinking entirely when the `reasoning` parameter is omitted.
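The two mappings above can be sketched as standalone helpers (the function names and `Effort` type are illustrative, not part of the provider API):

```typescript
type Effort = 'minimal' | 'low' | 'medium' | 'high' | 'max';

// Gemini 3.x: effort collapses to a two-valued thinking level.
function thinkingLevelFor(effort: Effort): 'LOW' | 'HIGH' {
  return effort === 'high' || effort === 'max' ? 'HIGH' : 'LOW';
}

// Gemini 2.5: effort maps to an explicit thinking token budget.
function thinkingBudgetFor(effort: Effort): number {
  const budgets: Record<Effort, number> = {
    minimal: 1024,
    low: 4096,
    medium: 16384,
    high: 32768,
    max: 32768,
  };
  return budgets[effort];
}

console.log(thinkingLevelFor('medium'));  // 'LOW'
console.log(thinkingBudgetFor('medium')); // 16384
```

Note that `minimal`, `low`, and `medium` are indistinguishable on Gemini 3.x, while Gemini 2.5 models preserve the full gradation.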
### Thought signatures

When reasoning is enabled, Google GenAI reasoning parts include a thought signature for multi-turn fidelity. Use `getProviderMetadata` to access it in a type-safe way.
```typescript
import { generate, getProviderMetadata } from '@core-ai/core-ai';
import type { GoogleReasoningMetadata } from '@core-ai/google-genai';

const result = await generate({
  model: google.chatModel('gemini-3.1-pro'),
  messages: [{ role: 'user', content: 'Think step by step.' }],
  reasoning: { effort: 'high' },
});

for (const part of result.parts) {
  if (part.type !== 'reasoning') continue;
  const metadata = getProviderMetadata<GoogleReasoningMetadata>(
    part.providerMetadata,
    'google'
  );
  console.log(metadata?.thoughtSignature);
}
```
The `GoogleReasoningMetadata` type contains:

- `thoughtSignature` - signature for preserving thought context across multi-turn conversations
## Provider-specific options

Options are namespaced under `google` in `providerOptions`.

### Generate options

```typescript
const result = await generate({
  model: google.chatModel('gemini-3.1-pro'),
  messages: [{ role: 'user', content: 'Hello' }],
  providerOptions: {
    google: {
      stopSequences: ['\n\n'],
      frequencyPenalty: 0.5,
      presencePenalty: 0.3,
      seed: 42,
      topK: 40,
    },
  },
});
```
### Embed options

```typescript
const result = await embed({
  model: google.embeddingModel('text-embedding-004'),
  input: 'text',
  providerOptions: {
    google: {
      taskType: 'RETRIEVAL_DOCUMENT',
      title: 'My Document',
    },
  },
});
```

Available fields: `taskType`, `title`, `mimeType`, `autoTruncate`.
### Image options

```typescript
const result = await generateImage({
  model: google.imageModel('imagen-3.0'),
  prompt: 'A serene landscape',
  providerOptions: {
    google: {
      aspectRatio: '16:9',
      negativePrompt: 'blurry, low quality',
      guidanceScale: 7.5,
      seed: 42,
      safetyFilterLevel: 'BLOCK_MEDIUM_AND_ABOVE',
      personGeneration: 'ALLOW_ADULT',
    },
  },
});
```

Available fields: `outputGcsUri`, `negativePrompt`, `aspectRatio`, `guidanceScale`, `seed`, `safetyFilterLevel` (`'BLOCK_LOW_AND_ABOVE' | 'BLOCK_MEDIUM_AND_ABOVE' | 'BLOCK_ONLY_HIGH' | 'BLOCK_NONE'`), `personGeneration` (`'DONT_ALLOW' | 'ALLOW_ADULT' | 'ALLOW_ALL'`), `includeSafetyAttributes`, `includeRaiReason`, `language`, `outputMimeType`, `outputCompressionQuality`, `addWatermark`, `labels`, `imageSize`, `enhancePrompt`.
## Error handling

```typescript
import { ProviderError } from '@core-ai/core-ai';

try {
  const result = await generate({
    model: google.chatModel('gemini-3.1-pro'),
    messages: [{ role: 'user', content: 'Hello!' }],
  });
} catch (error) {
  if (error instanceof ProviderError) {
    console.error('Google AI API error:', error.message);
    console.error('Status:', error.statusCode);
  }
}
```
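Transient failures such as rate limits or timeouts are often worth retrying. The wrapper below is a generic sketch with exponential backoff; `withRetry` is not part of `@core-ai/core-ai`, and the defaults (3 attempts, doubling delay) are illustrative choices:

```typescript
// Retry an async operation with exponential backoff.
// `attempts` and `baseDelayMs` are illustrative defaults.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // Wait baseDelayMs, 2x, 4x, ... between attempts.
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

In practice you would inspect the caught error (for example, `error instanceof ProviderError`) and only retry on retryable status codes such as 429 or 503.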
## Model comparison

| Model | Thinking control | Can disable | Best for |
| --- | --- | --- | --- |
| Gemini 3.1 Pro | Level (HIGH/LOW) | No | Complex multimodal |
| Gemini 3.1 Flash Lite Preview | Level (HIGH/LOW) | No | Cost-efficient with thinking |
| Gemini 3 Pro | Level (HIGH/LOW) | No | Previous generation multimodal |
| Gemini 2.5 Pro | Budget (tokens) | No | Controlled reasoning |
| Gemini 2.5 Flash | Budget (tokens) | Yes | Fast + flexible |
| Gemini 2.5 Flash Lite | Budget (tokens) | Yes | Lightweight tasks |
## See also

- OpenAI Provider - GPT models and image generation
- Anthropic Provider - Claude models with extended thinking
- Multi-Modal Guide - Learn how to work with vision and audio