@core-ai/core-ai
- Refactor internal chat/image wrapper plumbing and stream event reduction logic to reduce duplication and improve readability without changing public behavior.
@core-ai/openai
- Add model capability support for `gpt-5.4` and `gpt-5.4-pro`.
- Refactor chat adapter internals to reduce duplicated stream and part-aggregation logic, and consistently report `finishReason: 'tool-calls'` when a function call is emitted from the stream.
@core-ai/google-genai
- Add model capability support for `gemini-3.1-pro` and `gemini-3.1-flash-lite-preview`.
- Refactor adapter internals to reduce duplication and simplify stream/request helper logic without changing runtime behavior.
@core-ai/mistral
- Refactor adapter internals to reduce duplication and simplify stream/request helper logic without changing runtime behavior.
@core-ai/core-ai
- Replace `ModelConfig` with flat sampling fields (`temperature`, `maxTokens`, `topP`) on generate options. Introduce method-specific typed provider option interfaces (`GenerateProviderOptions`, `EmbedProviderOptions`, `ImageProviderOptions`) that providers extend via declaration merging.
- Restructure reasoning `providerMetadata` to use provider-namespaced keys. Adapters now detect cross-provider reasoning blocks and downgrade them to plain text instead of forwarding opaque metadata. Add a `getProviderMetadata` helper.
- Redesign chat and object streaming around replayable stream handles with `result` and `events`, rename the handle types to `ChatStream` and `ObjectStream`, and accept caller-provided `AbortSignal`s for cancellation.
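The replayable-handle design can be sketched roughly as follows. `ChatStream` is the real handle name per the entry above; everything else here (the event shapes, the buffering strategy, `makeChatStream`) is a hypothetical stand-in, not the actual @core-ai/core-ai API:

```typescript
// Hypothetical sketch of a replayable stream handle. Event shapes and the
// in-memory buffering are illustrative assumptions only.
type StreamEvent =
  | { type: "text-delta"; text: string }
  | { type: "finish"; finishReason: string };

interface ChatStreamLike {
  // Each call to events() replays the stream from the beginning.
  events(): AsyncIterable<StreamEvent>;
  // Resolves with the final aggregated result.
  result: Promise<{ text: string }>;
}

function makeChatStream(chunks: string[], signal?: AbortSignal): ChatStreamLike {
  // Buffering every event is what makes replay possible.
  const buffered: StreamEvent[] = [
    ...chunks.map((text): StreamEvent => ({ type: "text-delta", text })),
    { type: "finish", finishReason: "stop" },
  ];
  return {
    async *events() {
      for (const ev of buffered) {
        // Honor a caller-provided AbortSignal for cancellation.
        if (signal?.aborted) throw new Error("aborted");
        yield ev;
      }
    },
    result: Promise.resolve({ text: chunks.join("") }),
  };
}
```

Because events are buffered, a caller can iterate `events()` twice (or await `result` first and then replay the stream) and observe the same sequence.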
@core-ai/openai
- Migrate default chat models to the OpenAI Responses API and add a `@core-ai/openai/compat` entrypoint for Chat Completions compatibility.
- Namespace provider options under the `openai` key with Zod validation. Responses API options: `store`, `serviceTier`, `include`, `parallelToolCalls`, `user`. Compat options: `stopSequences`, `frequencyPenalty`, `presencePenalty`, `seed`. Embed options: `encodingFormat`, `user`. Image options: `background`, `moderation`, `outputCompression`, `outputFormat`, `quality`, `responseFormat`, `style`, `user`.
- Update streaming adapters to expose replayable stream handles using `ChatStream` and `ObjectStream`.
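Under the namespaced scheme, a call site might look like this sketch. The option names come from the entry above and `GenerateProviderOptions` is the shared interface providers merge into; the exact field types and the Zod wiring are assumptions here:

```typescript
// Hypothetical reconstruction of the declaration-merging pattern.
// Field types are assumed; only the option names come from the changelog.
interface OpenAIGenerateProviderOptions {
  store?: boolean;
  serviceTier?: string;
  include?: string[];
  parallelToolCalls?: boolean;
  user?: string;
}

// A provider package would extend the core interface via declaration
// merging, adding its own namespaced key:
interface GenerateProviderOptions {
  openai?: OpenAIGenerateProviderOptions;
}

// Callers then pass provider-specific options under the provider's key,
// where the adapter can validate them (with Zod, per the entry above).
const providerOptions: GenerateProviderOptions = {
  openai: { store: true, parallelToolCalls: false, user: "doc-example" },
};
```

The namespacing means unknown or foreign-provider options are simply ignored by adapters that do not own the key, instead of colliding in a flat option bag.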
@core-ai/anthropic
- Namespace provider options under the `anthropic` key with Zod validation. Generate options: `topK`, `stopSequences`, `betas`, `outputConfig`.
- Restructure reasoning `providerMetadata` to use provider-namespaced keys.
- Update streaming adapters to expose replayable stream handles using `ChatStream` and `ObjectStream`.
@core-ai/google-genai
- Namespace provider options under the `google` key with strict Zod validation. Generate options: `stopSequences`, `frequencyPenalty`, `presencePenalty`, `seed`, `topK`. Embed options: `taskType`, `title`, `mimeType`, `autoTruncate`. Image options: `aspectRatio`, `personGeneration`, `safetyFilterLevel`, `negativePrompt`, `guidanceScale`, `seed`.
- Restructure reasoning `providerMetadata` to use provider-namespaced keys.
- Update streaming adapters to expose replayable stream handles using `ChatStream` and `ObjectStream`.
@core-ai/mistral
- Namespace provider options under the `mistral` key with Zod validation. Generate options: `stopSequences`, `frequencyPenalty`, `presencePenalty`, `randomSeed`, `parallelToolCalls`, `promptMode`, `safePrompt`. Embed options: `outputDtype`, `encodingFormat`, `metadata`.
- Restructure reasoning `providerMetadata` to use provider-namespaced keys.
- Update streaming adapters to expose replayable stream handles using `ChatStream` and `ObjectStream`.
All packages
- Fix release publish race: remove `prepublishOnly` to avoid concurrent tsup builds failing to resolve `@core-ai/core-ai`.
@core-ai/core-ai
- Add unified reasoning/thinking support with effort-based configuration.
- Breaking changes:
  - `AssistantMessage`: `content` and `toolCalls` fields replaced by a `parts: AssistantContentPart[]` array.
  - `StreamEvent`: `content-delta` renamed to `text-delta`; new `reasoning-start`, `reasoning-delta`, and `reasoning-end` events added.
  - `GenerateResult`: adds required `parts` and `reasoning` fields.
  - `ChatOutputTokenDetails.reasoningTokens`: changed from `number` to optional.
- New types: `ReasoningEffort`, `ReasoningConfig`, `AssistantContentPart`, `ReasoningPart`.
- New utilities: `resultToMessage()` for multi-turn reasoning state preservation, `assistantMessage()` for convenient message construction.
- New option: `reasoning?: ReasoningConfig` on `GenerateOptions`, `GenerateObjectOptions`, and `StreamObjectOptions`.
@core-ai/openai
- Add reasoning support for OpenAI models (Chat Completions API). Maps unified `reasoning.effort` to `reasoning_effort` with model-aware clamping. Extracts reasoning content from responses and streams. Validates parameter restrictions for GPT-5.1+ models.
@core-ai/anthropic
- Add reasoning support for Anthropic models with adaptive and manual thinking modes. Maps unified `reasoning.effort` to adaptive effort levels or a manual `budget_tokens` value based on model capabilities. Extracts thinking and redacted-thinking blocks with signature preservation for multi-turn fidelity.
@core-ai/google-genai
- Add reasoning support for Google GenAI models. Maps unified `reasoning.effort` to `thinkingLevel` for Gemini 3 or `thinkingBudget` for Gemini 2.5, based on model capabilities. Extracts thought content with thought-signature preservation for multi-turn fidelity.
@core-ai/mistral
- Add reasoning support for Mistral Magistral models. Extracts thinking chunks from response content and streams them as reasoning events.
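Taken together, the unified reasoning API might be used like this. `ReasoningConfig`, `resultToMessage()`, and the part types are named in the entries above, but the shapes below (including the effort values) are reduced stand-ins, not the real exports:

```typescript
// Reduced stand-ins for the unified reasoning API; names come from the
// changelog, exact shapes and effort values are assumptions.
type ReasoningEffort = "low" | "medium" | "high";

interface ReasoningConfig {
  effort: ReasoningEffort;
}

type AssistantContentPart =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string };

interface GenerateResult {
  parts: AssistantContentPart[];
  reasoning: string;
}

interface AssistantMessage {
  role: "assistant";
  parts: AssistantContentPart[];
}

// resultToMessage(): feed the assistant's parts (reasoning included) back
// into the conversation so providers can preserve multi-turn thinking state.
function resultToMessage(result: GenerateResult): AssistantMessage {
  return { role: "assistant", parts: result.parts };
}

// The new option on generate calls:
const options: { reasoning?: ReasoningConfig } = { reasoning: { effort: "high" } };
```

Carrying the reasoning parts back as a message is what lets the per-provider adapters above (thinking blocks, thought signatures, thinking chunks) restore state on the next turn.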
@core-ai/core-ai
- Refactor the core `ChatUsage` contract to nested detail objects for input and output token accounting.
- Breaking changes:
  - Remove `usage.totalTokens`.
  - Move `usage.reasoningTokens` to `usage.outputTokenDetails.reasoningTokens`.
  - Add `usage.inputTokenDetails.{cacheReadTokens, cacheWriteTokens}`.
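The reshaped shape can be reconstructed from the breaking-change list. Field names below come from the entries in this changelog; the optionality of the detail fields and the top-level `outputTokens` name are assumptions:

```typescript
// Reconstructed sketch of the nested ChatUsage contract; detail-field
// optionality and `outputTokens` are assumed for illustration.
interface ChatUsage {
  inputTokens: number;
  outputTokens: number;
  inputTokenDetails: {
    cacheReadTokens?: number;
    cacheWriteTokens?: number;
  };
  outputTokenDetails: {
    reasoningTokens?: number;
  };
}

// usage.totalTokens was removed, so callers now derive it themselves:
function totalTokens(usage: ChatUsage): number {
  return usage.inputTokens + usage.outputTokens;
}
```

Nesting the cache and reasoning counters keeps the top level stable while letting providers fill in only the details their API actually reports.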
@core-ai/openai
- Update usage mapping to the new nested `ChatUsage` structure. Maps `prompt_tokens_details.cached_tokens` to `inputTokenDetails.cacheReadTokens` and `completion_tokens_details.reasoning_tokens` to `outputTokenDetails.reasoningTokens`.
@core-ai/anthropic
- Update usage mapping to the new nested `ChatUsage` structure. Reports total `inputTokens` including cache tokens, and maps `cache_read_input_tokens` and `cache_creation_input_tokens` to `inputTokenDetails`.
@core-ai/google-genai
- Update usage mapping to the new nested `ChatUsage` structure. Maps `cachedContentTokenCount` to `inputTokenDetails.cacheReadTokens` and `thoughtsTokenCount` to `outputTokenDetails.reasoningTokens`.
@core-ai/mistral
- Update usage mapping to the new nested `ChatUsage` structure, with zero defaults for cache and reasoning token details.
@core-ai/core-ai
- Add first-class structured output support with `generateObject()` and `streamObject()` across core and all provider chat models. Introduces schema-driven typed object generation, structured-output streaming events, and standardized structured-output errors.
- Clarify embedding usage semantics by making `EmbedResult.usage` optional in the core API contract, so providers can return `usage: undefined` when token counts are not exposed by the underlying API.
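A minimal sketch of the schema-driven flow: the real `generateObject()` validates with Zod, but a hand-rolled validator stands in here so the example stays dependency-free, and `generateObjectFromText` is a hypothetical stand-in for the provider call:

```typescript
// Hypothetical, dependency-free stand-in for schema-driven object
// generation: parse the model's JSON, then validate it against a typed
// schema (the real API uses Zod schemas instead of ObjectSchema).
interface ObjectSchema<T> {
  parse(value: unknown): T;
}

const recipeSchema: ObjectSchema<{ name: string; minutes: number }> = {
  parse(value) {
    const v = value as { name?: unknown; minutes?: unknown };
    if (typeof v.name !== "string" || typeof v.minutes !== "number") {
      // Stands in for the standardized structured-output errors.
      throw new Error("structured output failed schema validation");
    }
    return { name: v.name, minutes: v.minutes };
  },
};

function generateObjectFromText<T>(schema: ObjectSchema<T>, raw: string): T {
  return schema.parse(JSON.parse(raw));
}
```

The payoff is at the call site: the returned object is typed by the schema, and malformed model output surfaces as a validation error rather than a silently wrong shape.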
@core-ai/openai
- Add structured output support with `generateObject()` and `streamObject()`.
@core-ai/anthropic
- Add structured output support with `generateObject()` and `streamObject()`.
@core-ai/google-genai
- Add structured output support with `generateObject()` and `streamObject()`.
- Update embedding behavior to only include usage when token statistics are present, and add provider E2E contract coverage.
@core-ai/mistral
- Add structured output support with `generateObject()` and `streamObject()`.
All packages
- Broaden Zod compatibility to support both Zod 3 and Zod 4 across all packages. Updates the published Zod ranges and raises the minimum `zod-to-json-schema` version to one that supports Zod 4, preventing peer-dependency conflicts for projects already using Zod 4.