Core AI makes streaming responses just as easy. Here’s how to stream text as it’s generated:
streaming.ts
```ts
import { stream } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const model = openai.chatModel('gpt-5-mini');

const result = await stream({
  model,
  messages: [
    { role: 'user', content: 'Write a short haiku about TypeScript.' },
  ],
});

// Stream each text chunk as it arrives
for await (const event of result) {
  if (event.type === 'text-delta') {
    process.stdout.write(event.text);
  }
}

// Get the complete response with metadata
const response = await result.toResponse();
console.log('\nFinish reason:', response.finishReason);
console.log('Usage:', response.usage);
```
The `toResponse()` method aggregates the stream into a complete response object, including metadata such as the finish reason and token usage. Call it after the stream has been fully consumed.
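A common pattern is to accumulate the `text-delta` chunks yourself while iterating, for example to update a UI incrementally. The sketch below illustrates that pattern with a mocked event stream, so it runs without the library or an API key; the `StreamEvent` type and `mockStream` generator are stand-ins for illustration, not part of the Core AI API.

```ts
// Hypothetical event shape mirroring the events used above (assumption).
type StreamEvent =
  | { type: 'text-delta'; text: string }
  | { type: 'finish'; finishReason: string };

// Mock async generator standing in for the real stream result,
// so the accumulation pattern can be shown without a network call.
async function* mockStream(): AsyncGenerator<StreamEvent> {
  yield { type: 'text-delta', text: 'Types flow like ' };
  yield { type: 'text-delta', text: 'quiet rivers' };
  yield { type: 'finish', finishReason: 'stop' };
}

// Accumulate text-delta chunks into the full generated text.
async function collectText(events: AsyncIterable<StreamEvent>): Promise<string> {
  let text = '';
  for await (const event of events) {
    if (event.type === 'text-delta') {
      text += event.text;
    }
  }
  return text;
}

collectText(mockStream()).then((text) => console.log(text));
// → Types flow like quiet rivers
```

Because the stream result is an async iterable, any `for await...of` loop works; the real stream from `stream()` can be dropped in wherever `mockStream()` appears here.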