AI Initialization Failure, streamText Not Generating Output, and Long Request Time #5597

Open
Howmany-Zeta opened this issue Apr 8, 2025 · 2 comments
Labels
bug Something isn't working

Comments

@Howmany-Zeta

Description

When using the streamText function, I encountered the following issues:

  1. Long Request Time for streamText: The streamText method takes an excessively long time to respond and does not generate any streaming output. See the attached screenshot: [Pasted image 20250408153041.png].
  2. Suspected Cause - AI Initialization Failure: After adding logging to the index file, no errors were thrown, but there is also no confirmation of successful AI initialization.

Steps to Reproduce:

  1. Main code in index.ts:

    import { createXai } from '@ai-sdk/xai';
    import { wrapLanguageModel, LanguageModelV1, streamText as aiStreamText, Message } from "ai";
    import { customMiddleware } from "./custom-middleware";
    import * as dotenv from 'dotenv';
    
    // Load environment variables
    dotenv.config();
    
    // File loading log
    console.log('[Index] File loaded');
    
    // Logging function
    const log = (message: string, data?: any) => {
      console.log(`[AI Config] ${message}`, data || '');
    };
    
    // Check environment variables
    log('Environment variables:', {
      XAI_API_KEY: process.env.XAI_API_KEY ? process.env.XAI_API_KEY.slice(0, 4) + '...' : undefined,
    });
    
    export const xai = createXai({});
    export const xaiChatModel = wrapLanguageModel({
      model: xai("grok-2-1212"),
      middleware: customMiddleware,
    });
    
    // Check XAI_API_KEY
    if (!process.env.XAI_API_KEY) {
      throw new Error('XAI_API_KEY environment variable is not set');
    }
    
    // Add log: Confirm model initialization
    console.log('AI Config - Models initialized:', {
      xaiChatModel: 'grok-2-1212',
    });
  2. Main code in route.ts, calling the streamText method:

    import { convertToCoreMessages, Message, streamText } from "ai";
    import { xaiChatModel } from "@/ai";
    
    export async function POST(request: Request) {
      const { id, messages }: { id: string; messages: Array<Message> } =
        await request.json();
      
      const session = await auth();
    
      if (!session) {
        return new Response("Unauthorized", { status: 401 });
      }
    
      const coreMessages = convertToCoreMessages(messages).filter(
        (message) => message.content.length > 0,
      );
    
      try {
        // Add log: Print proxy settings (ensure they are applied)
        console.log('Route - Proxy settings:', {
          HTTP_PROXY: process.env.HTTP_PROXY,
          HTTPS_PROXY: process.env.HTTPS_PROXY,
        });
    
        console.log("Starting to stream text...");
        console.log("Core messages:", coreMessages);
        console.log("Session user:", session.user);
        console.log("Session user ID:", session.user?.id);
    
        const result = await streamText({
          model: xaiChatModel('grok-2-1212'),
          system: `
            - You are a helpful and intelligent AI assistant. Your role is to assist users with a wide range of tasks and questions. Follow these guidelines:
            - Analyze the user's input carefully and determine the intent.
            - Break down complex tasks into smaller, manageable steps if needed.
            - Provide clear, concise, and accurate responses.
            - If the user asks for something that requires external data or actions, use the available tools.
            - If a task is unclear, ask clarifying questions to better understand the user's needs.
            - Maintain a conversational tone, and be polite and professional.
            - Support multi-turn conversations by keeping track of the context.
            - Do not make assumptions about the user's intent unless explicitly stated.
            - If a task requires user input, wait for the user to respond before proceeding.
          `,
    
          messages: coreMessages,
          onFinish: async ({ responseMessages }) => {
            if (session.user && session.user.id) {
              try {
                await saveChat({
                  id,
                  messages: [...coreMessages, ...responseMessages],
                  userId: session.user.id,
                });
              } catch (error) {
                console.error("Failed to save chat");
              }
            }
          },
    
          experimental_telemetry: {
            isEnabled: true,
            functionId: "stream-text",
          },
        });
    
        return result.toDataStreamResponse({});
      } catch (error) {
        console.error("Route - Error while streaming text:", error);
        return new Response("Internal Server Error", { status: 500 });
      }
    }

Expected Behavior:

  • The AI initialization should complete successfully without errors and output a success log.
  • The streamText request should return results within a reasonable time (e.g., 2-5 seconds).

Actual Behavior:

  • No errors are thrown, but the client does not receive any streamText response (screenshot attached).

Environment Information:

  • ai-sdk Version: v4.2.10
  • @ai-sdk/xai Version: v1.0.0
  • Node.js Version: v22.14.0
  • Operating System: Windows 11

Additional Information:

  • The XAI API key has been verified as valid and works in other tools.
  • No obvious network interruptions or server errors were found in the logs.

Suggestions or Questions:

  • Are there any parameters or configuration options that would avoid the suspected initialization failure, or a way to log and confirm whether the setup in index.ts actually succeeded? (See the sketch after this list.)
  • Are there known optimizations or fixes for the slow streamText response?
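
One way to get an explicit success or failure signal for the setup in index.ts is a one-off round trip through generateText at startup. The sketch below is only illustrative: it assumes xaiChatModel is importable from "@/ai" as in the snippets above, and checkModel is a hypothetical helper, not an SDK API.

    // Sketch: one-off startup check that confirms the provider can complete a
    // round trip. Assumes xaiChatModel is the wrapped model exported from index.ts.
    import { generateText } from "ai";
    import { xaiChatModel } from "@/ai";

    async function checkModel() {
      try {
        const { text } = await generateText({
          model: xaiChatModel,
          prompt: "Reply with the single word: pong",
        });
        console.log("[AI Config] Model check succeeded:", text.slice(0, 40));
      } catch (error) {
        // Unlike streamText, generateText rejects on failure, so problems surface here.
        console.error("[AI Config] Model check failed:", error);
      }
    }

    checkModel();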

Thank you to the team for your help!

Code example

No response

AI provider

No response

Additional context

No response

Howmany-Zeta added the bug label on Apr 8, 2025
@lgrammel
Collaborator

lgrammel commented Apr 8, 2025

Please check for errors with the onError callback: https://sdk.vercel.ai/docs/ai-sdk-core/generating-text#onerror-callback
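
For reference, a minimal sketch of how that callback could be wired into the streamText call from route.ts (model and messages taken from the snippets above, with the wrapped model object passed directly; only onError is new):

    // Inside the POST handler from route.ts: the same streamText call with an
    // onError callback so that stream errors are at least logged on the server.
    const result = streamText({
      model: xaiChatModel, // wrapped model exported from index.ts
      system: "...", // same system prompt as above, abbreviated here
      messages: coreMessages,
      onError: ({ error }) => {
        // streamText does not throw; errors during streaming surface through this callback.
        console.error("streamText error:", error);
      },
    });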

@nicoalbanese
Contributor

Hmm - something must be blocking the stream.

Have you tried without the customMiddleware? Can you also try removing the onFinish callback and the await keyword from streamText, as the await is no longer necessary?
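
As an illustration of that suggestion, a stripped-down handler could look like the sketch below. It is an isolation test rather than a confirmed fix: it calls the provider model directly (bypassing customMiddleware), drops onFinish, auth, and chat persistence, and does not await streamText.

    // Sketch: minimal POST handler for isolating the hang. It uses the xAI
    // provider model directly (no wrapLanguageModel/customMiddleware), omits
    // onFinish, and does not await streamText, which returns synchronously.
    import { convertToCoreMessages, streamText, type Message } from "ai";
    import { createXai } from "@ai-sdk/xai";

    const xai = createXai({}); // reads XAI_API_KEY from the environment by default

    export async function POST(request: Request) {
      const { messages }: { messages: Array<Message> } = await request.json();

      const result = streamText({
        model: xai("grok-2-1212"),
        messages: convertToCoreMessages(messages),
        onError: ({ error }) => console.error("streamText error:", error),
      });

      return result.toDataStreamResponse();
    }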
