How to make a conversational application created using Dify use streaming output? #3767
Hey there, @majestichou! Great to see you diving into another interesting challenge.

To enable streaming output for a conversational application when integrating with the Qwen API, you can set `"response_mode": "streaming"` in the request body. This is demonstrated in the sample call for chat-messages in the Dify API documentation.

As for the underlying model invocation, the relevant code looks like this:

```python
llm_result: LLMResult = model_instance.invoke_llm(
    prompt_messages=prompt_messages,
    model_parameters=app_orchestration_config.model_config.parameters,
    tools=[],
    stop=app_orchestration_config.model_config.stop,
    stream=True,  # Enable streaming
    user=self.user_id,
    callbacks=[],
)
```

In this example, setting `stream=True` enables streaming of the model's response.

Unfortunately, I wasn't able to find specific information on whether the …
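To consume the streaming output from a client, here is a minimal sketch of a call to the chat-messages endpoint with `"response_mode": "streaming"`. The base URL, API key, and user ID below are placeholders you would replace with your own values; the request fields follow the Dify API documentation referenced above.

```python
import json
import requests

# Placeholder values: point these at your own Dify deployment and app API key.
DIFY_BASE_URL = "http://localhost/v1"
DIFY_API_KEY = "app-xxxxxxxxxxxx"


def stream_chat(query: str, user: str = "user-123") -> None:
    """Call the Dify chat-messages endpoint in streaming mode and print chunks as they arrive."""
    resp = requests.post(
        f"{DIFY_BASE_URL}/chat-messages",
        headers={
            "Authorization": f"Bearer {DIFY_API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "inputs": {},
            "query": query,
            "response_mode": "streaming",  # request server-sent events instead of a blocking response
            "conversation_id": "",
            "user": user,
        },
        stream=True,  # let requests yield the response body incrementally
    )
    resp.raise_for_status()

    for line in resp.iter_lines(decode_unicode=True):
        # Each SSE event is a line of the form: data: {...}
        if not line or not line.startswith("data: "):
            continue
        event = json.loads(line[len("data: "):])
        # "message" events carry incremental pieces of the answer
        if event.get("event") == "message":
            print(event.get("answer", ""), end="", flush=True)


if __name__ == "__main__":
    stream_chat("Hello, can you stream your answer?")
```

Each `message` event carries an incremental piece of the answer, so printing chunks as they arrive produces the streaming effect on the client side.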
Provide a description of the help you need
How can I make a conversational application created with Dify use streaming output? I deployed Dify with Docker Compose, following the official Docker Compose Deployment docs.
I created an application in Dify using the Qwen API, but when I talk to the large language model, the AI's answer is not streamed.
Does Dify have a configuration parameter that enables streaming output?