feat: implement v1/chat/completions endpoint as stream consumption
#186
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
@LeadcodeDev Thank you for adding streaming, and for the pull request! This kind of error was returned. Could you look into it?
Hello 👋 This error isn't actually an error. I deliberately withhold reporting it, because we don't want to handle it at that point; instead we wait for the next iterations of the stream to aggregate the rest of the response.
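The aggregation idea described above can be sketched roughly as follows. This is a minimal standalone illustration, not the crate's actual code: `StreamEvent` and `aggregate` are hypothetical stand-ins for the stream's event enum and the caller's accumulation loop.

```rust
// Hypothetical stand-in for the crate's stream response enum.
#[derive(Debug, PartialEq)]
enum StreamEvent {
    Content(String),
    Done,
}

/// Fold partial content deltas into one full response, stopping at `Done`.
/// Partial or malformed chunks are not treated as errors mid-stream;
/// later iterations complete the response.
fn aggregate(events: &[StreamEvent]) -> String {
    let mut full = String::new();
    for event in events {
        match event {
            StreamEvent::Content(delta) => full.push_str(delta),
            StreamEvent::Done => break,
        }
    }
    full
}

fn main() {
    let events = vec![
        StreamEvent::Content("Bitcoin is ".to_string()),
        StreamEvent::Content("a cryptocurrency.".to_string()),
        StreamEvent::Done,
    ];
    println!("{}", aggregate(&events)); // prints "Bitcoin is a cryptocurrency."
}
```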
Was it expected that the "Content" section was empty and nothing was displayed?
I have tried with Gemini and a custom URL:

```rust
use std::env;

use futures_util::StreamExt;
// Module paths assume the layout introduced by this PR.
use openai_api_rs::v1::api::OpenAIClient;
use openai_api_rs::v1::chat_completion::{
    self, ChatCompletionStreamRequest, ChatCompletionStreamResponse,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = env::var("OPENAI_API_KEY")?;
    let mut client = OpenAIClient::builder()
        .with_api_key(api_key)
        .with_endpoint("https://generativelanguage.googleapis.com/v1beta/openai/")
        .build()?;

    let req = ChatCompletionStreamRequest::new(
        "gemini-2.5-flash".to_string(),
        vec![chat_completion::ChatCompletionMessage {
            role: chat_completion::MessageRole::user,
            content: chat_completion::Content::Text(String::from("What is bitcoin?")),
            name: None,
            tool_calls: None,
            tool_call_id: None,
        }],
    );

    let mut result = client.chat_completion_stream(req).await?;
    while let Some(response) = result.next().await {
        match response {
            ChatCompletionStreamResponse::ToolCall(toolcalls) => {
                println!("Tool Call: {:?}", toolcalls);
            }
            ChatCompletionStreamResponse::Content(content) => {
                println!("Content: {:?}", content);
            }
            ChatCompletionStreamResponse::Done => {
                println!("Done");
            }
        }
    }
    Ok(())
}
```

I don't have a ChatGPT key to try it like you did 😢
EDIT: Could you try with other models to see whether the problem persists?
I found the cause of the error with the OpenAI API: a single chunk contained multiple lines of data like this. Since I've identified the cause, I'll go ahead and merge this PR for now, and then make the fix on my end.
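For anyone hitting the same thing: in server-sent events, one network read can carry several `data:` lines, so a parser that only looks at the first line drops tokens. A minimal sketch of splitting a raw chunk into its payloads (the `data_payloads` helper is hypothetical, not the crate's API, and it only handles the common `data: ` prefix form):

```rust
/// Split a raw SSE chunk into its individual `data:` payloads.
/// A single read may contain several `data:` lines; the `[DONE]`
/// sentinel marks the end of the stream and is filtered out.
fn data_payloads(chunk: &str) -> Vec<&str> {
    chunk
        .lines()
        .filter_map(|line| line.strip_prefix("data: "))
        .filter(|payload| *payload != "[DONE]")
        .collect()
}

fn main() {
    let chunk = "data: {\"id\":1}\n\ndata: {\"id\":2}\n\ndata: [DONE]\n\n";
    // Both payloads are recovered from the one chunk.
    println!("{:?}", data_payloads(chunk)); // prints ["{\"id\":1}", "{\"id\":2}"]
}
```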
Thank you very much for accepting my PR (even though it wasn't 100% functional, believe it or not...) and for the patch you made afterwards. Looking forward to contributing again!
Hello (again) 👋
Following my first contribution to this project, I'm taking the liberty of submitting a second one, adding streaming support on the v1/chat/completions endpoint, which also addresses the request expressed in #167. Looking forward to discussing it if needed.
Have a nice day!