
[Bug]: [autogen_ext] "ValueError: No stop reason found" always raised when llm usage returned #4875

Open
nomagicln opened this issue Jan 1, 2025 · 0 comments

Describe the bug
When executing the sample code below, the error "ValueError: No stop reason found" is always raised.

Steps to reproduce

import asyncio

from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core.models import UserMessage, ModelInfo, ModelFamily

client = OpenAIChatCompletionClient(
    model="deepseek-chat",
    base_url="https://api.deepseek.com/v1",
    api_key="sk-xxxxx",
    model_info=ModelInfo(
        vision=False,
        function_calling=False,
        json_output=False,
        family=ModelFamily.UNKNOWN,
    ),
)


async def main():
    messages = [
        UserMessage(content="OUTPUT 1 only", source="user"),
    ]

    # Create a stream.
    stream = client.create_stream(messages=messages)

    # Iterate over the stream and print the responses.
    print("Streamed responses:")
    async for response in stream:  # type: ignore
        if isinstance(response, str):
            # A partial response is a string.
            print(response, flush=True, end="")
        else:
            # The last response is a CreateResult object with the complete message.
            print("\n\n------------\n")
            print("The complete response:", flush=True)
            print(response.content, flush=True)
            print("\n\n------------\n")
            print("The token usage was:", flush=True)
            print(response.usage, flush=True)


if __name__ == "__main__":
    asyncio.run(main())

Model Used
deepseek-chat

Expected Behavior
The stream completes without error, and the final response, including token usage, is printed.

Screenshots and logs
No response

Additional Information

I found the place where the problem occurs. According to the code logic here, choice.finish_reason is assigned to stop_reason only when chunk.usage is None and stop_reason is also None. It is unclear why the chunk.usage guard exists: when the provider returns usage on the same chunk that carries the finish reason (which appears to be the case for deepseek-chat), stop_reason is never set, and the client then raises "ValueError: No stop reason found".

https://github.com/microsoft/autogen/blame/8a83262a905675f15dfca387ba4a8de7b6cf0635/python/packages/autogen-ext/src/autogen_ext/models/openai/_openai_client.py#L695

stop_reason = None
# ...
stop_reason = choice.finish_reason if chunk.usage is None and stop_reason is None else stop_reason
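
To illustrate the failure mode, here is a small self-contained walkthrough of that condition. The chunk values are assumed for illustration only, not captured from a real DeepSeek response:

# Hypothetical chunk sequence; the final chunk carries both usage and finish_reason.
chunks = [
    {"usage": None, "finish_reason": None},                   # streamed content
    {"usage": {"total_tokens": 5}, "finish_reason": "stop"},  # final chunk
]

stop_reason = None
for chunk in chunks:
    # Same condition as in _openai_client.py: the guard on usage means the
    # final chunk's finish_reason is ignored whenever usage is present.
    stop_reason = (
        chunk["finish_reason"]
        if chunk["usage"] is None and stop_reason is None
        else stop_reason
    )

print(stop_reason)  # prints None -> "No stop reason found" is raised downstream

A minimal sketch of a possible fix (my suggestion, not a confirmed patch) would be to drop the usage guard and simply keep the first non-None finish reason:

# Record the finish_reason from the first chunk that carries one,
# regardless of whether that chunk also reports usage.
if stop_reason is None and choice.finish_reason is not None:
    stop_reason = choice.finish_reason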