Other LLM models support? #8
Comments
Thanks for your attention. Other LLMs are great, but support for them is unfortunately not planned at the moment, since I haven't really used them myself. Hopefully someone else can submit a PR for this!
I tested Ollama and it works fine. Right now every subscription requires filling in the model manually; if possible, I'd like a global setting for custom models so they can simply be picked from a dropdown. Being able to set a default model would be even better.
Hi! How about using litellm for model support? Note: there seem to be models that cannot accept `response_format: {"type": "json"}`.
Thanks. I will check out the repo when I have time. In the meantime, since I am not familiar with Azure OpenAI: do you need to set the base_url to call the Azure OpenAI models?
I think it will be fine. I have set up a fallback option that calls the AI model without JSON mode.
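The fallback described above could look something like the sketch below. This is a hypothetical helper, not the project's actual code, and it assumes an OpenAI-compatible client object with a `chat` method:

```python
# Minimal sketch of a JSON-mode fallback, assuming an OpenAI-compatible
# client. `complete_with_fallback` and the `chat` signature are
# illustrative names, not the project's real API.

def complete_with_fallback(client, model, messages):
    """Try JSON mode first; retry without it if the model rejects it."""
    try:
        # Ask for structured output where the backend supports it.
        return client.chat(
            model=model,
            messages=messages,
            response_format={"type": "json_object"},
        )
    except Exception:
        # Some models (and some proxy backends) reject response_format
        # entirely, so fall back to a plain completion.
        return client.chat(model=model, messages=messages)
```

The caller then parses the response as JSON itself and can re-prompt if the plain completion does not come back well-formed.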
Yes. Azure OpenAI requires `base_url`, `api_key`, `api_version`, and `deployment_name`. Using LiteLLM, these variables can be configured via environment variables.
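For reference, LiteLLM's Azure setup maps those four values roughly as follows (the endpoint and API version shown are placeholders, not real values):

```shell
# Azure OpenAI via LiteLLM, configured through environment variables.
export AZURE_API_KEY="<your-key>"                              # api_key
export AZURE_API_BASE="https://<resource>.openai.azure.com/"   # base_url
export AZURE_API_VERSION="<api-version>"                       # api_version

# The deployment name is carried in the model string instead of an
# environment variable, e.g.:
#   litellm.completion(model="azure/<deployment_name>", messages=...)
```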
Hey, thanks for your hard work.
Do you have a plan for this project to support other LLM APIs or local models,
such as Kimi, Tongyi Qianwen (通义千问), iFlytek Spark (讯飞星火), or Ollama?