Is Qwen-Code a good reference? #331
LLxprt was forked earlier, Qwen later. We took very different architectural approaches to the same issues. I don't think either supports Qwen3 Coder 480B better or worse at this point. However, LLxprt supports more models (GLM 4.6, for instance) and local models, and is more customizable. So if you want to use a small Qwen model, you can shrink the prompts and fit them to that model.

Both of us cherry-pick features from gemini-cli. Qwen became more selective earlier than LLxprt; now we're moving in that direction as well, because the Google Code Assist team that works on gemini-cli is focused on tighter integration with Google's ecosystem rather than core functionality.

LLxprt supports OAuth to Qwen for the free tier, but also Gemini and Anthropic (Claude Pro/Max). LLxprt also lets you do things like change the model settings and temperature in a running session.

Long term, we think you're better off on LLxprt, because Qwen is no longer the top open-weight model (in preliminary evals, GLM 4.6 outperforms it). We're also headed in different directions: LLxprt is headed not just toward more multi-model/provider support (including subagents) but toward autonomous development, while Qwen is following more of the Gemini/Claude path.

Also, we're a bit more open to contributions. Each PR is reviewed and considered. Sometimes we take a different architectural approach, but we try to make sure every user request is met if it's in scope! Anyhow, that's my opinion; YMMV.
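To make the "change settings in a running session" point concrete, here's a minimal conceptual sketch of how a CLI can keep generation parameters mutable per session. This is not LLxprt's actual code: the `Session` type, field names, and the `/set`-style command mentioned in the comments are all hypothetical.

```typescript
// Hypothetical sketch: per-session, mutable generation settings.
// None of these names come from LLxprt's codebase.
interface SessionSettings {
  provider: string;   // e.g. "openai-compatible", "anthropic" (placeholders)
  model: string;      // e.g. "qwen3-coder" (placeholder id)
  temperature: number;
}

class Session {
  constructor(public settings: SessionSettings) {}

  // A REPL command like "/set temperature 0.2" could call this
  // mid-session, so the next request uses the new value without
  // restarting the CLI.
  update(patch: Partial<SessionSettings>): void {
    this.settings = { ...this.settings, ...patch };
  }
}

const session = new Session({
  provider: "openai-compatible",
  model: "qwen3-coder",
  temperature: 0.7,
});

session.update({ temperature: 0.2 }); // tighter sampling for the next turn
```

The design point is simply that settings live on the session object rather than being baked in at startup, which is what makes mid-session changes cheap.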
Qwen-Code seems to use open-standard APIs, similar to LLxprt, but I'm not sure how the two differ in function. Two different forks, but one made to be truly non-partisan to any API: https://github.com/QwenLM/qwen-code?tab=readme-ov-file#qwen-code
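For context on what "open-standard APIs" means in practice: both forks can talk to any backend that exposes the OpenAI-style chat completions endpoint. A minimal sketch follows; the localhost base URL and model name are placeholders for whatever server (llama.cpp, Ollama, vLLM, a hosted provider) you point it at.

```typescript
// Minimal request against an OpenAI-compatible /v1/chat/completions
// endpoint. The base URL and model id are placeholders; any server
// implementing the open standard should accept this same shape.
const baseUrl = "http://localhost:8080/v1"; // placeholder local server

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen3-coder", // placeholder model id
      messages: [{ role: "user", content: prompt }],
      temperature: 0.2,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

chat("Explain this function.").then(console.log);
```

Because the wire format is the same everywhere, the forks differ mainly in what they layer on top (provider switching, prompt handling, session settings), not in how they reach the model.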