Description
Before submitting your bug report
- I've tried using the "Ask AI" feature on the Continue docs site to see if the docs have an answer
- I believe this is a bug. I'll try to join the Continue Discord for questions
- I'm not able to find an open issue that reports the same bug
- I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS: Windows
- Continue version: 1.2.8
- IDE version: 1.104.2
- Model: qwen3:14b@ollama
- config:
```yaml
models:
  - name: qwen3:4b-nothink - ollama
    provider: ollama
    model: qwen3:4b
    capabilities: [tool_use]
    roles:
      - chat
      - edit
      - autocomplete
      - apply
      - summarize
    requestOptions:
      extraBodyProperties:
        think: false
        keep_alive: -1
    defaultCompletionOptions:
      contextLength: 6144
      maxTokens: 2048
```
Description
As of this version, models with thinking capabilities no longer emit their thought process. Thinking has been turned off for all models and cannot be re-enabled.
As the debug output shows, the options Continue sends to Ollama disable the model's thinking by default, even when the user has not configured it that way:
....
Options
```json
{
  "contextLength": 16384,
  "maxTokens": 6144,
  "model": "qwen3:14b",
  "stop": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "reasoning": false
}
```
Normally, there should be no need to disable the thinking process for each model by default; that choice should be left to the user. However, I have not been able to find any related configuration option.
A recent code change in this extension may have mistakenly disabled thinking for all models. I hope this can be fixed soon, and that users will be allowed to enable thinking themselves. Thinking should be enabled by default for every model, unless the model is specifically designated for autocomplete or for generating titles.
To reproduce
- Use an ollama model with thinking capability
- Have a conversation
- Observe that the expected thinking process no longer appears
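As a possible workaround (untested; the injected `"reasoning": false` may still take precedence), one could try explicitly opting back in via `requestOptions.extraBodyProperties`, mirroring the config shown above. The model name here is illustrative:

```yaml
models:
  - name: qwen3:14b - ollama
    provider: ollama
    model: qwen3:14b
    requestOptions:
      extraBodyProperties:
        think: true   # ask Ollama to keep the thinking output
```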
Log output
CONTINUE CONSOLE:
...
Options
```json
{
  "contextLength": 16384,
  "maxTokens": 6144,
  "model": "qwen3:14b",
  "stop": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "reasoning": false
}
```
...