Local ollama models not being used/overridden by Claude or GPT #7724
Unanswered
willyB-sys
asked this question in Help
Replies: 1 comment
-
Sorry to hear you are having trouble @willyB-sys, and I appreciate all the info you shared. Below is my current config, which works; I'm curious whether you can compare and contrast it with yours, since the root of mine is similar to what you shared. Could you also share which extension version you are on? Mine is pre-release 1.3.10.

    name: bdougie Continue Config
    version: 0.0.1
    context:
      - provider: diff
      - provider: currentFile
      - provider: codebase
      - provider: folder
      - provider: docs
      - provider: tree
      - provider: terminal
      - provider: url
    models:
      - name: Autodetect
        provider: ollama
        model: AUTODETECT
        roles:
          - chat
          - edit
          - apply
          - rerank
          - autocomplete
        capabilities:
          - tool_use
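One more thing worth checking: if your Ollama server is not listening on the default address, I believe the model entry also needs an explicit apiBase. Here is a rough sketch, assuming the standard Ollama port 11434 (adjust the host/port if your server runs elsewhere):

    models:
      - name: Autodetect
        provider: ollama
        model: AUTODETECT
        apiBase: http://localhost:11434   # assumed default; point this at your actual Ollama host/port
        roles:
          - chat
          - edit
          - apply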
-
I am about to pull my hair out.
I am running VSCode with the Continue extension on Ubuntu.
🛠️ System:
- Motherboard: ASUS WS C621E SAGE
- (2) Intel Xeon Platinum 8180 CPUs
- 256 GB RAM
- 4 TB M.2 storage
- (3) RTX 3090 GPUs, 24 GB VRAM each (72 GB VRAM total)
I have a working local ollama instance.
    ollama list
    NAME                  ID              SIZE
    qwen3-coder:30b       2d66cfbce738    32 GB
    deepseek-r1:32b       edba8017331d    19 GB
    qwen2.5-coder:1.5b    d7372fd82851    986 MB
    codellama:7b          8fdf8f752f6e    3.8 GB
I have tested all models from the terminal and all run.
I have tried multiple ~/.continue/config.yaml settings with no luck correcting it. Here is the current one (created through the 'Add Chat Model' interface):
    name: qwen3-coder 30b
    version: 1.1.0
    schema: v1
    models:
      provider: ollama
      model: qwen3-coder:30b
      roles:
      capabilities:
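For reference, here is my best guess at what a fully expanded entry would look like, mirroring the structure of the working config in the reply above (the list dash under models, the model display name, and the roles/capabilities values are assumptions on my part, not verified against the current schema):

    name: qwen3-coder 30b
    version: 1.1.0
    schema: v1
    models:
      - name: qwen3-coder 30b       # display name shown in the model picker (assumed)
        provider: ollama
        model: qwen3-coder:30b      # must match the tag from `ollama list`
        roles:
          - chat
          - edit
          - apply
          - autocomplete
        capabilities:
          - tool_use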
I have also tried AUTODETECT, again with no luck.
I have also used Continue's 'Add Chat Model +' option to configure Ollama with my model.
I have tried using qwen3-coder, deepseek-r1, qwen2.5 and codellama - all without success.
I can get my model to display under the Models select list. But when I chat, it does NOT connect with my local model; it makes a call to either Claude or GPT.
I am out of ideas on what to change. I want to run this entirely locally. It is not clear to me why Continue is making external calls to models I have not selected, or why it is NOT engaging with my local models.