I've already run into the problem several times that CodeCompanion loads too much into the context. For example, I try to build a workspace file, VectorCode adds some files to the context, and bam: token limit exceeded. There also doesn't seem to be any way to recover from that as far as I can tell (though I've only just started using CodeCompanion, so I may have missed it).

Is there any way to tell CodeCompanion: don't exceed this token limit?

Replies: 1 comment

Unfortunately I don't think there's much that CodeCompanion can do here. You can delete context from the chat buffer, and we only return token counts from the LLM endpoints once we receive a response back. A way around this would be to open up multiple chat buffers and work out the maximum amount of context you can get away with. Sadly, this is just one of the downsides of LLMs at this moment in time.
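As a stopgap, you could approximate the check client-side before sending anything. Below is a minimal sketch (not a CodeCompanion API; `estimate_tokens` and `TOKEN_BUDGET` are hypothetical names) that estimates a buffer's token count with the rough ~4-characters-per-token heuristic and warns when a budget is exceeded:

```lua
-- Rough client-side token estimate: ~4 characters per token is a common
-- heuristic for English text. This is a sketch, not a CodeCompanion API.
local function estimate_tokens(bufnr)
  local lines = vim.api.nvim_buf_get_lines(bufnr, 0, -1, false)
  local chars = 0
  for _, line in ipairs(lines) do
    chars = chars + #line + 1 -- +1 for the newline
  end
  return math.ceil(chars / 4)
end

-- Hypothetical budget; set it below your model's context window.
local TOKEN_BUDGET = 8000

-- Check the current buffer (e.g. the chat buffer) before sending.
local estimate = estimate_tokens(0)
if estimate > TOKEN_BUDGET then
  vim.notify(
    ("Estimated context is ~%d tokens (budget %d); trim some context")
      :format(estimate, TOKEN_BUDGET),
    vim.log.levels.WARN
  )
end
```

You could wire something like this into a keymap or autocmd as a pre-flight check; it won't match the provider's tokenizer exactly, but it gives an early warning before the request fails.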