Qwen2.5-Coder-7B
(sh.itjust.works)
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
I have found the problem with the cutoff: by default, aider only sends 2048 tokens to ollama, which is why I hadn't noticed it anywhere else except for coding.
Running `/tokens` in aider shows the chat's token counts (screenshot omitted), even though aider will only send 2048 tokens to ollama.
To fix it, I needed to add a `.aider.model.settings.yml` file to the repository (file contents omitted in this copy).

That's because ollama's default max ctx is 2048, as far as I know.
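For reference, a minimal `.aider.model.settings.yml` along these lines can raise the context size aider requests from ollama. The model name and `num_ctx` value below are placeholders, not the poster's exact settings:

```yaml
# Hypothetical example: raise the context window aider passes to ollama.
# Model name and num_ctx are assumptions, not the original post's values.
- name: ollama/qwen2.5-coder:7b
  extra_params:
    num_ctx: 8192  # tokens; ollama defaults to 2048 when this is unset
```

With this file in the repository root, aider passes `num_ctx` through to ollama for the named model instead of relying on ollama's 2048-token default.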