Context: I tried to send chunked texts to an embedding model using Ollama and got the following error:
.venv/lib/python3.12/site-packages/ollama/_client.py", line 120, in _request_raw
raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: {}
It was not clear what had happened, so I had to inspect the failed batch of examples to diagnose it. Eventually I figured out that my chunk size limit was higher than the model's allowed input size, so the server returned an error for those overly long inputs.
Would it be possible to produce a more helpful error message in such cases?
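As a workaround until the client surfaces a clearer reason, the chunks can be validated before they are sent to the embeddings endpoint. A minimal sketch; the helper name and the 4-characters-per-token heuristic are illustrative assumptions, not part of Ollama's API:

```python
def check_chunks(chunks: list[str], max_tokens: int, chars_per_token: int = 4) -> None:
    """Raise a descriptive error for chunks likely to exceed the model's input limit.

    The chars_per_token ratio is a rough heuristic for English text; a real
    tokenizer for the specific embedding model would give exact counts.
    """
    max_chars = max_tokens * chars_per_token
    for i, chunk in enumerate(chunks):
        if len(chunk) > max_chars:
            raise ValueError(
                f"chunk {i} is ~{len(chunk) // chars_per_token} tokens, "
                f"over the assumed limit of {max_tokens}; "
                "shorten the chunk before embedding"
            )
```

Calling this before the embed request turns an opaque empty `ResponseError` into a message that names the offending chunk.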
Hey @gamer-mitsuha - sorry about that. It seems the Ollama server didn't send an error body back, or we failed to capture it. Thanks for raising this - will take a look!