Support for local LLM runs via llama.cpp server? #4345
victorx98 started this conversation in Suggestion
Replies: 0
- Tried various approaches (OpenAI-compatible endpoints, LocalAI, etc.), but always ran into some authentication issue.
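
For reference, a common way to wire an OpenAI-compatible client to a local llama.cpp server is to point the client's `base_url` at the server and pass a dummy API key, since many clients reject an empty key even though the local server requires none. A minimal sketch, assuming `llama-server` is already running locally (the port, model name, and key below are illustrative placeholders, not values from this discussion):

```python
# Assumes llama.cpp's built-in server is running locally, e.g.:
#   ./llama-server -m model.gguf --port 8080
# which exposes an OpenAI-compatible API under /v1.
from openai import OpenAI

# llama.cpp's server does not check credentials by default, but the OpenAI
# client insists on a non-empty api_key, so any placeholder string works.
client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="sk-no-key-required",  # dummy value; not a real key
)

response = client.chat.completions.create(
    model="local-model",  # llama.cpp typically serves whatever model it loaded
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```

If an authentication error still appears with a setup like this, it usually means the client is sending an empty key or the tool in question hard-codes a call to the real OpenAI endpoint instead of honoring the configured base URL.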