Tutorial - Running Autogen using a local LLM running in oobabooga WebUI served via LiteLLM #237
deronkel82 started this conversation in Show and tell
Assuming you have Python, Autogen, and the oobabooga WebUI installed and running fine:
In the folder where the WebUI is installed, run the WebUI with the openai extension enabled:
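The exact command was not preserved here; a minimal sketch, assuming a standard text-generation-webui install where the bundled openai extension is available (adjust loader and port flags to your setup):

```bash
# run from the folder where the WebUI is installed
python server.py --extensions openai --model TheBloke_WizardCoder-Python-13B-V1.0-GPTQ
```

The extension exposes an OpenAI-compatible API; check the console output for the exact address (on older versions it defaults to http://localhost:5001/v1).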
In this example the model TheBloke_WizardCoder-Python-13B-V1.0-GPTQ is used.
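To serve this endpoint to Autogen via LiteLLM, as in the title, start a LiteLLM proxy in a second terminal and point it at the WebUI's OpenAI-compatible API. The original command is not included above, so treat this as a sketch; the address and port are assumptions, check the WebUI console for the real ones:

```bash
pip install litellm
# front the WebUI's OpenAI-compatible endpoint with a LiteLLM proxy
litellm --model openai/TheBloke_WizardCoder-Python-13B-V1.0-GPTQ --api_base http://localhost:5001/v1
```

By default the proxy listens on http://localhost:8000 (on older LiteLLM versions); the examples below assume that address.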
Send a quick test request to the proxy; you should get a useful response to the question "Hi! How are you?".
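A minimal smoke test, assuming the proxy runs on http://localhost:8000 and using the pre-1.0 openai Python SDK (the same package whose api_requestor.py is edited further down):

```python
import openai

# point the pre-1.0 openai SDK at the local LiteLLM proxy (address is an assumption)
openai.api_base = "http://localhost:8000"
openai.api_key = "sk-anything"  # the local proxy does not validate the key

response = openai.ChatCompletion.create(
    model="TheBloke_WizardCoder-Python-13B-V1.0-GPTQ",
    messages=[{"role": "user", "content": "Hi! How are you?"}],
)
print(response["choices"][0]["message"]["content"])
```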
Now point Autogen at the proxy and start the group chat. If everything works fine, a group chat should begin with the task of writing code for a calculator app.
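The original script is not included above; here is a minimal sketch using the pyautogen 0.1.x group chat API, with the proxy address, API key, and agent names as assumptions:

```python
import autogen

# config pointing at the LiteLLM proxy (address and key are assumptions)
config_list = [
    {
        "model": "TheBloke_WizardCoder-Python-13B-V1.0-GPTQ",
        "api_base": "http://localhost:8000",
        "api_key": "sk-anything",
    }
]
llm_config = {"config_list": config_list, "request_timeout": 600}  # see the timeout note below

assistant = autogen.AssistantAgent(name="coder", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

groupchat = autogen.GroupChat(agents=[user_proxy, assistant], messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Write code for a calculator app.")
```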
I had multiple errors where the script ran into timeouts. If you have the same problem, here is the fix that worked for me. I use the PyCharm IDE, so the file paths may be different for you, but the file names are the same:
Locate the file api_requestor.py (path on my system: venv/Lib/site-packages/openai/api_requestor.py) and change "TIMEOUT_SECS = 600" to 6000.
In the file completion.py (path on my system: venv/Lib/site-packages/autogen/oai/completion.py) change "request_timeout = 60" to 600.
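For reference, after these edits the two lines should read roughly as follows (their exact position inside the files depends on the installed package versions):

```python
# venv/Lib/site-packages/openai/api_requestor.py
TIMEOUT_SECS = 6000  # was 600

# venv/Lib/site-packages/autogen/oai/completion.py
request_timeout = 600  # was 60
```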