This repository has been archived by the owner on Feb 4, 2025. It is now read-only.
```
(.venv) aloha11:talk-codebase liao6$ rm /Users/liao6/.cache/gpt4all/starcoderbase-3b-ggml.bin
(.venv) aloha11:talk-codebase liao6$ talk-codebase chat .
🤖 Config path: /Users/liao6/.talk_codebase_config.yaml
100%|██████████████████████████████████████████████████████████████████████████████████████████████| 7.50G/7.50G [12:42<00:00, 9.84MiB/s]
Model downloaded at: /Users/liao6/.cache/gpt4all/starcoderbase-3b-ggml.bin
llama.cpp: loading model from /Users/liao6/.cache/gpt4all/starcoderbase-3b-ggml.bin
error loading model: unexpectedly reached end of file
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/Users/liao6/workspace/talk-codebase/.venv/bin/talk-codebase", line 8, in <module>
    sys.exit(main())
  File "/Users/liao6/workspace/talk-codebase/.venv/lib/python3.9/site-packages/talk_codebase/cli.py", line 55, in main
    raise e
  File "/Users/liao6/workspace/talk-codebase/.venv/lib/python3.9/site-packages/talk_codebase/cli.py", line 48, in main
    fire.Fire({
  File "/Users/liao6/workspace/talk-codebase/.venv/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/Users/liao6/workspace/talk-codebase/.venv/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/Users/liao6/workspace/talk-codebase/.venv/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/Users/liao6/workspace/talk-codebase/.venv/lib/python3.9/site-packages/talk_codebase/cli.py", line 41, in chat
    llm = factory_llm(root_dir, config)
  File "/Users/liao6/workspace/talk-codebase/.venv/lib/python3.9/site-packages/talk_codebase/llm.py", line 118, in factory_llm
    return LocalLLM(root_dir, config)
  File "/Users/liao6/workspace/talk-codebase/.venv/lib/python3.9/site-packages/talk_codebase/llm.py", line 23, in __init__
    self.llm = self._create_model()
  File "/Users/liao6/workspace/talk-codebase/.venv/lib/python3.9/site-packages/talk_codebase/llm.py", line 96, in _create_model
    llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, n_batch=model_n_batch, callbacks=callbacks, verbose=False)
  File "/Users/liao6/workspace/talk-codebase/.venv/lib/python3.9/site-packages/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
  Could not load Llama model from path: /Users/liao6/.cache/gpt4all/starcoderbase-3b-ggml.bin. Received error (type=value_error)
Exception ignored in: <function Llama.__del__ at 0x128393a60>
Traceback (most recent call last):
  File "/Users/liao6/workspace/talk-codebase/.venv/lib/python3.9/site-packages/llama_cpp/llama.py", line 1445, in __del__
    if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'
```
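The `error loading model: unexpectedly reached end of file` message from llama.cpp usually means one of two things: the file on disk is truncated, or it is a valid GGML file of an architecture the LLaMA loader does not understand (StarCoder is not a LLaMA-family model, so `LlamaCpp` may reject it even when the 7.50G download completed). A minimal sketch for ruling out a truncated download; the helper name and the set of header magics are illustrative assumptions, not part of talk-codebase:

```python
import os

# Assumed header magics for llama.cpp-era model files, written here as the
# literal little-endian byte strings found at the start of the file:
# "ggml" -> b"lmgg", "ggmf" -> b"fmgg", "ggjt" -> b"tjgg", plus the later GGUF.
KNOWN_MAGICS = {b"lmgg", b"fmgg", b"tjgg", b"GGUF"}

def looks_complete(path: str, expected_size: int):
    """Rough sanity check for a downloaded model file: the on-disk size
    must match the advertised size and the 4-byte header magic must be
    one we recognize. Returns (ok, reason)."""
    actual = os.path.getsize(path)
    if actual != expected_size:
        return False, f"size mismatch: {actual} != {expected_size} bytes"
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic not in KNOWN_MAGICS:
        return False, f"unrecognized header magic: {magic!r}"
    return True, "ok"
```

If the size and magic both check out, the download itself is probably fine, and the failure is more likely the architecture mismatch: a StarCoder GGML file would need a StarCoder-capable loader (e.g. gpt4all's own backend or ctransformers) rather than `LlamaCpp`.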