Failed to load model. Logs below:
🤖 Config path: /Users/mf412833/.talk_codebase_config.yaml:
? 🤖 Select model type: Local
? 🤖 Select model name: Llama-2-7B Chat | llama-2-7b-chat.ggmlv3.q4_0.bin | 3791725184 | 7 billion | q4_0 | LLaMA2
🤖 Model name saved!
100%|██████████| 3.79G/3.79G [03:30<00:00, 18.0MiB/s]
Model downloaded at: /Users/mf412833/.cache/gpt4all/llama-2-7b-chat.ggmlv3.q4_0.bin
gguf_init_from_file: invalid magic characters 'tjgg'
llama_model_load: error loading model: llama_model_loader: failed to load model from /Users/mf412833/.cache/gpt4all/llama-2-7b-chat.ggmlv3.q4_0.bin
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
File "/Users/mf412833/.pyenv/versions/3.10.0/bin/talk-codebase", line 8, in
sys.exit(main())
File "/Users/mf412833/.pyenv/versions/3.10.0/lib/python3.10/site-packages/talk_codebase/cli.py", line 70, in main
raise e
File "/Users/mf412833/.pyenv/versions/3.10.0/lib/python3.10/site-packages/talk_codebase/cli.py", line 63, in main
fire.Fire({
File "/Users/mf412833/.pyenv/versions/3.10.0/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/Users/mf412833/.pyenv/versions/3.10.0/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/Users/mf412833/.pyenv/versions/3.10.0/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/Users/mf412833/.pyenv/versions/3.10.0/lib/python3.10/site-packages/talk_codebase/cli.py", line 55, in chat
llm = factory_llm(repo.working_dir, config)
File "/Users/mf412833/.pyenv/versions/3.10.0/lib/python3.10/site-packages/talk_codebase/llm.py", line 125, in factory_llm
return LocalLLM(root_dir, config)
File "/Users/mf412833/.pyenv/versions/3.10.0/lib/python3.10/site-packages/talk_codebase/llm.py", line 24, in init
self.llm = self._create_model()
File "/Users/mf412833/.pyenv/versions/3.10.0/lib/python3.10/site-packages/talk_codebase/llm.py", line 101, in _create_model
llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, n_batch=model_n_batch, callbacks=callbacks,
File "/Users/mf412833/.pyenv/versions/3.10.0/lib/python3.10/site-packages/langchain/load/serializable.py", line 97, in init
super().init(**kwargs)
File "/Users/mf412833/.pyenv/versions/3.10.0/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in init
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
Could not load Llama model from path: /Users/mf412833/.cache/gpt4all/llama-2-7b-chat.ggmlv3.q4_0.bin. Received error Failed to load model from file: /Users/mf412833/.cache/gpt4all/llama-2-7b-chat.ggmlv3.q4_0.bin (type=value_error)
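
For anyone hitting the same thing: the gguf_init_from_file line above tells the whole story. The downloaded file is a legacy GGML v3 model (the 'ggjt' container, whose 32-bit magic is laid out little-endian on disk as the bytes 'tjgg', exactly the "invalid magic characters" the loader reports), while the llama-cpp-python build that talk-codebase pulls in only understands the newer GGUF format. A minimal sketch to confirm the mismatch yourself; the path is the one from the log, and the two magic values are the documented GGUF/GGML constants:

import sys

# Path taken from the log above; adjust for your machine.
MODEL_PATH = "/Users/mf412833/.cache/gpt4all/llama-2-7b-chat.ggmlv3.q4_0.bin"

with open(MODEL_PATH, "rb") as f:
    magic = f.read(4)

if magic == b"GGUF":
    # GGUF files begin with the ASCII bytes "GGUF".
    print("GGUF model: current llama-cpp-python builds can load this.")
elif magic == b"tjgg":
    # Legacy GGML v3 ('ggjt') magic 0x67676a74, stored little-endian,
    # reads back as 'tjgg' -- matching the gguf_init_from_file error.
    print("Legacy GGML v3 model: convert to GGUF or use an older loader.")
else:
    print(f"Unrecognized magic: {magic!r}", file=sys.stderr)

Two plausible fixes, neither verified against this repo: pin the backend to the last GGML-capable release (pip install llama-cpp-python==0.1.78 reportedly still reads ggmlv3 files), or convert the download to GGUF with the convert-llama-ggml-to-gguf.py script that ships in the llama.cpp repository and point the config at the resulting .gguf file. Downloading a ready-made GGUF build of Llama-2-7B Chat instead would sidestep the conversion entirely.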