Hi,
Thanks for the simple guide. However, it's not working. Is anything mismatched?
```
./main -m models/ggml-vicuna-13B-1.1-q5_1.bin --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-vicuna-v1.txt
```

output:

```
main: build = 732 (afd983c)
main: seed = 1696926741
llama.cpp: loading model from models/ggml-vicuna-13B-1.1-q5_1.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 512
llama_model_load_internal: n_embd     = 5120
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 40
llama_model_load_internal: n_layer    = 40
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 9 (mostly Q5_1)
llama_model_load_internal: n_ff       = 13824
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 0.00 MB
error loading model: llama.cpp: tensor 'norm.weight' is missing from model
llama_init_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/ggml-vicuna-13B-1.1-q5_1.bin'
main: error: unable to load model
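The log shows the binary parsing the file as the legacy `ggjt v3` container and then failing on a missing tensor, which usually points to a mismatch between the model file's format and what the built `main` expects. One quick way to see which container a model file actually uses is to inspect its four-byte magic. This is only a sketch: the `check_magic` helper name is mine, not part of llama.cpp, and it only recognizes the two magics mentioned here (`GGUF` for the current format, `tjgg` for the legacy GGJT container as it appears on disk in little-endian order).

```shell
# Hypothetical helper: report a llama.cpp model file's container format
# by reading its first four bytes.
check_magic() {
  magic=$(head -c 4 "$1")
  case "$magic" in
    GGUF) echo "GGUF" ;;  # current llama.cpp format
    tjgg) echo "GGJT" ;;  # legacy ggml/ggjt container (as in this log)
    *)    echo "unknown" ;;
  esac
}
```

If the file turns out to be a legacy container while the binary was built after the format change (or vice versa), rebuilding llama.cpp at a matching commit or re-downloading the model in the expected format should resolve the load error.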
Really sorry, I haven't had the time to keep this guide up to date. I just committed an update; let me know if it works.