This repository was archived by the owner on Jun 24, 2024. It is now read-only.
Update to the latest llama.cpp
#118
Labels: llama.cpp, ggml
Another day, another suite of changes. Since our last update, M1 and AVX inference have improved, along with a number of other changes: https://github.com/ggerganov/llama.cpp/compare/437e77855a54e69c86fe03bc501f63d9a3fddb0e..HEAD
If you're interested in tackling this, check out our contributing document: https://github.com/rustformers/llama-rs/blob/main/CONTRIBUTING.md