Releases · TabbyML/tabby
v0.5.0-rc.4
v0.5.0
Release 0.5.0. Generated by cargo-workspaces
v0.5.0-rc.3
v0.5.0-rc.2
v0.5.0-rc.0
⚠️ Notice
- The llama.cpp backend (CPU, Metal) now requires a re-download of the gguf model due to upstream format changes: #645, ggml-org/llama.cpp#3252
- Due to indexing format changes, `~/.tabby/index` needs to be manually removed before any further runs of `tabby scheduler` (see the shell sketch after this list).
- `TABBY_REGISTRY` is replaced with `TABBY_DOWNLOAD_HOST` for the GitHub-based registry implementation.
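A minimal shell sketch of the cleanup described above. The `TABBY_DOWNLOAD_HOST` value is a hypothetical placeholder for a download mirror, not an officially documented host.

```shell
# The indexing format changed: drop the old index before running the scheduler again.
rm -rf ~/.tabby/index

# Rebuild the index.
tabby scheduler

# TABBY_REGISTRY is no longer read; the GitHub-based registry implementation uses
# TABBY_DOWNLOAD_HOST instead. The value below is a placeholder, not a real mirror.
export TABBY_DOWNLOAD_HOST=download.example.com
```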
🚀 Features
- Improved dashboard UI.

🧰 Fixes and Improvements
- CPU backend is switched to llama.cpp: #638
- Added `server.completion_timeout` to control the code completion interface timeout: #637 (a config sketch follows this list)
- CUDA backend is switched to llama.cpp: #656
- Tokenizer implementation is switched to llama.cpp, so Tabby no longer needs to download an additional tokenizer file: #683
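A minimal config sketch for the new timeout. This assumes the dotted name `server.completion_timeout` maps to a `[server]` table in `~/.tabby/config.toml`; the 30-second value is illustrative rather than the documented default, so check the configuration docs for the authoritative location and unit.

```toml
# ~/.tabby/config.toml (sketch; key placement and value are assumptions)
[server]
# Upper bound on how long a code completion request may run before timing out.
completion_timeout = 30
```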
💫 New Contributors
- @CrCs2O4 made their first contribution in #597
- @yusiwen made their first contribution in #620
- @gjedeer made their first contribution in #635
- @XpycT made their first contribution in #634
- @HKABIG made their first contribution in #662
Full Changelog: v0.4.0...v0.5.0
v0.4.0
🚀 Features
- Added support for golang: #553
- Added support for ruby: #597
- Supports using a local directory for `Repository.git_url`: use `file:///path/to/repo` to specify a local directory (see the config sketch after this list).
- Introduced a new UI design for the webserver.
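A minimal sketch of the local-repository setting, assuming the `[[repositories]]` table in `~/.tabby/config.toml` described in the configuration docs; the path is a placeholder.

```toml
# ~/.tabby/config.toml (sketch; the path is a placeholder)
[[repositories]]
# Point git_url at a local checkout via a file:// URL instead of a remote git URL.
git_url = "file:///path/to/repo"
```

Run `tabby scheduler` afterwards so the local repository is indexed and picked up for completions.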

🧰 Fixes and Improvements
- Improved snippet retrieval by deduplicating candidates against existing content and snippets: #582
💫 New Contributors
- @sensinsane made their first contribution in #548
- @CrCs2O4 made their first contribution in #597
- @yusiwen made their first contribution in #620
Full Changelog: v0.3.1...v0.4.0
v0.4.0-rc.0
v0.3.1
🧰 Fixes and Improvements
Full Changelog: v0.3.0...v0.3.1
v0.3.1-rc.0
v0.3.0
🚀 Features
Retrieval-Augmented Code Completion Enabled by Default
The currently supported languages are:
- Rust
- Python
- JavaScript / JSX
- TypeScript / TSX
A blog series detailing the technical aspects of Retrieval-Augmented Code Completion will be published soon. Stay tuned!
Example prompt for retrieval-augmented code completion:

```rust
// Path: crates/tabby/src/serve/engine.rs
// fn create_llama_engine(model_dir: &ModelDir) -> Box<dyn TextGeneration> {
//     let options = llama_cpp_bindings::LlamaEngineOptionsBuilder::default()
//         .model_path(model_dir.ggml_q8_0_file())
//         .tokenizer_path(model_dir.tokenizer_file())
//         .build()
//         .unwrap();
//
//     Box::new(llama_cpp_bindings::LlamaEngine::create(options))
// }
//
// Path: crates/tabby/src/serve/engine.rs
// create_local_engine(args, &model_dir, &metadata)
//
// Path: crates/tabby/src/serve/health.rs
// args.device.to_string()
//
// Path: crates/tabby/src/serve/mod.rs
// download_model(&args.model, &args.device)
    } else {
        create_llama_engine(model_dir)
    }
}

fn create_ctranslate2_engine(
    args: &crate::serve::ServeArgs,
    model_dir: &ModelDir,
    metadata: &Metadata,
) -> Box<dyn TextGeneration> {
    let device = format!("{}", args.device);
    let options = CTranslate2EngineOptionsBuilder::default()
        .model_path(model_dir.ctranslate2_dir())
        .tokenizer_path(model_dir.tokenizer_file())
        .device(device)
        .model_type(metadata.auto_model.clone())
        .device_indices(args.device_indices.clone())
        .build()
        .
```
Follow the instructions at https://tabby.tabbyml.com/docs/configuration to enable repository context for your Tabby instance.
🧰 Fixes and Improvements
- Fix Issue #511 by marking ggml models as optional.
- Improve stop words handling by combining RegexSet into Regex for efficiency.
💫 New Contributors
- @sensinsane made their first contribution in #548
Full Changelog: v0.2.2...v0.3.0