Description
Describe what you are looking for
Currently, when a vector is removed from a usearch::Index via .remove(), the node is only marked as a tombstone; the underlying memory is never freed back to the OS.

In long-running, write-heavy applications (such as vector databases or AI agent memory systems), this leads to severe memory bloat. For example, if millions of embeddings are added and subsequently removed over time, usearch will keep holding gigabytes of unused RAM even when the active vector count is very low. I am looking for a native compact() or shrink_to_fit() method that drops these tombstones and reorganizes the internal graph to actually release memory, avoiding the need to manually build a brand-new index and stream all the live vectors into it just to reclaim RAM.
What I am looking for:
A method on the index (index.compact()) that performs in-place memory reclamation.
```rust
let mut index = usearch::new_index(&options).unwrap();

// Insert a large number of vectors.
for i in 0..1_000_000 {
    index.add(i, &vec![0.0; 128]).unwrap();
}

// Remove 90% of them.
for i in 0..900_000 {
    index.remove(i).unwrap();
}

// At this point, usearch still holds RAM for 1,000,000 vectors.

// Feature request: a way to reclaim the memory in-place.
index.compact().unwrap();

// Now usearch should only hold RAM for the 100,000 remaining vectors.
```

If pure in-place compaction compromises search latency too much during the operation, even an officially supported internal rebuild/swap utility that safely drops the unreferenced memory would be highly beneficial.
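To make the requested semantics concrete, here is a minimal, self-contained toy sketch of the rebuild-and-swap idea (plain `std` only; `FlatStore` and all its methods are hypothetical and deliberately ignore the HNSW graph, which is the hard part in a real implementation): `remove` only sets a tombstone, and `compact` copies the live entries into fresh, right-sized storage and swaps it in.

```rust
use std::collections::HashMap;

// Toy flat vector store with tombstone deletes (NOT usearch internals).
struct FlatStore {
    dims: usize,
    data: Vec<f32>,              // vectors laid out back-to-back
    keys: Vec<u64>,              // key assigned to each slot
    alive: Vec<bool>,            // tombstone flags, one per slot
    lookup: HashMap<u64, usize>, // key -> slot index
}

impl FlatStore {
    fn new(dims: usize) -> Self {
        Self { dims, data: Vec::new(), keys: Vec::new(), alive: Vec::new(), lookup: HashMap::new() }
    }

    fn add(&mut self, key: u64, vector: &[f32]) {
        assert_eq!(vector.len(), self.dims);
        self.lookup.insert(key, self.keys.len());
        self.keys.push(key);
        self.alive.push(true);
        self.data.extend_from_slice(vector);
    }

    // Lazy delete: mark the slot dead, but keep its memory allocated.
    fn remove(&mut self, key: u64) {
        if let Some(slot) = self.lookup.remove(&key) {
            self.alive[slot] = false;
        }
    }

    // Rebuild-and-swap compaction: copy only live vectors into fresh
    // storage, shrink the allocations, then replace self wholesale.
    fn compact(&mut self) {
        let mut fresh = FlatStore::new(self.dims);
        for (slot, &key) in self.keys.iter().enumerate() {
            if self.alive[slot] {
                let v = &self.data[slot * self.dims..(slot + 1) * self.dims];
                fresh.add(key, v);
            }
        }
        fresh.data.shrink_to_fit();
        fresh.keys.shrink_to_fit();
        fresh.alive.shrink_to_fit();
        *self = fresh; // old (bloated) buffers are dropped here
    }

    fn len(&self) -> usize {
        self.lookup.len()
    }
}
```

A native usearch version would additionally have to remap slot IDs inside the HNSW adjacency lists (or rebuild the graph), which is exactly why an official utility would be preferable to users hand-rolling this.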
Can you contribute to the implementation?
- I can contribute
Is your feature request specific to a certain interface?
Other bindings
Contact Details
Is there an existing issue for this?
- I have searched the existing issues
Code of Conduct
- I agree to follow this project's Code of Conduct