AshCache is a production-grade, high-performance in-memory cache library for Go, built for applications that demand predictable latency, intelligent admission control, and efficient memory management at scale.
- Sharded Architecture: 1024 independent shards minimize lock contention, enabling linear scalability across CPU cores
- Optimized Hot Paths: Critical operations (Get/Set) minimize allocations (~2-3 allocs per operation, ~56-58 bytes)
- Lock-Free Operations: Atomic counters and CAS-based algorithms reduce synchronization overhead
- High Throughput: 15M+ operations per second on modern hardware
- TinyLFU Algorithm: Count-Min Sketch with Doorkeeper prevents one-hit wonders from polluting the cache
- Frequency Estimation: 4-bit counters track access patterns with minimal memory overhead
- Adaptive Aging: Automatic counter decay maintains relevance over time
- LRU with Two Modes:
  - Listing Mode: Precise LRU ordering for predictable eviction
  - Sampling Mode: Redis-inspired sampling for lower overhead on large caches
- Soft & Hard Limits: Proactive eviction at soft threshold, guaranteed enforcement at hard limit
- Configurable Backoff: Tune eviction aggressiveness based on workload
- Background Refresh: Automatic cache warming before expiration
- Stochastic Refresh: Beta-distributed refresh times prevent thundering herd problems
- Rate Limiting: Configurable refresh rate to protect backend systems
- Remove or Refresh: Choose between automatic removal or background refresh on TTL expiry
- Delta-Based Telemetry: Interval-based metrics suitable for Grafana dashboards
- Per-Component Logging: Separate metrics for admission, eviction, and lifetime management
- Structured Logging: JSON logs with `slog` for easy integration
```bash
go get github.com/Borislavv/go-ash-cache
```

```go
package main

import (
	"context"
	"log/slog"

	"github.com/Borislavv/go-ash-cache"
	"github.com/Borislavv/go-ash-cache/config"
	"github.com/Borislavv/go-ash-cache/internal/cache/db/model"
)

func main() {
	ctx := context.Background()
	logger := slog.Default()

	// Create configuration
	cfg := &config.Cache{
		DB: config.DBCfg{
			SizeBytes: 100 * 1024 * 1024, // 100MB cache
		},
		Eviction: &config.EvictionCfg{
			LRUMode:              config.LRUModeListing,
			SoftLimitCoefficient: 0.8,
			CallsPerSec:          10,
			BackoffSpinsPerCall:  2048,
		},
		AdmissionControl: &config.AdmissionControlCfg{
			Capacity:            10000,
			Shards:              16,
			MinTableLenPerShard: 64,
			SampleMultiplier:    10,
			DoorBitsPerCounter:  2,
		},
	}
	cfg.AdjustConfig()

	// Initialize cache
	cache := ashcache.New(ctx, cfg, logger)
	defer cache.Close()

	// Use the cache
	data, err := cache.Get("user:123", func(item model.AshItem) ([]byte, error) {
		// This callback is executed only on a cache miss.
		// Fetch data from your data source here.
		return []byte("cached data"), nil
	})
	if err != nil {
		panic(err)
	}

	// data is now cached and will be returned on subsequent calls.
	_ = data
}
```

AshCache uses YAML configuration files for production deployments. All components are optional and can be enabled independently.
```yaml
db:
  size: 1073741824              # 1GB hard limit
  stat_logs_enabled: true
  telemetry_logs_interval: 5s
  cache_time_enabled: true

eviction:
  mode: listing                 # or "sampling"
  soft_limit_coefficient: 0.8   # Start evicting at 80% capacity
  calls_per_sec: 10
  backoff_spins_per_call: 4096

admission_control:
  capacity: 100000
  shards: 32
  min_table_len_per_shard: 128
  sample_multiplier: 10
  door_bits_per_counter: 2

lifetime:
  on_ttl: refresh               # or "remove"
  ttl: 24h
  rate: 100                     # Max 100 refreshes per second
  coefficient: 0.5              # Start refreshing at 50% of TTL
  beta: 0.4
  stochastic_refresh_enabled: true
```

```go
cfg, err := config.LoadConfig("cache.yaml")
if err != nil {
	log.Fatal(err)
}
cache := ashcache.New(ctx, cfg, logger)
```

```go
// Get retrieves a value, calling the callback on a miss.
data, err := cache.Get("key", func(item model.AshItem) ([]byte, error) {
	// Fetch and return data.
	return fetchData(), nil
})

// Del removes an entry.
ok := cache.Del("key")

// Clear removes all entries.
cache.Clear()

// Get cache statistics.
entries := cache.Len() // Number of entries
mem := cache.Mem()     // Memory usage in bytes
```

```go
// Cache metrics (admission control).
allowed, notAllowed, hardEvictedItems, hardEvictedBytes := cache.CacheMetrics()

// Eviction metrics.
scans, hits, evictedItems, evictedBytes := cache.EvictorMetrics()

// Lifetime metrics.
affected, errors, scans, hits, misses := cache.LifetimerMetrics()
```

```go
// Set a TTL per item inside the callback.
cache.Get("key", func(item model.AshItem) ([]byte, error) {
	item.SetTTL(1 * time.Hour) // Custom TTL for this item
	return data, nil
})
```

- Entry weight = `sizeof(Entry) + cap(payload)`
- Soft limit triggers proactive eviction
- Hard limit enforces strict memory bounds
Listing Mode (Recommended for most cases):
- Precise LRU ordering
- Better hit rate prediction
- Slightly higher CPU overhead
Sampling Mode (For very large caches):
- Lower CPU overhead
- Good enough for most workloads
- Scales better with cache size
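Sampling mode can be pictured as the classic Redis-style approximation: draw a handful of random candidates and evict the least recently used among them. A generic sketch, not AshCache's actual code:

```go
package main

import (
	"fmt"
	"math/rand"
)

// item pairs a key with its last logical access time.
type item struct {
	key      string
	lastUsed int64
}

// sampleEvict draws k random candidates and returns the least recently
// used among them -- an approximation of exact LRU that avoids
// maintaining a global ordering.
func sampleEvict(items []item, k int) string {
	victim := items[rand.Intn(len(items))]
	for i := 1; i < k; i++ {
		if c := items[rand.Intn(len(items))]; c.lastUsed < victim.lastUsed {
			victim = c
		}
	}
	return victim.key
}

func main() {
	items := []item{{"a", 5}, {"b", 1}, {"c", 9}, {"d", 3}}
	fmt.Println(sampleEvict(items, 3)) // usually one of the older entries
}
```

Larger sample sizes approach true LRU quality at higher per-eviction cost; this trade-off is why sampling scales better on very large caches.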
- Capacity: Should match expected cache size
- Shards: More shards = less contention, more memory
- SampleMultiplier: Higher = less frequent aging, more memory per counter
AshCache is fully thread-safe. All operations can be called concurrently from multiple goroutines without additional synchronization.
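The calling pattern looks like the following sketch, which uses `sync.Map` as a self-contained stand-in for a shared cache instance; a shared AshCache supports the same pattern without any external locking:

```go
package main

import (
	"fmt"
	"sync"
)

// fillConcurrently has several goroutines write overlapping keys into a
// shared structure with no external locking. sync.Map stands in for the
// cache here so the sketch stays runnable on its own.
func fillConcurrently(workers, distinctKeys int) int {
	var cache sync.Map
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			cache.LoadOrStore(fmt.Sprintf("user:%d", id%distinctKeys), []byte("payload"))
		}(i)
	}
	wg.Wait()
	n := 0
	cache.Range(func(_, _ any) bool { n++; return true })
	return n
}

func main() {
	fmt.Println(fillConcurrently(8, 4)) // 4 distinct keys survive
}
```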
Cache expensive API responses with automatic refresh:
```go
response, err := cache.Get("api:user:123", func(item model.AshItem) ([]byte, error) {
	item.SetTTL(5 * time.Minute)
	return fetchUserFromAPI(123)
})
```

Cache database queries with admission control to avoid cache pollution:

```go
results, err := cache.Get("query:popular:products", func(item model.AshItem) ([]byte, error) {
	return db.QueryPopularProducts()
})
```

Cache expensive computations:

```go
result, err := cache.Get("compute:fibonacci:1000", func(item model.AshItem) ([]byte, error) {
	return computeFibonacci(1000), nil
})
```

The cache uses 1024 independent shards, each with its own lock. Keys are distributed across shards using a hash function, ensuring even distribution and minimal contention.
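Shard selection can be illustrated with a simple hash-and-mask scheme. This is a sketch only: `shardFor` is a hypothetical helper, and AshCache's actual hash function may differ.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

const numShards = 1024 // power of two, so a bit mask can replace modulo

// shardFor maps a key to one of numShards shards via FNV-1a hashing.
// A good hash spreads keys evenly, so goroutines touching different
// keys rarely contend on the same shard lock.
func shardFor(key string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return h.Sum32() & (numShards - 1)
}

func main() {
	for _, k := range []string{"user:123", "user:124", "query:popular:products"} {
		fmt.Printf("%-24s -> shard %d\n", k, shardFor(k))
	}
}
```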
- Doorkeeper: First access sets a bit (Bloom-like filter)
- Sketch: Subsequent accesses increment frequency counters
- Decision: New items are admitted only if their frequency estimate exceeds the victim's
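The three steps above can be sketched with a toy doorkeeper plus count-min sketch. This is illustrative only: AshCache packs 4-bit counters and ages them, which this sketch omits.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

const width = 1 << 10

// sketch is a toy frequency sketch: a doorkeeper bitset that absorbs
// one-hit wonders, plus two rows of small counters (count-min).
type sketch struct {
	door [width]bool
	rows [2][width]uint16
}

func idx(key string, seed byte) uint32 {
	h := fnv.New32a()
	h.Write([]byte{seed})
	h.Write([]byte(key))
	return h.Sum32() % width
}

// Touch records an access: the first hit only sets the doorkeeper bit;
// later hits increment the counters (capped at 15, mimicking 4 bits).
func (s *sketch) Touch(key string) {
	if i := idx(key, 0); !s.door[i] {
		s.door[i] = true
		return
	}
	for r := byte(0); r < 2; r++ {
		if j := idx(key, r); s.rows[r][j] < 15 {
			s.rows[r][j]++
		}
	}
}

// Estimate is the minimum counter across rows (the count-min estimate).
func (s *sketch) Estimate(key string) uint16 {
	a, b := s.rows[0][idx(key, 0)], s.rows[1][idx(key, 1)]
	if a < b {
		return a
	}
	return b
}

// Admit applies the TinyLFU rule: a candidate may replace the victim
// only if its estimated frequency is at least the victim's.
func (s *sketch) Admit(candidate, victim string) bool {
	return s.Estimate(candidate) >= s.Estimate(victim)
}

func main() {
	var s sketch
	for i := 0; i < 5; i++ {
		s.Touch("hot")
	}
	s.Touch("one-hit-wonder")
	fmt.Println(s.Admit("hot", "one-hit-wonder")) // true
}
```

The doorkeeper is the key trick: a single access costs one bit, so items seen only once never inflate the counters they would otherwise pollute.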
- Soft Limit: Background evictor starts when memory exceeds soft threshold
- Hard Limit: Immediate eviction when memory exceeds hard limit
- Victim Selection: LRU-based (listing) or sampled (sampling mode)
- Detection: Background scanner finds expired entries
- Rate Limiting: Refresh rate is capped to protect backends
- Stochastic Timing: Beta distribution prevents synchronized refreshes
AshCache is designed for high-throughput scenarios. Performance measured on Apple M2 Max (12 cores):
| Operation | Latency | Throughput | Allocations |
|---|---|---|---|
| Get (hit) | ~65ns | ~15M ops/sec | 2 allocs (56B) |
| Get (miss) | ~85ns | ~12M ops/sec | 3 allocs (58B) |
| Set | ~85ns | ~12M ops/sec | 3 allocs (58B) |
| Get (hit, parallel) | ~190ns | ~5M ops/sec/core | 2 allocs (56B) |
| Get (mixed 80/20, parallel) | ~60ns | ~17M ops/sec/core | 2 allocs (56B) |
| Concurrent throughput | ~45ns | ~20M ops/sec | 2 allocs (56B) |
Benchmarks run with 1KB payload size. Actual performance depends on workload, cache size, payload size, and system configuration. Run `go test -bench=. ./...` to benchmark on your system.
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request

```bash
# Clone the repository
git clone https://github.com/Borislavv/go-ash-cache.git
cd go-ash-cache

# Run tests
go test ./...

# Run tests with race detector
go test -race ./...

# Run benchmarks
go test -bench=. ./...

# Run a specific benchmark with detailed output
go test -bench=BenchmarkGetHit -benchmem -benchtime=3s
```

This project is licensed under the Apache License 2.0 - see the LICENCE file for details.
- TinyLFU algorithm by Gil Einziger et al.
- Count-Min Sketch implementation inspired by academic research
- Stochastic cache expiration inspired by probabilistic early-expiration techniques and RFC 5861 (stale-while-revalidate)
- groupcache - Distributed caching library
- bigcache - Fast, concurrent cache
- freecache - Zero GC cache
Made with ❤️ for the Go community