
GustavoQnt/swr-lru-cache

@gustavoqnt/ttl-cache


Traffic Dashboard

GitHub traffic snapshot (rolling window reported by GitHub API).

Totals since 2026-02-12: 268 views and 903 clones.

Date Views Clones
2026-04-05 0 4
2026-04-04 0 11
2026-04-03 0 14
2026-04-02 1 15
2026-04-01 0 7
2026-03-31 0 10
2026-03-30 0 15
2026-03-29 0 13
2026-03-28 0 10
2026-03-27 1 7
2026-03-26 0 9
2026-03-25 0 7
2026-03-24 0 8
2026-03-23 0 9
2026-03-22 0 2
2026-03-21 1 6
2026-03-20 0 10
2026-03-19 0 9
2026-03-18 0 8
2026-03-17 0 8
2026-03-16 0 12
2026-03-15 0 8
2026-03-14 0 11
2026-03-13 0 12
2026-03-12 1 11
2026-03-11 0 9
2026-03-10 0 7
2026-03-09 0 12
2026-03-08 0 13
2026-03-07 0 9
2026-03-06 0 8
2026-03-05 0 12
2026-03-04 27 7
2026-03-03 1 11
2026-03-02 4 11
2026-03-01 0 11
2026-02-28 0 9
2026-02-27 1 11
2026-02-26 0 8
2026-02-25 0 8
2026-02-24 6 4
2026-02-23 0 16
2026-02-22 0 13
2026-02-21 0 9
2026-02-20 0 10
2026-02-19 1 12
2026-02-18 12 40
2026-02-17 6 9
2026-02-16 6 15
2026-02-15 11 36
2026-02-14 20 50
2026-02-13 94 146
2026-02-12 75 161

Updated automatically: 2026-04-07T05:19:56.072Z

In-memory TTL cache with LRU eviction, stale-while-revalidate (SWR), and request deduplication. TypeScript-first, zero dependencies.


Quick Start

```sh
npm install @gustavoqnt/ttl-cache
```

Basic usage

```ts
import { TtlCache } from '@gustavoqnt/ttl-cache';

const cache = new TtlCache<string, User>({
  ttlMs: 60_000,  // 1 minute (default: 30s)
  maxSize: 1_000, // LRU eviction when exceeded
});

cache.set('user:1', { id: 1, name: 'Alice' });
const user = cache.get('user:1'); // User | undefined
```
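When maxSize is exceeded, the least recently used entry is dropped. The mechanism can be sketched with a plain Map, whose insertion order doubles as recency order — a simplified illustration, not the library's internals:

```typescript
// Simplified LRU: a Map keeps insertion order, so deleting and
// re-inserting a key on access moves it to the "most recently used"
// end. MiniLru is a hypothetical class for illustration only.
class MiniLru<K, V> {
  private map = new Map<K, V>();
  constructor(private maxSize: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key); // refresh recency
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Evict least recently used: the first key in insertion order.
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }

  has(key: K): boolean { return this.map.has(key); }
}

const lru = new MiniLru<string, number>(2);
lru.set('a', 1);
lru.set('b', 2);
lru.get('a');    // touch 'a', so 'b' becomes least recently used
lru.set('c', 3); // exceeds maxSize: evicts 'b'
```

The real cache layers TTL bookkeeping on top of this recency structure.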

Stale-While-Revalidate

```ts
const user = await cache.getOrSet(
  'user:1',
  () => fetchUserFromDb(1),
  { ttlMs: 60_000, swrMs: 10_000 },
);
```

```
|--- fresh (60s) ---|--- stale/SWR (10s) ---|--- expired ---|
0                 60s                      70s
                   │                        │
                   └─ TTL expires           └─ fully expired
                      serve stale +            call loader,
                      background refresh       wait for result
```

Request Deduplication

```ts
// 3 concurrent calls = 1 loader execution
const [a, b, c] = await Promise.all([
  cache.getOrSet('user:1', () => fetchUser(1)),
  cache.getOrSet('user:1', () => fetchUser(1)),
  cache.getOrSet('user:1', () => fetchUser(1)),
]);
```
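Under the hood, deduplication amounts to sharing one in-flight promise per key. A minimal self-contained sketch of the technique — the `dedupe` helper here is hypothetical, not the package's code:

```typescript
// Concurrent calls for the same key share one in-flight promise; the
// entry is cleared once the promise settles so later calls reload.
const inflight = new Map<string, Promise<unknown>>();
let loaderCalls = 0;

function dedupe<T>(key: string, loader: () => Promise<T>): Promise<T> {
  const existing = inflight.get(key);
  if (existing) return existing as Promise<T>;
  const p = loader().finally(() => inflight.delete(key));
  inflight.set(key, p);
  return p;
}

async function demo(): Promise<number> {
  const load = async () => { loaderCalls++; return 42; };
  await Promise.all([
    dedupe('user:1', load),
    dedupe('user:1', load),
    dedupe('user:1', load),
  ]);
  return loaderCalls; // three callers, one loader execution
}
```

Clearing the map entry in `finally` is what makes dedup apply only to *concurrent* calls, not to later ones.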

AbortSignal Support

```ts
const controller = new AbortController();

// Cancel this caller's wait without killing the loader for others
const user = await cache.getOrSet(
  'user:1',
  () => fetchUser(1),
  { signal: controller.signal },
);
```
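A common way to implement "cancel the wait, not the loader" is to race the shared promise against the signal. The `waitWithSignal` helper below is a hypothetical, self-contained sketch of that pattern:

```typescript
// Race a shared in-flight promise against an AbortSignal. Aborting
// rejects this caller's await; the underlying loader keeps running.
function waitWithSignal<T>(shared: Promise<T>, signal?: AbortSignal): Promise<T> {
  if (!signal) return shared;
  if (signal.aborted) return Promise.reject(new Error('aborted'));
  return new Promise<T>((resolve, reject) => {
    const onAbort = () => reject(new Error('aborted'));
    signal.addEventListener('abort', onAbort, { once: true });
    shared.then(
      (v) => { signal.removeEventListener('abort', onAbort); resolve(v); },
      (e) => { signal.removeEventListener('abort', onAbort); reject(e); },
    );
  });
}

async function demoAbort(): Promise<string> {
  let loaderFinished = false;
  const loader = new Promise<string>((resolve) =>
    setTimeout(() => { loaderFinished = true; resolve('value'); }, 20));
  const controller = new AbortController();
  setTimeout(() => controller.abort(), 5); // abort this caller early

  const aborted = await waitWithSignal(loader, controller.signal)
    .then(() => false, () => true);
  await loader; // the loader still completes for everyone else
  return aborted && loaderFinished ? 'wait aborted, loader finished' : 'unexpected';
}
```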

Comparison

| Feature | @gustavoqnt/ttl-cache | lru-cache | node-cache |
| --- | --- | --- | --- |
| TTL | ✅ | ✅ | ✅ |
| LRU eviction | ✅ | ✅ | ❌ |
| Stale-while-revalidate | ✅ | ⚠️ partial | ❌ |
| Request dedup | ✅ | ❌ | ❌ |
| AbortSignal | ✅ | ❌ | ❌ |
| TypeScript-first | ✅ | ✅ | ⚠️ |
| Zero deps | ✅ | ✅ | ❌ |
| ESM + CJS | ✅ | ✅ | CJS only |

API Reference

new TtlCache<K, V>(options?)

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `ttlMs` | `number` | `30000` | Default TTL in ms. |
| `maxSize` | `number` | `undefined` | Max entries. LRU eviction when exceeded. |
| `onEvict` | `(key, value, reason) => void` | | Callback on eviction. Reason: `'expired' \| 'evicted' \| 'manual' \| 'clear'`. |
| `cleanupIntervalMs` | `number \| false` | `false` | Periodic sweep interval. Disabled by default — expiration is lazy + manual via `prune()`. |

Sync Methods

| Method | Returns | Description |
| --- | --- | --- |
| `set(key, value, options?)` | `void` | Set a value. Optional per-key `ttlMs`. |
| `get(key)` | `V \| undefined` | Get a value. Returns `undefined` if not fresh. Does not serve stale. |
| `peek(key)` | `V \| undefined` | Get without updating LRU order. Returns fresh or stale values. |
| `has(key)` | `boolean` | Check existence (fresh or stale — not fully expired). |
| `delete(key)` | `boolean` | Remove an entry. Returns `true` if found. |
| `clear()` | `void` | Remove all entries. |
| `size` | `number` | Current entry count (including stale). |
| `prune()` | `void` | Manually remove all fully expired entries. |

Async Methods

| Method | Returns | Description |
| --- | --- | --- |
| `getOrSet(key, loader, options?)` | `Promise<V>` | Get from cache or populate via `loader`. The only method that serves stale values (SWR). Deduplicates by default. |

GetOrSetOptions

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `ttlMs` | `number` | constructor `ttlMs` | TTL for this entry. |
| `swrMs` | `number` | `0` | SWR window after TTL expires. |
| `dedupe` | `boolean` | `true` | Deduplicate concurrent calls for the same key. |
| `signal` | `AbortSignal` | | Cancel this caller's wait. Does not cancel the loader for other waiters. |

Observability

| Method | Description |
| --- | --- |
| `getStats()` | Returns `{ hits, misses, stale, loads, evictions, size }` |
| `on(event, handler)` | Subscribe to: `'hit'`, `'miss'`, `'set'`, `'evict'`, `'load'`, `'stale'` |
| `off(event, handler)` | Unsubscribe |
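A typical use of `getStats()` is computing a hit ratio, and the bookkeeping behind such stats is just counters around lookups — a generic sketch, independent of the library's actual implementation:

```typescript
// Generic hit/miss bookkeeping around a Map lookup, the kind of
// counters a getStats()-style API exposes. Illustrative only.
const store = new Map<string, number>();
const stats = { hits: 0, misses: 0 };

function trackedGet(key: string): number | undefined {
  const value = store.get(key);
  if (value === undefined) { stats.misses++; return undefined; }
  stats.hits++;
  return value;
}

store.set('a', 1);
trackedGet('a'); // hit
trackedGet('b'); // miss
trackedGet('a'); // hit
const hitRatio = stats.hits / (stats.hits + stats.misses); // 2/3
```

A hit ratio well below expectations usually points at a TTL that is too short or keys that are not stable across calls.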

Iteration

```ts
for (const [key, value] of cache) { /* only fresh entries */ }
for (const key of cache.keys()) { /* ... */ }
for (const value of cache.values()) { /* ... */ }
for (const [key, value] of cache.entries()) { /* ... */ }
```

  • Yields only fresh entries (not stale, not expired).
  • Expired entries are lazily pruned during iteration.

Cleanup

```ts
cache.dispose(); // Clears periodic sweep timer (if enabled) and all event listeners
```

Design Decisions

Why get() doesn't return stale values

get() is predictable: it returns fresh data or undefined. SWR is an async concern — it involves background loaders and error handling. That's why SWR is exclusive to getOrSet().

If you need to read a value without caring about freshness, use peek().

Why cleanup is disabled by default

Automatic timers in libraries are controversial — they can keep the process alive unexpectedly. By default, expiration is lazy (cleaned on access) and manual (via prune()).

If you want periodic cleanup, enable it explicitly:

```ts
const cache = new TtlCache({
  cleanupIntervalMs: 60_000, // sweep every 60s
});
// Timer is unref()'d in Node.js — won't keep the process alive
```

SWR window is after TTL

The mental model is intuitive:

```
|--- fresh (ttlMs) ---|--- stale/SWR (swrMs) ---|--- expired ---|
```

TTL expires → stale window opens → fully expired. This matches the HTTP stale-while-revalidate semantics.
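Under this model, an entry's state is a pure function of its age. A hypothetical `classify` helper makes the three windows explicit:

```typescript
// Classify an entry's state from its age, mirroring the timeline:
// fresh for ttlMs, then stale (SWR) for swrMs, then expired.
type Freshness = 'fresh' | 'stale' | 'expired';

function classify(ageMs: number, ttlMs: number, swrMs: number): Freshness {
  if (ageMs < ttlMs) return 'fresh';
  if (ageMs < ttlMs + swrMs) return 'stale';
  return 'expired';
}

// With ttlMs = 60_000 and swrMs = 10_000:
classify(30_000, 60_000, 10_000); // 'fresh'
classify(65_000, 60_000, 10_000); // 'stale'   — serve + background refresh
classify(75_000, 60_000, 10_000); // 'expired' — loader runs, caller waits
```

With `swrMs: 0` (the default), the stale window collapses and entries go straight from fresh to expired.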


Use Cases

  • Database query caching — Cache expensive queries with TTL and SWR for near-zero latency reads
  • API response caching — Deduplicate concurrent requests to the same upstream endpoint
  • Configuration caching — Long TTL + SWR for hot-reload without downtime
  • Rate limit friendly — Dedup prevents thundering herd when multiple consumers request the same resource

Performance

Local benchmarks (Node.js v22, indicative):

| Operation | Throughput | Notes |
| --- | --- | --- |
| `set` 10k entries | 703 ops/s | ~1.4ms per batch |
| `get` 10k entries (100% hit) | 445 ops/s | ~2.2ms per batch |
| LRU eviction (20k→10k) | 244 ops/s | ~4.1ms per batch |
| Dedup: 100 concurrent `getOrSet` | 97 ops/s | 1 loader execution shared |
| SWR: serve stale immediately | 51 ops/s | No blocking on loader |

Run benchmarks yourself:

```sh
pnpm bench
```

See BENCHMARK.md for detailed instructions.


Gotchas

  • Memory: No built-in memory-size limit — use maxSize to bound entry count
  • Dispose: Always call dispose() if you enabled cleanupIntervalMs
  • SWR errors: Background refresh errors are silently swallowed; the stale value stays until fully expired
  • Dedup errors: If a loader throws, the error propagates to all concurrent waiters
  • AbortSignal: Aborting cancels the caller's wait, not the loader execution (other waiters and the cache still benefit from the load)
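The dedup-errors gotcha follows directly from promise sharing: one rejected loader promise is observed by every concurrent waiter. A self-contained illustration (no library involved):

```typescript
// One failing loader, three concurrent waiters sharing its promise:
// all three observe the same rejection.
async function demoSharedFailure(): Promise<number> {
  let calls = 0;
  const shared = (async () => { calls++; throw new Error('db down'); })();
  shared.catch(() => {}); // avoid an unhandled-rejection warning

  const results = await Promise.allSettled([shared, shared, shared]);
  const rejected = results.filter((r) => r.status === 'rejected').length;
  return calls === 1 ? rejected : -1; // 3 rejections from 1 loader call
}
```

If per-caller fallbacks are needed, catch the error at each `getOrSet` call site rather than expecting the cache to retry.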

License

MIT
