refactor: fix preloader causing incorrect profile popout #203
base: main
Conversation
Walkthrough

This change refactors the preloading utility to implement in-memory caching with a 10-second validity window and request deduplication using JSON-serialized argument keys. A token-based mechanism tracks active requests, ensuring only the latest call updates the cache after async completion, while the public API remains unchanged.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant Preloader
    participant Cache
    participant AsyncFn as Async Function
    rect rgba(100, 200, 150, 0.2)
        note right of Caller: New: Cache Hit (within 10s window)
        Caller->>Preloader: run(args)
        Preloader->>Cache: check cache[argsKey]
        Cache-->>Preloader: return cached result
        Preloader-->>Caller: Promise.resolve(cached)
    end
    rect rgba(150, 180, 220, 0.2)
        note right of Caller: New: Cache Miss or Deduplication
        Caller->>Preloader: run(args)
        Preloader->>Cache: check cache[argsKey]
        Cache-->>Preloader: miss or expired
        Preloader->>Preloader: issue activeRequest token
        par Concurrent Calls with Same Args
            Caller->>Preloader: run(same args)
            Preloader->>Preloader: reuse activeRequest token
        end
        Preloader->>AsyncFn: invoke actual async function
        AsyncFn-->>Preloader: result
        Preloader->>Cache: store result with token
        Preloader-->>Caller: Promise with result
    end
```
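The flow above can be condensed into a minimal standalone sketch. It assumes the 10-second TTL and the `cache`/`activeRequest` names from the walkthrough; it is an illustration of the pattern, not the PR's exact code.

```typescript
// Sketch of the described pattern: per-argument-key cache with a TTL,
// plus a per-key token so only the latest in-flight request for a key
// may write its result into the cache.
type AsyncFn<U extends unknown[], T> = (...args: U) => Promise<T>;

function createPreloader<U extends unknown[], T>(fun: AsyncFn<U, T>, ttlMs = 10_000) {
  const cache = new Map<string, { data: T; savedAt: number }>();
  const activeRequest = new Map<string, number>();

  const run = (...args: U): Promise<T> => {
    const key = JSON.stringify(args);
    const cached = cache.get(key);
    if (cached && Date.now() - cached.savedAt < ttlMs) {
      return Promise.resolve(cached.data); // cache hit within validity window
    }
    const token = (activeRequest.get(key) ?? 0) + 1;
    activeRequest.set(key, token);
    return fun(...args).then((result) => {
      // Only the latest call for this key updates the cache.
      if (activeRequest.get(key) === token) {
        cache.set(key, { data: result, savedAt: Date.now() });
      }
      return result;
    });
  };

  return { run };
}
```

A repeated call with the same arguments inside the window resolves from the cache without invoking the underlying function again.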
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/common/createPreloader.ts (1)
22-27: Add error handling to prevent token leaks.

If `fun(...args)` rejects, the promise chain doesn't handle the error, leaving the token in `activeRequest` without caching a result. While subsequent requests will increment the token, consider adding error handling for completeness:

```diff
 return fun(...args).then((result) => {
   if (activeRequest.get(key) === token) {
     cache.set(key, { data: result, savedAt: Date.now() });
   }
   return result;
-});
+}).catch((error) => {
+  // Token cleanup or error logging if needed
+  throw error;
+});
```
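A minimal sketch of the suggested hardening, assuming the `activeRequest`/`cache`/`token` names from the PR. The cleanup-on-success behavior is an added assumption here, not part of the review's diff.

```typescript
// Run a request under a per-key token; clear the token on failure so a
// rejected request doesn't linger as the "active" one for its key.
function runWithCleanup<T>(
  fun: () => Promise<T>,
  activeRequest: Map<string, number>,
  cache: Map<string, { data: T; savedAt: number }>,
  key: string,
): Promise<T> {
  const token = (activeRequest.get(key) ?? 0) + 1;
  activeRequest.set(key, token);
  return fun()
    .then((result) => {
      if (activeRequest.get(key) === token) {
        cache.set(key, { data: result, savedAt: Date.now() });
        activeRequest.delete(key); // request finished: drop its token
      }
      return result;
    })
    .catch((error) => {
      if (activeRequest.get(key) === token) {
        activeRequest.delete(key); // don't leak the token on failure
      }
      throw error; // re-throw so callers still observe the rejection
    });
}
```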
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/common/createPreloader.ts (1 hunks)
🔇 Additional comments (4)
src/common/createPreloader.ts (4)
8-9: Good approach to isolate cached data per argument set.

The map-based cache and per-key active request tracking directly address the race condition described in issue #202, where concurrent profile loads could mix data from different users.
30-33: LGTM: Preload function correctly implements fire-and-forget pattern.

The `preload` function appropriately debounces calls and doesn't return a promise, which is correct for its use case. Callers who need the result should use `run()` directly.
38-56: LGTM: Preloader instances correctly instantiated.

Both `userDetailsPreloader` and `messagesPreloader` are properly configured. The `messagesPreloader` correctly handles the store initialization and message fetching with appropriate side effects.
12-12: JSON.stringify cache key approach is appropriate for current and expected usage.

Verification confirms all call sites pass only ID values (user IDs or channel IDs), which are primitives that serialize reliably with `JSON.stringify`. The edge cases mentioned (circular references, undefined values, non-serializable types) do not apply to the actual arguments being passed.

No issues found.
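A quick illustration of why `JSON.stringify(args)` works as a cache key for primitive IDs, and where the caveats would bite. The values are hypothetical, not actual call-site arguments.

```typescript
// For primitive arguments, JSON.stringify produces stable, distinct keys:
// the same ID always yields the same string, and different IDs differ.
const keyOf = (...args: unknown[]) => JSON.stringify(args);

const a1 = keyOf("user-42"); // '["user-42"]'
const a2 = keyOf("user-42"); // identical string, so a cache hit
const b = keyOf("user-43");  // distinct string, so an isolated entry

// Caveat: object arguments depend on property insertion order, so two
// logically equal argument sets can produce different keys.
const obj1 = keyOf({ id: 1, page: 2 });
const obj2 = keyOf({ page: 2, id: 1 }); // different key, same logical args
```

Since the call sites in this PR pass only user/channel IDs, the object-ordering caveat does not apply.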
```diff
+const cache = new Map<string, { data: T; savedAt: number }>();
+const activeRequest = new Map<string, number>();

 const run = (...args: U) => {
-  const newArgsStr = JSON.stringify(args);
-
-  if (argsStr !== newArgsStr) {
-    waiting = [];
-    argsStr = newArgsStr;
-    data = null;
-  }
+  const key = JSON.stringify(args);
+  const now = Date.now();
+  const cached = cache.get(key);
+  if (cached && now - cached.savedAt < 10000) {
+    return Promise.resolve(cached.data);
+  }

-  return new Promise<T>((resolve) => {
-    if (data && argsStr === newArgsStr) {
-      if (Date.now() - dataSavedAt! < 10000) {
-        resolve(data);
-        return;
-      }
-      data = null;
-    }
-    if (waiting.length) {
-      waiting.push(resolve);
-      return;
-    }
-    waiting.push(resolve);
-
-    fun(...args).then((newData) => {
-      data = newData;
-      dataSavedAt = Date.now();
-      if (argsStr !== newArgsStr) return;
-      waiting.forEach((resolve) => resolve(newData));
-      waiting = [];
-    });
-  });
+  const token = (activeRequest.get(key) || 0) + 1;
+  activeRequest.set(key, token);
+
+  return fun(...args).then((result) => {
+    if (activeRequest.get(key) === token) {
+      cache.set(key, { data: result, savedAt: Date.now() });
+    }
+    return result;
+  });
 };
```
🛠️ Refactor suggestion | 🟠 Major
Implement cache eviction to prevent unbounded memory growth.
Both cache and activeRequest maps grow indefinitely, which can cause memory leaks in long-running web applications. Consider implementing:
- Maximum cache size with LRU eviction
- Periodic cleanup of expired cache entries
- Cleanup of `activeRequest` entries after completion
Example cache cleanup:
```typescript
const run = (...args: U) => {
  const key = JSON.stringify(args);
  const now = Date.now();
  // Clean expired entries periodically
  if (cache.size > 100) {
    for (const [k, v] of cache.entries()) {
      if (now - v.savedAt >= 10000) {
        cache.delete(k);
        activeRequest.delete(k);
      }
    }
  }
  // ... rest of implementation
};
```

🤖 Prompt for AI Agents
In src/common/createPreloader.ts around lines 8 to 28, the cache and
activeRequest maps grow unbounded causing memory leaks; implement bounded cache
size with LRU eviction, periodic cleanup of expired entries, and ensure
activeRequest entries are removed when a request finishes. Add a maxEntries
constant (e.g. 100), track access order (or use a Map where recently used keys
are moved to the end) and evict oldest entries when cache.size > maxEntries; on
each run call perform a quick expired-entry sweep (only when size exceeds
threshold or on a timer) that removes entries older than the TTL and also
deletes their activeRequest entries; finally, after fun(...args) resolves or
rejects, always remove the key from activeRequest (and only set cache when the
token matches) so activeRequest cannot grow indefinitely.
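A self-contained sketch of the sweep described above, with assumed constants `TTL_MS` and `MAX_ENTRIES`; the PR itself does not include this.

```typescript
// Evict expired entries from both maps once the cache grows past a bound.
// Map iteration tolerates deletion of the current entry, so a single pass
// is safe; the size guard keeps the sweep off the hot path.
const TTL_MS = 10_000;
const MAX_ENTRIES = 100;

function sweep<T>(
  cache: Map<string, { data: T; savedAt: number }>,
  activeRequest: Map<string, number>,
  now = Date.now(),
): void {
  if (cache.size <= MAX_ENTRIES) return; // cheap fast path
  for (const [key, entry] of cache) {
    if (now - entry.savedAt >= TTL_MS) {
      cache.delete(key);         // drop the expired cached result
      activeRequest.delete(key); // keep the token map bounded too
    }
  }
}
```

Calling `sweep(cache, activeRequest)` at the top of `run` would bound both maps without changing the public API.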
What does this PR do?
Refactored the preloader system to cache results per argument set and track active requests with tokens, preventing stale or out-of-order responses from overwriting the UI.
The old preloader used a single global cache and waiting queue, which could cause the wrong profile to display if multiple user profiles were loaded quickly. By switching to a map-based cache using arguments and per-request tokens, each user's data is isolated and concurrent requests are properly handled.
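The race described above can be simulated with artificial delays. This is a reduced sketch mirroring the PR's token check (TTL handling omitted), using hypothetical fetchers rather than the project's real ones.

```typescript
// Two calls for the same key: the first resolves *after* the second.
// The token check ensures only the latest call's result is cached, so
// the slow, stale response cannot overwrite the fresh one.
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

const cache = new Map<string, { data: string; savedAt: number }>();
const activeRequest = new Map<string, number>();

function run(key: string, fetcher: () => Promise<string>): Promise<string> {
  const token = (activeRequest.get(key) ?? 0) + 1;
  activeRequest.set(key, token);
  return fetcher().then((result) => {
    if (activeRequest.get(key) === token) {
      cache.set(key, { data: result, savedAt: Date.now() });
    }
    return result;
  });
}

// Slow first call (token 1), fast second call (token 2) for the same key.
const slow = run("user", () => delay(50).then(() => "stale"));
const fast = run("user", () => delay(10).then(() => "fresh"));

Promise.all([slow, fast]).then(() => {
  console.log(cache.get("user")?.data); // "fresh"
});
```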
Did you test your code?
Yes! As I always do <3
Does your PR contain small, easy to understand changes?
The answer to this is subjective, so.
Summary by CodeRabbit