
Conversation

@Asraye
Contributor

@Asraye Asraye commented Nov 20, 2025

What does this PR do?

Refactored the preloader system to cache results per argument set and track active requests with tokens, preventing stale or out-of-order responses from overwriting the UI.

The old preloader used a single global cache and waiting queue, which could cause the wrong profile to display if multiple user profiles were loaded in quick succession. By switching to a map-based cache keyed on the serialized arguments, with per-request tokens, each user's data is isolated and concurrent requests are handled correctly.

Did you test your code?

Yes! As I always do <3

Does your PR contain small, easy to understand changes?

The answer to this is subjective, so.

Summary by CodeRabbit

  • Performance
    • Improved preloading efficiency through in-memory caching and request deduplication. Cached results are returned within a 10-second window, reducing resource usage and improving response times for repeated operations.


@coderabbitai
Contributor

coderabbitai bot commented Nov 20, 2025

Walkthrough

This change refactors the preloading utility to implement in-memory caching with a 10-second validity window and request deduplication using JSON-serialized argument keys. A token-based mechanism tracks active requests, ensuring only the latest call updates the cache after async completion, while the public API remains unchanged.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Preloader caching and deduplication**<br>`src/common/createPreloader.ts` | Added in-memory cache map and `activeRequest` token tracking to deduplicate requests by JSON-serialized args. Introduced 10-second cache validity window. Replaced manual debounce/state tracking with token-based result caching. Moved `preload` definition after `run`. Public API surface unchanged. |

Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant Preloader
    participant Cache
    participant AsyncFn as Async Function

    rect rgba(100, 200, 150, 0.2)
        note right of Caller: New: Cache Hit (within 10s window)
        Caller->>Preloader: run(args)
        Preloader->>Cache: check cache[argsKey]
        Cache-->>Preloader: return cached result
        Preloader-->>Caller: Promise.resolve(cached)
    end

    rect rgba(150, 180, 220, 0.2)
        note right of Caller: New: Cache Miss or Deduplication
        Caller->>Preloader: run(args)
        Preloader->>Cache: check cache[argsKey]
        Cache-->>Preloader: miss or expired
        Preloader->>Preloader: issue activeRequest token
        par Concurrent Calls with Same Args
            Caller->>Preloader: run(same args)
            Preloader->>Preloader: reuse activeRequest token
        end
        Preloader->>AsyncFn: invoke actual async function
        AsyncFn-->>Preloader: result
        Preloader->>Cache: store result with token
        Preloader-->>Caller: Promise with result
    end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Cache key generation: Verify JSON serialization of args handles all types correctly and avoids collisions
  • Token-based deduplication logic: Ensure the mechanism properly handles concurrent requests and only the latest call caches results
  • Cache validity window (10s): Confirm the TTL duration is appropriate for the use case (likely user profile fetches, based on linked issue)
  • Edge cases: Review behavior for null/undefined args, failed async calls, and cache invalidation scenarios
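The first and last checklist items above can be probed with a few quick expressions. This is an illustrative check of how a `JSON.stringify`-based key (the approach the walkthrough describes) behaves for the argument shapes the reviewer flags; the `key` helper is hypothetical, not part of the PR.

```typescript
// Hypothetical helper mirroring the described cache-key scheme:
// the argument list is serialized as a JSON array.
const key = (...args: unknown[]) => JSON.stringify(args);

key("123");                      // '["123"]' — a plain ID serializes to a stable key
key("123") === key("123");       // true — repeated identical calls dedupe correctly
key(null);                       // '[null]'
key(undefined);                  // '[null]' — undefined becomes null inside arrays,
                                 // so key(null) and key(undefined) collide
key({ b: 1, a: 2 });             // '[{"b":1,"a":2}]' — object property order matters,
                                 // so equivalent objects can produce distinct keys
```

For primitive IDs (the call sites verified below) this is safe; the `null`/`undefined` collision and object-ordering caveats only matter if richer argument types are introduced later.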

Poem

🐰 A cache was born from chaos deep,
Where profiles mixed in sleep,
Now dedupe tokens guard the way,
Ten seconds fresh, no more fray!
*hops excitedly* 🥕✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient. The required threshold is 80.00%. | You can run `@coderabbitai generate docstrings` to improve docstring coverage. |
✅ Passed checks (4 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title 'refactor: fix preloader causing incorrect profile popout' is specific and directly related to the main change, though slightly verbose. |
| Description check | ✅ Passed | The description covers all template sections with relevant details: what the PR does, testing confirmation, and acknowledges the complexity of changes. |
| Linked Issues check | ✅ Passed | The code changes implement caching per argument set and token-based request tracking to prevent stale responses from overwriting UI, directly addressing issue #202's requirement to isolate user data and prevent mixed profile displays. |
| Out of Scope Changes check | ✅ Passed | All changes are focused on refactoring the preloader system to fix the profile popout caching issue; no unrelated modifications are present. |






@Asraye Asraye changed the title from "refactor: improve consistency and cache" to "refactor: fix preloader causing incorrect profile popout" on Nov 20, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
src/common/createPreloader.ts (1)

22-27: Add error handling to prevent token leaks.

If `fun(...args)` rejects, the promise chain doesn't handle the error, leaving the token in `activeRequest` without a cached result. Subsequent requests will still increment the token, but consider adding error handling for completeness:

```diff
 return fun(...args).then((result) => {
   if (activeRequest.get(key) === token) {
     cache.set(key, { data: result, savedAt: Date.now() });
   }
   return result;
+}).catch((error) => {
+  // Token cleanup or error logging if needed
+  throw error;
 });
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bc3e271 and c2cb203.

📒 Files selected for processing (1)
  • src/common/createPreloader.ts (1 hunks)
🔇 Additional comments (4)
src/common/createPreloader.ts (4)

8-9: Good approach to isolate cached data per argument set.

The map-based cache and per-key active request tracking directly address the race condition described in issue #202, where concurrent profile loads could mix data from different users.


30-33: LGTM: Preload function correctly implements fire-and-forget pattern.

The preload function appropriately debounces calls and doesn't return a promise, which is correct for its use case. Callers who need the result should use run() directly.


38-56: LGTM: Preloader instances correctly instantiated.

Both userDetailsPreloader and messagesPreloader are properly configured. The messagesPreloader correctly handles the store initialization and message fetching with appropriate side effects.


12-12: JSON.stringify cache key approach is appropriate for current and expected usage.

Verification confirms all call sites pass only ID values (user IDs or channel IDs), which are primitives that serialize reliably with JSON.stringify. The edge cases mentioned (circular references, undefined values, non-serializable types) do not apply to the actual arguments being passed.

No issues found.

Comment on lines +8 to 28
```diff
+const cache = new Map<string, { data: T; savedAt: number }>();
+const activeRequest = new Map<string, number>();
+
 const run = (...args: U) => {
-  const newArgsStr = JSON.stringify(args);
-
-  if (argsStr !== newArgsStr) {
-    waiting = [];
-    argsStr = newArgsStr;
-    data = null;
-  }
-
-  return new Promise<T>((resolve) => {
-    if (data && argsStr === newArgsStr) {
-      if (Date.now() - dataSavedAt! < 10000) {
-        resolve(data);
-        return;
-      }
-      data = null;
-    }
-    if (waiting.length) {
-      waiting.push(resolve);
-      return;
-    }
-    waiting.push(resolve);
-    fun(...args).then((newData) => {
-      data = newData;
-      dataSavedAt = Date.now();
-      if (argsStr !== newArgsStr) return;
-      waiting.forEach((resolve) => resolve(newData));
-      waiting = [];
-    });
-  });
+  const key = JSON.stringify(args);
+  const now = Date.now();
+  const cached = cache.get(key);
+  if (cached && now - cached.savedAt < 10000) {
+    return Promise.resolve(cached.data);
+  }
+
+  const token = (activeRequest.get(key) || 0) + 1;
+  activeRequest.set(key, token);
+
+  return fun(...args).then((result) => {
+    if (activeRequest.get(key) === token) {
+      cache.set(key, { data: result, savedAt: Date.now() });
+    }
+    return result;
+  });
 };
```
Contributor


🛠️ Refactor suggestion | 🟠 Major

Implement cache eviction to prevent unbounded memory growth.

Both cache and activeRequest maps grow indefinitely, which can cause memory leaks in long-running web applications. Consider implementing:

  1. Maximum cache size with LRU eviction
  2. Periodic cleanup of expired cache entries
  3. Cleanup of activeRequest entries after completion

Example cache cleanup:

```typescript
const run = (...args: U) => {
  const key = JSON.stringify(args);
  const now = Date.now();

  // Clean expired entries periodically
  if (cache.size > 100) {
    for (const [k, v] of cache.entries()) {
      if (now - v.savedAt >= 10000) {
        cache.delete(k);
        activeRequest.delete(k);
      }
    }
  }

  // ... rest of implementation
};
```
🤖 Prompt for AI Agents
In src/common/createPreloader.ts around lines 8 to 28, the cache and
activeRequest maps grow unbounded causing memory leaks; implement bounded cache
size with LRU eviction, periodic cleanup of expired entries, and ensure
activeRequest entries are removed when a request finishes. Add a maxEntries
constant (e.g. 100), track access order (or use a Map where recently used keys
are moved to the end) and evict oldest entries when cache.size > maxEntries; on
each run call perform a quick expired-entry sweep (only when size exceeds
threshold or on a timer) that removes entries older than the TTL and also
deletes their activeRequest entries; finally, after fun(...args) resolves or
rejects, always remove the key from activeRequest (and only set cache when the
token matches) so activeRequest cannot grow indefinitely.
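The eviction strategy suggested above can be sketched using `Map`'s insertion-order iteration, which makes a simple LRU cheap to implement. This is an illustrative, self-contained sketch, not the repository's code; `makeLruCache`, `maxEntries`, and the injectable `now` parameter (used here to make expiry testable) are all assumptions.

```typescript
// Sketch: bounded cache with TTL expiry and LRU eviction.
// JavaScript Maps iterate in insertion order, so the first key is the
// least recently used as long as every access re-inserts the key.
function makeLruCache<T>(maxEntries: number, ttlMs: number) {
  const cache = new Map<string, { data: T; savedAt: number }>();

  const get = (key: string, now = Date.now()): T | undefined => {
    const entry = cache.get(key);
    if (!entry || now - entry.savedAt >= ttlMs) return undefined;
    // Re-insert to mark as most recently used (moves key to the end).
    cache.delete(key);
    cache.set(key, entry);
    return entry.data;
  };

  const set = (key: string, data: T, now = Date.now()) => {
    cache.delete(key);
    cache.set(key, { data, savedAt: now });
    // Sweep expired entries, then trim oldest-first to the size bound.
    for (const [k, v] of cache) {
      if (now - v.savedAt >= ttlMs) cache.delete(k);
    }
    while (cache.size > maxEntries) {
      cache.delete(cache.keys().next().value as string);
    }
  };

  return { get, set, size: () => cache.size };
}
```

Wiring this into the preloader would also need the `activeRequest` cleanup the prompt mentions (delete the key once `fun` settles), so neither map can grow without bound.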



Development

Successfully merging this pull request may close these issues.

Profile pop-outs randomly mashed up
