
feat: Optimize metadata-storage build with HashMap caching for O(1) lookups #1779


Open · wants to merge 4 commits into master

Conversation

@benminer commented Apr 18, 2025

Performance Optimization: Map-based Caches for Metadata Class Building

Problem

Our schema generation process was slowing down quadratically as our schema grew, taking over a minute to complete in some cases. With ClassMetadata arrays ranging from 3,000 to 15,000 objects, the repeated .find() and .filter() calls inside forEach loops amount to O(n²) work, which created a significant performance bottleneck.
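For context, the hot path looks roughly like the sketch below; the metadata shapes and function names are illustrative stand-ins, not the actual type-graphql internals.

```ts
// Illustrative shapes only; the real metadata definitions in
// type-graphql carry many more fields.
interface ClassMetadata {
  name: string;
}

interface FieldResolverMetadata {
  objectTypeName: string; // name of the class this resolver targets
}

// An O(n) scan inside an O(n) loop is O(n^2) overall, which is why
// 3,000-15,000 metadata objects pushed build times past a minute.
function buildFieldResolvers(
  fieldResolvers: FieldResolverMetadata[],
  classMetadatas: ClassMetadata[],
): void {
  fieldResolvers.forEach(resolver => {
    const objectType = classMetadatas.find(
      metadata => metadata.name === resolver.objectTypeName,
    );
    // ...build the resolver against objectType...
  });
}
```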

Solution

Implemented map-based caches during the metadata class build process to replace the inefficient array scans. These caches are added as new properties on the MetadataStorage class, while the list-based attributes are preserved where existing components still require them.
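A minimal sketch of the approach, with hypothetical property and method names (the PR's actual identifiers may differ):

```ts
interface ClassMetadata {
  name: string;
}

class MetadataStorage {
  // Existing list-based attribute, preserved for components that expect it.
  objectTypes: ClassMetadata[] = [];

  // New map-based cache keyed by type name for O(1) lookups.
  private objectTypesByName = new Map<string, ClassMetadata>();

  collectObjectMetadata(definition: ClassMetadata): void {
    this.objectTypes.push(definition);
    this.objectTypesByName.set(definition.name, definition);
  }

  // O(1) per lookup instead of an O(n) .find() scan.
  findObjectType(name: string): ClassMetadata | undefined {
    return this.objectTypesByName.get(name);
  }

  clear(): void {
    this.objectTypes = [];
    // The caches must be reset too; forgetting this broke the tests
    // (see the follow-up comment below).
    this.objectTypesByName.clear();
  }
}
```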

Impact

This optimization dramatically improved performance, making buildClassMetadata over 100× faster on our schema, as demonstrated by the following benchmarks:

Before (O(n) array lookups):
  • buildClassMetadata: 31.353s
  • buildFieldResolverMetadata: 313.714ms
  • buildResolversMetadata: 1.429s

After (O(1) map lookups):
  • buildClassMetadata: 268.772ms
  • buildFieldResolverMetadata: 38.593ms
  • buildResolversMetadata: 126.305ms

Implementation Notes

  • Maintained backward compatibility with components expecting list attributes
  • Primary focus was eliminating costly .find() and .filter() operations within loop contexts
  • Maps provide O(1) lookup time compared to O(n) for array operations

@benminer requested a review from MichalLytek as a code owner Apr 18, 2025 16:44
@MichalLytek added the Enhancement 🆕 and Community 👨‍👧 labels Apr 19, 2025
@benminer (author) commented Apr 22, 2025

@MichalLytek Thanks for taking a look at this; tests should be fixed. The root issue was that the new cache state wasn't being reset in the global metadata-storage instance. I also ran into some weirdness where the state had to be reset in beforeEach blocks rather than beforeAll: state was bleeding into other tests and causing failures, even though the same tests passed when run in isolation.
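For reference, the resulting test setup looks roughly like this; the import path mirrors how type-graphql's own tests reach the global storage instance, and clear() is assumed to reset the new map caches as well:

```ts
// Sketch of the beforeEach fix described above.
import { getMetadataStorage } from "../../src/metadata/getMetadataStorage";

describe("schema building", () => {
  // Reset per test, not once per suite: with beforeAll, cache state
  // from one test bled into the next, so tests passed in isolation
  // but failed when the whole suite ran.
  beforeEach(() => {
    getMetadataStorage().clear();
  });

  it("builds the schema with the new caches", async () => {
    // ...declare types/resolvers and call buildSchema()...
  });
});
```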

@benminer (author) commented May 6, 2025

@MichalLytek Bumping this, would love to get it merged 😄

@MichalLytek (owner) commented:
I promise I will try to find some time to review it 😉

codecov bot commented May 7, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.59%. Comparing base (be33fd4) to head (800ffa3).
Report is 77 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1779      +/-   ##
==========================================
+ Coverage   95.50%   95.59%   +0.08%     
==========================================
  Files         113      114       +1     
  Lines        1847     1929      +82     
  Branches      364      366       +2     
==========================================
+ Hits         1764     1844      +80     
- Misses         83       85       +2     

