feat: Optimize metadata-storage build with HashMap caching for O(1) lookups #1779
Performance Optimization: Map-based Caches for Metadata Class Building
Problem
Our schema generation process was slowing dramatically as our schema grew, taking over a minute to complete in some cases. With ClassMetadata arrays ranging from 3,000 to 15,000 objects, the repeated `.find()` and `.filter()` operations inside forEach loops were creating significant performance bottlenecks: an O(n) lookup inside an O(n) loop makes the build roughly quadratic in schema size.
Solution
Implemented map-based "caches" during the metadata class build process to replace inefficient array operations. These caches are added as new variables on the MetadataStorage class, while preserving list-based attributes where required by existing components.
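A minimal sketch of the pattern, assuming illustrative names (`classMetadataMap`, `collectClassMetadata`, `findClassMetadata` are hypothetical, not the PR's actual fields): the existing array stays for components that need list semantics, while a Map keyed by class target provides O(1) lookups.

```typescript
interface ClassMetadata {
  target: Function;
  name: string;
}

class MetadataStorage {
  // List preserved for existing components that iterate over it.
  classes: ClassMetadata[] = [];
  // New map-based cache for O(1) lookups by class target.
  private classMetadataMap = new Map<Function, ClassMetadata>();

  collectClassMetadata(meta: ClassMetadata): void {
    // Both structures are kept in sync at registration time.
    this.classes.push(meta);
    this.classMetadataMap.set(meta.target, meta);
  }

  // O(1), instead of this.classes.find(c => c.target === target).
  findClassMetadata(target: Function): ClassMetadata | undefined {
    return this.classMetadataMap.get(target);
  }
}
```

Registration pays a constant extra cost per class; every lookup during the build then avoids a full array scan.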
Impact
This optimization has dramatically improved performance, as demonstrated by the benchmarks below:
- before (using O(n) lookups)
- after (using O(1) lookups)
Implementation Notes
Replaced `.find()` and `.filter()` operations within loop contexts with map-based O(1) lookups.
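The `.filter()`-in-a-loop case follows the same idea: group the array once up front, then look up each group in O(1). A sketch under assumed names (`FieldMetadata`, `buildFieldsPerClass` are illustrative, not the PR's actual code):

```typescript
interface FieldMetadata {
  target: string; // name of the owning class
  name: string;
}

function buildFieldsPerClass(
  fields: FieldMetadata[],
  classNames: string[],
): FieldMetadata[][] {
  // Before: classNames.forEach(n => fields.filter(f => f.target === n))
  // scans all fields once per class — O(classes * fields).
  // After: one grouping pass, then O(1) lookups — O(classes + fields).
  const byTarget = new Map<string, FieldMetadata[]>();
  for (const f of fields) {
    const bucket = byTarget.get(f.target);
    if (bucket) bucket.push(f);
    else byTarget.set(f.target, [f]);
  }
  return classNames.map((n) => byTarget.get(n) ?? []);
}
```

With thousands of metadata objects, collapsing the nested scan into a single grouping pass is where most of the measured speedup comes from.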