Problem: Memory Management Could be Better Designed #86

Open · michael-okeefe opened this issue Aug 23, 2024 · 0 comments

Labels: enhancement (New feature or request), low-priority ("Nice to have" but not necessary; prioritize lower), performance (A task related to assessing/enhancing performance)
Milestone: 2025+

@michael-okeefe (Member)

Problem Description

The current memory management strategy within ERIN is quite ad hoc and relies entirely on the memory characteristics of std::vector. Although we attempt to reserve vector space when the counts are known, it would be worth investigating how alternate memory management strategies affect performance.

In particular, we think a single pre-allocated memory arena that includes a per-time-step "scratch" area could be a good way to eliminate many of the behind-the-scenes mallocs and reallocs.
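
As a rough illustration of what such an arena might look like (the names and structure below are hypothetical, not ERIN's current code), a bump allocator over a single pre-allocated buffer with a mark/reset pair for per-time-step scratch could be as simple as:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch only: a single pre-allocated arena with bump allocation.
// "Scratch" is just a marker into the same buffer that is rolled back at the
// end of each time step, so per-step temporaries never touch malloc/realloc.
class Arena
{
public:
    explicit Arena(std::size_t capacityBytes)
        : buffer(capacityBytes), offset(0) {}

    // Bump-allocate `bytes` with the given alignment (must be a power of two);
    // returns nullptr when the arena is exhausted.
    void* Allocate(std::size_t bytes,
                   std::size_t alignment = alignof(std::max_align_t))
    {
        std::size_t aligned = (offset + alignment - 1) & ~(alignment - 1);
        if (aligned + bytes > buffer.size())
        {
            return nullptr; // caller decides whether to grow or abort
        }
        offset = aligned + bytes;
        return buffer.data() + aligned;
    }

    // Mark the start of per-time-step scratch allocations...
    std::size_t Mark() const { return offset; }

    // ...and release everything allocated since that mark in O(1).
    void ResetTo(std::size_t mark) { offset = mark; }

private:
    std::vector<std::uint8_t> buffer;
    std::size_t offset;
};
```

The intended usage would be: allocate all long-lived simulation state before the first mark, then at the top of each time step call `Mark()`, allocate any temporary collections out of the arena, and call `ResetTo(mark)` before advancing time.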

As part of this item, I suggest exploring the following:

  • run the existing examples with a single overallocated memory arena large enough for all simulation memory needs, and time them against the current code to confirm that this even makes a difference (the sketch above is one possible shape for such an arena)
  • potentially couple this with a smarter examination of the TOML input file: for example, we should be able to scan the file to count the number and types of items it will supply, so we know exactly how much memory the simulation will need. Other than scratch memory (i.e., temporarily allocated collections), the only variable memory is the number of events per scenario-occurrence simulation; the memory requirements for nearly everything else can be pre-calculated once the TOML file is read (see the counting sketch after this list)
  • note: if this looks lucrative, we could also examine changes to the TOML parser and the TOML input file format to make the counts and types of "objects" (components, distributions, scenarios, etc.) easier to determine. The TOML parser is also a third-party library, and we may not be able to change its memory management; if the benefit proved large enough, we could even replace the parser, since what we need is far less sophisticated than what the current third-party library does
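
As a rough sketch of the pre-scan idea, this assumes a toml11-style API and illustrative top-level table names ("components", "dist", "scenarios"); ERIN's actual parser and input schema may differ:

```cpp
#include <cstddef>
#include <string>

#include <toml.hpp> // assumes a toml11-style parser; ERIN's actual parser may differ

// Hypothetical pre-scan of the input file: count the top-level tables of
// interest so the arena (and any std::vector::reserve calls) can be sized
// once, up front. Table names here are illustrative only.
struct SimulationCounts
{
    std::size_t components = 0;
    std::size_t distributions = 0;
    std::size_t scenarios = 0;
};

inline SimulationCounts CountTomlObjects(std::string const& path)
{
    SimulationCounts counts{};
    auto const data = toml::parse(path);
    auto const& root = data.as_table();
    if (root.count("components") > 0)
        counts.components = root.at("components").as_table().size();
    if (root.count("dist") > 0)
        counts.distributions = root.at("dist").as_table().size();
    if (root.count("scenarios") > 0)
        counts.scenarios = root.at("scenarios").as_table().size();
    return counts;
}
```

From counts like these plus per-type size estimates, the total arena capacity could be computed before the simulation allocates anything, leaving only the per-scenario event storage as a variable quantity.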
@michael-okeefe added the enhancement, low-priority, and performance labels on Aug 23, 2024
@michael-okeefe added this to the 2024 (Year End) milestone on Aug 23, 2024
@michael-okeefe modified the milestones: 2024 (Year End) → 2025+ on Jan 13, 2025