Hello,

Working on some lambdas, I noticed that memory consumption keeps increasing and never goes down. I ran several tests, and all of them produced more or less the same results. Here's the code I used for testing:
use lambda_http::{service_fn, Error, Request};

pub async fn function_handler(_: Request) -> Result<&'static str, Error> {
    Ok("")
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    lambda_http::run(service_fn(function_handler)).await
}
Cargo.toml
[package]
name = "test_leak"
version = "0.1.0"
edition = "2021"

[dependencies]
lambda_http = "0.14.0"
tokio = { version = "1", features = ["macros"] }
To run the tests, I used a script that calls an API Gateway endpoint that invokes the lambda (a sketch of such a driver is included below). Here are the graphs of the memory consumption from the tests:
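For context, such a driver just hits the endpoint in a loop while the memory metrics are watched. The sketch below is hypothetical rather than the script from the report: the endpoint URL, request count, and delay are placeholders, and it assumes reqwest plus tokio (with the macros, rt-multi-thread, and time features) as dependencies.

// Hypothetical load driver, not the original test script. It repeatedly calls
// a placeholder API Gateway URL so the lambda's memory metrics can be observed
// over many invocations.
// Assumed dependencies: reqwest = "0.12",
// tokio = { version = "1", features = ["macros", "rt-multi-thread", "time"] }
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // Placeholder endpoint; the real one would be the API Gateway stage URL.
    let endpoint = "https://example.execute-api.us-east-1.amazonaws.com/prod/test";
    let client = reqwest::Client::new();

    for i in 0..10_000u32 {
        let status = client.get(endpoint).send().await?.status();
        if i % 1_000 == 0 {
            println!("request {i}: {status}");
        }
        // Small pause so the lambda is hit steadily rather than in one burst.
        tokio::time::sleep(Duration::from_millis(10)).await;
    }
    Ok(())
}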
Is the memory growing without bound to the point where the lambda is crashing? That is, are we sure this isn't just the allocator holding onto memory and not releasing it to the system, even if it isn't actively used? These metrics would reflect all memory held by the process, not just memory that is in active use.
Does the same behavior appear locally when run with cargo lambda? Presumably it should, since the runtime is unchanged and only the outside orchestrator is different. Heap profiling would be much simpler in your local environment than in the deployed Lambda.
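On the local-profiling point, one low-effort option is to wrap the same handler with a heap profiler and run it under cargo lambda. The sketch below is an assumption-laden example, not code from this issue: it assumes the dhat crate is added (e.g. dhat = "0.3") and that tokio's signal feature is enabled so the process can be stopped cleanly and the profile written out.

// Hypothetical dhat-instrumented variant of the handler for local heap profiling.
// Assumed additions: dhat = "0.3" in [dependencies], "signal" in tokio's features.
use lambda_http::{service_fn, Error, Request};

// Route allocations through dhat so it can track what stays live.
#[global_allocator]
static ALLOC: dhat::Alloc = dhat::Alloc;

pub async fn function_handler(_: Request) -> Result<&'static str, Error> {
    Ok("")
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // dhat writes dhat-heap.json when this guard is dropped at the end of main.
    let _profiler = dhat::Profiler::new_heap();

    // Run the runtime until Ctrl-C so main returns and the profile is written.
    tokio::select! {
        result = lambda_http::run(service_fn(function_handler)) => result?,
        _ = tokio::signal::ctrl_c() => {}
    }
    Ok(())
}

Driving this with the same request loop and then stopping it would show whether the live heap actually grows, independent of how much memory the allocator keeps reserved from the OS.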
What happens when you switch from the system allocator to jemalloc?
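For the allocator question, the switch is a one-line change plus a dependency. The sketch below is hypothetical: it assumes tikv-jemallocator (version guessed as 0.6) is added to [dependencies] and is otherwise the same handler as above.

// Hypothetical jemalloc variant: assumes tikv-jemallocator = "0.6" has been
// added to [dependencies]; otherwise identical to the original handler.
use lambda_http::{service_fn, Error, Request};
use tikv_jemallocator::Jemalloc;

// Replace the default system allocator with jemalloc for the whole process.
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

pub async fn function_handler(_: Request) -> Result<&'static str, Error> {
    Ok("")
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    lambda_http::run(service_fn(function_handler)).await
}

If RSS stops climbing (or climbs and then plateaus) under jemalloc, that would point toward the allocator retaining freed memory rather than a genuine leak in the runtime.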