Possible memory leak? After cold start, lambda keeps consuming memory #972

Open
nicoan opened this issue Mar 24, 2025 · 1 comment
nicoan commented Mar 24, 2025

Hello,

While working on some Lambdas, I noticed that memory consumption keeps increasing and never goes down. I ran several tests, all of which produced more or less the same results. Here's the code I used for testing:

use lambda_http::{service_fn, Error, Request};

pub async fn function_handler(_: Request) -> Result<&'static str, Error> {
    Ok("")
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    lambda_http::run(service_fn(function_handler)).await
}

Cargo.toml

[package]
name = "test_leak"
version = "0.1.0"
edition = "2021"

[dependencies]
lambda_http = "0.14.0"
tokio = { version = "1", features = ["macros"] }

To run the tests, I used a script that calls an API Gateway endpoint, which in turn invokes the Lambda:

#!/bin/bash
for ((i=1; i<=5000; i++)); do
    curl -i -X POST <api_g_url> -H 'Content-Type: application/json' --header 'Content-Length: 6' -d '"test"'
done

Here are the graphs of the memory consumption from the tests:

[Four graphs of memory consumption from the test runs]


jlizen commented May 4, 2025

A few questions:

  1. Is the memory growing without bound to the point where the Lambda is crashing? I.e., are we sure this isn't just the allocator holding onto memory rather than releasing it to the system, even though it isn't actively used? These metrics reflect all memory held by the process, not just memory in active use.

  2. Does the same behavior appear locally when run with cargo lambda? Presumably it should, since the runtime is unchanged and only the outside orchestrator differs. Heap profiling would be much simpler locally with cargo lambda than in a deployed Lambda.

  3. What happens when you switch from the system allocator to jemalloc?
