
Possible memory leak? After cold start, lambda keeps consuming memory #972

Open

Description

@nicoan

Hello,

Working on some lambdas, I noticed that the memory consumption keeps increasing and never goes down. I ran several tests, and all of them produced more or less the same results. Here's the code I used for testing:

use lambda_http::{service_fn, Error, Request};

pub async fn function_handler(_: Request) -> Result<&'static str, Error> {
    Ok("")
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    lambda_http::run(service_fn(function_handler)).await
}

Cargo.toml

[package]
name = "test_leak"
version = "0.1.0"
edition = "2021"

[dependencies]
lambda_http = "0.14.0"
tokio = { version = "1", features = ["macros"] }

To run the tests, I used a script that calls an API Gateway endpoint that invokes the lambda:

#!/bin/bash
for ((i=1; i<=5000; i++)); do
    curl -i -X POST <api_g_url> -H 'Content-Type: application/json' --header 'Content-Length: 6' -d '"test"'
done

Here are the graphs of the memory consumption from the tests:

[Four memory-consumption graphs from the test runs]

Activity

@jlizen (Contributor) commented on May 4, 2025

A few questions:

  1. Is the memory growing without bound to the point where the lambda crashes? I.e., are we sure this isn't just the allocator holding onto memory and not releasing it to the system, even though it isn't actively used? These metrics reflect all memory held by the process, not just memory that is in active use.

  2. Does the same behavior appear locally when run with cargo lambda? Presumably it should, since the runtime is unchanged and only the outside orchestrator is different. Heap profiling would be much simpler locally with cargo lambda than in the deployed Lambda.

  3. What happens when you switch from the system allocator to jemalloc? (A minimal sketch of that switch follows below.)
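
For reference, a minimal sketch of the switch question 3 suggests, assuming the tikv-jemallocator crate is added to [dependencies] as tikv-jemallocator = "0.6" (the crate choice and version pin are my assumption, not something from this thread):

use lambda_http::{service_fn, Error, Request};
use tikv_jemallocator::Jemalloc;

// Route every heap allocation through jemalloc instead of the system allocator.
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

pub async fn function_handler(_: Request) -> Result<&'static str, Error> {
    Ok("")
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    lambda_http::run(service_fn(function_handler)).await
}

If the growth flattens with jemalloc, the original curve was probably the system allocator retaining freed memory rather than a leak in the runtime.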

@nicoan (Author) commented on May 22, 2025

Hello @jlizen

  1. I never got to the point where the lambda crashed. I made two kinds of tests:
  • Hitting it continuously for 5, 10, and 15 minutes without pause: here the memory grew continuously.
  • Pausing briefly and then hitting it again continuously: this was to avoid a cold start. The memory started growing again from the last point; the lambda did not free the memory.

Unfortunately, I no longer have access to an AWS account to continue these tests.

Regarding 2, I did not try that. I'll try to profile it with jemalloc to figure out what is happening.
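
As a starting point for that profiling, here is a rough sketch (mine, not from this thread) that logs jemalloc's own counters on every invocation via the tikv-jemalloc-ctl crate, with jemalloc installed as the global allocator; comparing allocated to resident helps separate memory actively in use from memory the allocator is merely holding on to:

use lambda_http::{service_fn, Error, Request};
use tikv_jemalloc_ctl::{epoch, stats};
use tikv_jemallocator::Jemalloc;

#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

pub async fn function_handler(_: Request) -> Result<&'static str, Error> {
    // jemalloc caches its statistics; advancing the epoch refreshes them.
    epoch::advance().unwrap();
    // allocated: bytes currently allocated by the application.
    // resident: bytes of physical memory held in the allocator's mappings.
    let allocated = stats::allocated::read().unwrap();
    let resident = stats::resident::read().unwrap();
    println!("jemalloc: allocated={allocated} resident={resident}");
    Ok("")
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    lambda_http::run(service_fn(function_handler)).await
}

If allocated stays roughly flat across invocations while resident climbs, the growth is memory the allocator is retaining; if allocated itself keeps climbing, that points at a genuine leak in the handler or the runtime.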

@jlizen (Contributor) commented on May 22, 2025

the memory continuously grew

Ah, I see, in the charts it looked like it was plateauing after an initial increase. If it continues growing further with subsequent executions, that is indeed suspicious.

@nicoan (Author) commented on May 22, 2025

the memory continuously grew

Ah, I see, in the charts it looked like it was plateauing after an initial increase. If it continues growing further with subsequent executions, that is indeed suspicious.

Yes! It plateaus, then spikes, then plateaus, then spikes again, but it always goes up. The plateau intervals are not regular, which is also strange.
