Describe the bug
When uploading large files using the CRT-based S3 client, Kubernetes memory limits are not respected.
When a large file is uploaded in a Kubernetes pod using the code snippet from #4033, a number of anonymous file-read processes are spawned that ignore the pod's memory limit and try to use more memory than the pod is allowed.
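For context, a minimal sketch of how a CRT-based S3 client and transfer manager are typically wired together (the actual configuration lives in #4033; the region and builder values below are illustrative assumptions, not the reporter's settings):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.transfer.s3.S3TransferManager;

// Build the CRT-based async client. Region, throughput, and part size
// here are illustrative assumptions; the real values are in #4033.
S3AsyncClient crtClient = S3AsyncClient.crtBuilder()
        .region(Region.US_EAST_1)
        .targetThroughputInGbps(20.0)
        .minimumPartSizeInBytes(8L * 1024 * 1024)
        .build();

// The transfer manager delegates multipart uploads to the CRT client.
S3TransferManager transferManager = S3TransferManager.builder()
        .s3Client(crtClient)
        .build();
```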
Expected Behavior
The SDK upload processes should not consume all available RAM in the pod and should respect the Kubernetes memory limits.
Current Behavior
The pod consumes more and more memory with each upload and rarely frees any, until Kubernetes eventually kills the pod for exceeding its resource limits. The attached screenshot shows a pod with an 8 GB memory limit:
Reproduction Steps
Use the transfer manager created in the snippet provided in #4033 to upload a large file, roughly 80-100 GB, from within a Kubernetes pod; see the sketch below.
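A hedged sketch of the upload step, assuming a transfer manager built as above; the bucket name, object key, and file path are placeholder assumptions:

```java
import java.nio.file.Paths;
import software.amazon.awssdk.transfer.s3.model.FileUpload;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;

// Upload a single large (~80-100 GB) file. Bucket, key, and path
// are placeholder assumptions for illustration only.
FileUpload upload = transferManager.uploadFile(UploadFileRequest.builder()
        .putObjectRequest(req -> req.bucket("my-bucket").key("large-object"))
        .source(Paths.get("/data/large-file.bin"))
        .build());

// Block until the multipart upload completes (or throws).
upload.completionFuture().join();
```

Watching the pod's memory (e.g. with kubectl top pod) during this upload should reproduce the unbounded growth described above.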
Possible Solution
No response
Additional Information/Context
No response
AWS Java SDK version used
2.20.67
JDK version used
11
Operating System and version
Ubuntu 22.04 (Jammy Jellyfish)