Support S3TransferManager upload using Flux<ByteBuffer> #2714
Comments
@zoewangg @debora-ito
Thank you for reaching out @pkgonan, feature request acknowledged.
Hi @pkgonan. I'll be taking a look at this feature request. It may entail backwards-incompatible API changes as we try to settle on the correct API (since TransferManager is still in PREVIEW). I'll include you on the pull request once it's ready.
This has been resolved with #2817. Thanks for the feature request, @pkgonan! As mentioned in the PR, you should be able to adapt a Flux<ByteBuffer> to an AsyncRequestBody via AsyncRequestBody.fromPublisher. On the response/download side, this is currently a little bit more complicated: you would need to implement your own AsyncResponseTransformer.
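A minimal sketch of the resolved upload path: Flux already implements org.reactivestreams.Publisher<ByteBuffer>, so it can be handed to AsyncRequestBody.fromPublisher directly. The bucket and key are placeholders, the builder shapes reflect the PREVIEW API discussed in this thread (they may have moved since), and note that contentLength is still required, as the rest of the thread discusses:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import reactor.core.publisher.Flux;
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.UploadRequest;

public class FluxUpload {
    public static void main(String[] args) {
        byte[] bytes = "hello from a Flux".getBytes(StandardCharsets.UTF_8);

        // Flux<ByteBuffer> is a Reactive Streams Publisher<ByteBuffer>,
        // which is exactly what fromPublisher accepts.
        Flux<ByteBuffer> body = Flux.just(ByteBuffer.wrap(bytes));

        S3TransferManager tm = S3TransferManager.create();
        tm.upload(UploadRequest.builder()
                  .putObjectRequest(PutObjectRequest.builder()
                          .bucket("my-bucket")                // placeholder
                          .key("my-key")                      // placeholder
                          .contentLength((long) bytes.length) // still required up front
                          .build())
                  .requestBody(AsyncRequestBody.fromPublisher(body))
                  .build())
          .completionFuture()
          .join();
    }
}
```

Running this requires AWS credentials and a real bucket, so treat it as a usage sketch rather than a verified program.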
@pkgonan did you ever get this to work? AsyncRequestBody.fromPublisher(Flux) still complains about content length if I try to do something like this with the latest TransferManager in v2.17.210-PREVIEW.
@djchapm what error did you get? Make sure you have contentLength specified in PutObjectRequest.
Copying this from issue 34 in v1... based on what I'm seeing in the v2 transfer manager preview and #2817 and #2814, I couldn't get it to work; I hit the same reported issue, caused by needing the content size up front. Digging into the libraries, the fromPublisher path ends up constructing a meta request that hardcodes the content size to zero, which suggests it was intended for a publisher-style interface. Here's my code and the versions of everything for the test; if we could get this to work, I think most everyone's related feature requests would be resolved.
Output in console:
Can you try specifying contentLength in PutObjectRequest?
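A minimal sketch of that suggestion (bucket and key are placeholders; the length must equal the total number of bytes the publisher will emit, or the upload fails):

```java
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class WithContentLength {
    // Build the request with an explicit content length, the workaround
    // suggested above. totalBytes must match what the publisher emits.
    static PutObjectRequest request(long totalBytes) {
        return PutObjectRequest.builder()
                .bucket("my-bucket") // placeholder
                .key("my-key")       // placeholder
                .contentLength(totalBytes)
                .build();
    }
}
```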
Hi @zoewangg - thanks for your prompt discussion on this! Yes, that works... but I don't think it solves the main feature request: streaming uploads, heavily requested across a number of tickets. I thought that this ticket, #2817, #2814, and your efforts on the new TransferManager were all working towards uploading from a publisher where the size is not known until publisher.onComplete. See the code referenced in #2817 about "Arbitrary Object Transfers". There the request is fromString, yet the putObjectRequest does not specify a contentLength; additionally, @Bennett-Lynch mentions later that the async fromPublisher should adapt to a Flux... Is it expected, then, that even for this publisher-based method, the input size needs to be known ahead of time?
Yeah, using fromPublisher still requires the content length to be known up front; uploading a stream of unknown size is not supported yet. We don't have a timeline for this feature at the moment. Could you create a feature request? https://github.com/aws/aws-sdk-java-v2/issues/new/choose It will help prioritization if there are more 👍🏼 on the feature request.
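For anyone blocked on this in the meantime: uploads of unknown total size are conventionally built on S3 multipart upload, buffering the stream into fixed-size parts and completing the upload when the source ends. The part-buffering step can be sketched with only the JDK; this is a general technique, not the SDK's implementation, and the 4 KiB part size is for illustration only (real S3 parts must be at least 5 MiB except the last):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

// Buffer an arbitrary-length stream into fixed-size parts, the shape
// that S3 UploadPart calls would consume.
public class PartBuffer {

    static List<byte[]> toParts(InputStream in, int partSize) {
        List<byte[]> parts = new ArrayList<>();
        ByteArrayOutputStream pending = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        try {
            int n;
            while ((n = in.read(buf)) != -1) {
                pending.write(buf, 0, n);
                // Emit full parts as soon as enough bytes accumulate.
                while (pending.size() >= partSize) {
                    byte[] all = pending.toByteArray();
                    byte[] part = new byte[partSize];
                    System.arraycopy(all, 0, part, 0, partSize);
                    parts.add(part);
                    pending = new ByteArrayOutputStream();
                    pending.write(all, partSize, all.length - partSize);
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        if (pending.size() > 0) {
            parts.add(pending.toByteArray()); // final, possibly short, part
        }
        return parts;
    }

    public static void main(String[] args) {
        List<byte[]> parts =
                toParts(new ByteArrayInputStream(new byte[10_000]), 4096);
        System.out.println(parts.size()); // 4096 + 4096 + 1808 bytes -> 3
    }
}
```

The same buffering works whether the source is an InputStream or a reactive publisher of ByteBuffers; only the last part's size is unknown until the stream completes.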
Thanks @zoewangg - but a quick question on process... you have issue #37 tracking requested features for Transfer Manager in v2. The highest-voted feature request there is 474 from v1, which is what we're talking about here: streaming upload to S3 with unknown size. It's also the main goal of feature request #139 in v2, which was closed in favor of tracking/prioritizing in #37. So there are already requests covering this; if another were created, I imagine it would lose the history and severity that these older ones (2015 and 2017) carry. It would also be great to see some action on #37, since other tickets point to it and were closed in favor of it.

Requests for streaming uploads without knowing the size ahead of time - typical of any high-performance streaming application - have been coming in since ~2015; that's 7 years. If it hasn't been implemented by now despite all the upvotes and complaints, is there some motivation by AWS to specifically NOT implement this feature? I'm wondering if it's a cost issue: there are higher costs with allocating disk etc. to do file-based uploads in something like a Kubernetes environment, so such a feature would mostly benefit clients by removing the need for disk and thereby removing costs. I'm reaching here, just trying to understand why such an obvious capability wouldn't be supplied as a primary API for S3. In reviewing the comments, I'm not the only one having a hard time understanding this. If it's simply never going to be prioritized, then a response with a reasonable explanation might keep us all from pushing/waiting/complaining about it.
@djchapm thank you for the candid feedback. To clarify, this feature is on our radar and we do have plans to support it; however, it's not going to be included as part of the GA release. Our primary goal for Transfer Manager right now is to reach the point where customers can migrate from the v1 Transfer Manager to v2. Once we achieve that, we can start to tackle new features such as this one (btw, we are hiring 🙂 #3156).

To answer your question on the process: #37 was created initially to track all feature requests for the v2 Transfer Manager, and now that we have narrowed down the GA features, I think we should create separate issues for non-GA features. In hindsight, we should've made our plan clearer in #37. I was not aware of #139, which is why I suggested opening a new issue. I'll go ahead and re-open #139 to track this specific feature.
Describe the Feature
Support S3TransferManager upload using Flux<ByteBuffer>.
Consider changing UploadRequest's source from File to Flux<ByteBuffer>.
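To illustrate the request, a hypothetical sketch in Java syntax (the source(...) method does not exist in the SDK; it stands in for the proposed change from a File source). The key point is that no contentLength appears anywhere:

```java
// Hypothetical API: upload from a reactive stream whose total size is
// unknown until it completes.
Flux<ByteBuffer> source = fetchFromUpstream(); // illustrative producer
transferManager.upload(UploadRequest.builder()
        .putObjectRequest(req -> req.bucket("my-bucket").key("my-key"))
        .source(source) // hypothetical: today this takes a File
        .build());
```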
Additional Context
aws-sdk-java-v2/services-custom/s3-transfer-manager/src/main/java/software/amazon/awssdk/transfer/s3/UploadRequest.java
Line 39 in ca8aa92
aws-sdk-java-v2/services-custom/s3-transfer-manager/src/main/java/software/amazon/awssdk/transfer/s3/internal/DefaultS3TransferManager.java
Line 91 in df8b220