Description
As of #2418, uploading an image is done by posting the whole image in 512kb chunks, keyed by offset. That's done in the CLI in oxidecomputer/oxide.rs#109. I am working on doing something similar in the web console: oxidecomputer/console#1453.
The responsibility for doing the chunking and making all those calls is offloaded to the client. While making 500 or 1000 network round trips from the browser or CLI is not ideal, this is a reasonable strategy and we have gone for make-the-client-do-it in other contexts as well.
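The chunk arithmetic the client is responsible for can be sketched in stdlib-only Rust (the 512 KiB chunk size comes from the description above; the helper name is hypothetical, and the real CLI/console code in oxidecomputer/oxide.rs#109 and oxidecomputer/console#1453 will of course differ):

```rust
/// Hypothetical helper: split an image into 512 KiB chunks keyed by offset,
/// mirroring the one-POST-per-chunk scheme the CLI and console use.
const CHUNK_SIZE: usize = 512 * 1024;

fn chunks_with_offsets(data: &[u8]) -> Vec<(usize, &[u8])> {
    data.chunks(CHUNK_SIZE)
        .enumerate()
        // Each chunk's key is its byte offset into the whole image.
        .map(|(i, chunk)| (i * CHUNK_SIZE, chunk))
        .collect()
}
```

For a multi-GiB image this produces the hundreds or thousands of chunks mentioned above, each of which becomes its own network round trip.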
However, the StreamingBody extractor added in oxidecomputer/dropshot#617 seems like a really good fit for this. It takes a single streaming body and does precisely the chunking that we're doing on the client side. The client would make a single streaming request with the file, and Nexus could loop through the chunks and call disk_manual_import on each. See the BufList chunking example in Dropshot.
Lines 463 to 464 in 23f996a:

    /// Bulk write some bytes into a disk that's in state ImportingFromBulkWrites
    pub async fn disk_manual_import(
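To make the server-side shape concrete, here is a minimal stdlib-only sketch of the re-chunking loop. A StreamingBody yields byte buffers of arbitrary size, modeled here as an iterator of `Vec<u8>`; the function returns the (offset, chunk) pairs that would each be handed to disk_manual_import. This is an illustration of the idea, not Dropshot's actual API or the BufList implementation:

```rust
const CHUNK_SIZE: usize = 512 * 1024;

/// Re-chunk an incoming stream of arbitrarily sized byte buffers into
/// fixed 512 KiB writes keyed by offset. In Nexus, each returned pair
/// would be a disk_manual_import call instead of a Vec entry.
fn rechunk(body: impl IntoIterator<Item = Vec<u8>>) -> Vec<(usize, Vec<u8>)> {
    let mut writes = Vec::new();
    let mut buf: Vec<u8> = Vec::new();
    let mut offset = 0;
    for piece in body {
        buf.extend_from_slice(&piece);
        // Flush each complete 512 KiB chunk as soon as it fills up,
        // so we never buffer more than one chunk plus one piece.
        while buf.len() >= CHUNK_SIZE {
            let chunk: Vec<u8> = buf.drain(..CHUNK_SIZE).collect();
            writes.push((offset, chunk));
            offset += CHUNK_SIZE;
        }
    }
    // Trailing partial chunk, if any.
    if !buf.is_empty() {
        writes.push((offset, buf));
    }
    writes
}
```

The point is that the client's upload collapses to one streaming request, and the buffering stays bounded on the server side.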
Obstacles
Max request size
Streaming bodies are subject to our globally configured max request body size, which will be too small for images, which you can expect to be measured in GiB. @sunshowers has oxidecomputer/dropshot#618 to allow per-endpoint configuration of this maximum, but hasn't been able to get it over the line due to more urgent work. Bumping the global max to a huge number is not a great interim solution.
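For illustration, the interim workaround would amount to something like the following config fragment (the section and field names here are assumptions based on Dropshot's ConfigDropshot and how Nexus embeds it; check the actual Nexus config before relying on them):

```toml
# Hypothetical fragment: raising the global request body cap to cover
# multi-GiB images. Note this applies to EVERY endpoint on the server,
# not just bulk import, which is why it's not a great interim solution.
[dropshot_external]
request_body_max_bytes = 8589934592  # 8 GiB
```

Per-endpoint limits (oxidecomputer/dropshot#618) would let only the import endpoint accept huge bodies while everything else keeps a small cap.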
This is new work and we already have something that works
For that reason, I don't want to push this too hard; I just want to record my findings and have a good plan for follow-up work. I'm going to start by implementing the console side using the many POSTs, and we'll see how it goes.