Enforce storage quota before upload #8385
Conversation
📝 Walkthrough
This pull request enhances the dataset upload process by incorporating the total file size into storage quota validations. The frontend calculates the total file size of the files to be uploaded and passes this information to the backend. Several backend components, including the reserve and upload methods across multiple controllers and services, have been updated to consider the incoming upload size when validating available storage. Additionally, a new error message is introduced to handle cases where the upload exceeds the reserved space.
@fm3 Would you please take a look at this PR? The changes in the frontend are very minimal right now. So it should be ok if no frontend dev takes a look at it imo.
Actionable comments posted: 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- app/controllers/WKRemoteDataStoreController.scala (1 hunks)
- conf/messages (1 hunks)
- frontend/javascripts/admin/dataset/dataset_upload_view.tsx (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (3 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/ComposeService.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (5 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: circleci_build
🔇 Additional comments (9)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/ComposeService.scala (1)
64-64: LGTM! The addition of None for the totalFileSize parameter is correct, as this service is used for composing datasets rather than direct uploads.
conf/messages (1)
113-113: LGTM! The error message is clear and provides helpful guidance to users when they exceed their reserved upload size.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (3)
38-38: LGTM! The addition of the totalFileSize field to ReserveUploadInformation is appropriate.
129-132: LGTM! The Redis key definitions for tracking total and current upload sizes follow the existing naming convention.
651-652: LGTM! The cleanup of Redis keys in removeFromRedis is complete and correct.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (2)
125-125: LGTM! Added totalFileSize parameter for storage quota validation. The addition of the totalFileSize parameter in the ReserveUploadInformation constructor aligns with the PR's objective to enforce storage quota before upload.
417-417: LGTM! Added totalFileSize parameter for storage quota validation. The addition of the totalFileSize parameter in the ReserveUploadInformation constructor is consistent with the changes in the manual upload method.
frontend/javascripts/admin/dataset/dataset_upload_view.tsx (2)
319-319: LGTM! Correctly calculates total file size. The implementation uses reduce to calculate the total size of all files, which is an efficient approach.
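For illustration, a minimal TypeScript sketch of that computation; the function name is an assumption (the actual code in dataset_upload_view.tsx may be structured differently), only the reduce over the browser File objects' size property is taken from the review above.

// Sums the sizes (in bytes) of the selected files using Array.prototype.reduce.
function computeTotalFileSizeInBytes(files: File[]): number {
  return files.reduce((sum, file) => sum + file.size, 0);
}

The resulting number is what gets attached to the reserveUploadInformation object mentioned in the next comment.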
328-328: LGTM! Added totalFileSize to upload information. The calculated total file size is correctly included in the reserveUploadInformation object sent to the backend.
  usedStorageBytes <- organizationDAO.getUsedStorage(organization._id)
  _ <- Fox.runOptional(organization.includedStorageBytes)(includedStorage =>
-   bool2Fox(usedStorageBytes <= includedStorage)) ?~> "dataset.upload.storageExceeded" ~> FORBIDDEN
+   bool2Fox(usedStorageBytes + uploadInfo.totalFileSize.getOrElse(0L) <= includedStorage)) ?~> "dataset.upload.storageExceeded" ~> FORBIDDEN
🛠️ Refactor suggestion
Consider requiring totalFileSize for uploads.
The current implementation uses getOrElse(0L), which could silently pass the quota check if totalFileSize is None. Consider making totalFileSize a required field for uploads to prevent accidentally bypassing the quota check.
-bool2Fox(usedStorageBytes + uploadInfo.totalFileSize.getOrElse(0L) <= includedStorage)
+uploadInfo.totalFileSize match {
+ case Some(size) => bool2Fox(usedStorageBytes + size <= includedStorage)
+ case None => Fox.failure("dataset.upload.totalFileSizeRequired")
+}
Also add this message to conf/messages:
dataset.upload.totalFileSizeRequired=Total file size must be provided for uploads to enforce storage quota.
totalFileSize <- runningUploadMetadataStore.find(redisKeyForTotalFileSize(uploadId))
_ <- Fox.runOptional(totalFileSize) { maxFileSize =>
  runningUploadMetadataStore
    .find(redisKeyForCurrentUploadedTotalFileSize(uploadId))
    .flatMap(alreadyUploadedAmountOfBytes =>
      bool2Fox(alreadyUploadedAmountOfBytes.getOrElse("0").toLong + chunkSize <= maxFileSize.toLong))
} ?~> "dataset.upload.moreBytesThanReserved"
Fix potential race condition in size validation.
The current implementation has a race condition between checking the total size and updating it. Multiple concurrent uploads could each pass the check but collectively exceed the limit.
-totalFileSize <- runningUploadMetadataStore.find(redisKeyForTotalFileSize(uploadId))
-_ <- Fox.runOptional(totalFileSize) { maxFileSize =>
- runningUploadMetadataStore
- .find(redisKeyForCurrentUploadedTotalFileSize(uploadId))
- .flatMap(alreadyUploadedAmountOfBytes =>
- bool2Fox(alreadyUploadedAmountOfBytes.getOrElse("0").toLong + chunkSize <= maxFileSize.toLong))
-} ?~> "dataset.upload.moreBytesThanReserved"
+_ <- runningUploadMetadataStore.withLock(s"upload_size_check_$uploadId") {
+ for {
+ totalFileSize <- runningUploadMetadataStore.find(redisKeyForTotalFileSize(uploadId))
+ _ <- Fox.runOptional(totalFileSize) { maxFileSize =>
+ for {
+ currentSize <- runningUploadMetadataStore.find(redisKeyForCurrentUploadedTotalFileSize(uploadId))
+ newSize = currentSize.getOrElse("0").toLong + chunkSize
+ _ <- bool2Fox(newSize <= maxFileSize.toLong) ?~> "dataset.upload.moreBytesThanReserved"
+ _ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(uploadId),
+ String.valueOf(newSize))
+ } yield ()
+ }
+ } yield ()
+}
Also add a method to DataStoreRedisStore:
def withLock[T](key: String)(f: => Fox[T]): Fox[T] = {
// Implement Redis-based distributed locking
// Consider using Redisson or other battle-tested implementations
}
Yeah that might actually be a thing. Will take a look at it
@MichaelBuessemeyer Thanks for considering this feedback! Let me know if you need any help with implementing the distributed locking solution.
(_/)
(^.^)
(> <) Thanks for the feedback!
✏️ Learnings added
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8385
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala:256-262
Timestamp: 2025-02-07T09:22:54.315Z
Learning: The size validation in UploadService should use distributed locking to prevent race conditions between checking the total size and updating it, as multiple concurrent uploads could collectively exceed the limit.
_ <- Fox.runOptional(reserveUploadInfo.totalFileSize) { fileSize =>
  Fox.combined(
    List(
      runningUploadMetadataStore.insert(redisKeyForTotalFileSize(reserveUploadInfo.uploadId),
                                        String.valueOf(fileSize)),
      runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
    ))
}
_ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
Fix duplicate initialization of currentUploadedTotalFileSize.
The currentUploadedTotalFileSize is initialized twice:
- Inside the Fox.runOptional(reserveUploadInfo.totalFileSize) block
- Immediately after the block
_ <- Fox.runOptional(reserveUploadInfo.totalFileSize) { fileSize =>
Fox.combined(
List(
runningUploadMetadataStore.insert(redisKeyForTotalFileSize(reserveUploadInfo.uploadId),
String.valueOf(fileSize)),
runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
))
}
-_ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
…ableminds/webknossos into enforce-storage-quota-before-upload
organization: activeUser.organization,
totalFileCount: formValues.zipFile.length,
filePaths: filePaths,
totalFileSizeInBytes,
Hi @markbader,
here are the changes I mentioned which would need to be added to wklibs when uploading a dataset. Simply put: the initial reserve upload request now needs a new parameter called totalFileSizeInBytes, which is the sum of all files' sizes in bytes. The rest is backend work and no more changes are needed.
Old wklibs versions should still work, as this parameter is optional for now :)
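To make the required change concrete, here is a hedged TypeScript sketch of what the reserve-upload payload could look like. The fields organization, totalFileCount, filePaths and totalFileSizeInBytes are taken from the diff above; the interface name, the remaining fields and the endpoint path are assumptions for illustration only.

// Illustrative payload shape; only the four fields from the diff above are
// confirmed, everything else (including the endpoint path) is an assumption.
interface ReserveUploadInformation {
  uploadId: string;              // assumed field
  organization: string;
  totalFileCount: number;
  filePaths: string[];
  totalFileSizeInBytes?: number; // new, optional so older clients keep working
}

async function reserveUpload(info: ReserveUploadInformation): Promise<void> {
  await fetch("/data/datasets/reserveUpload", { // hypothetical path
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(info),
  });
}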
Actionable comments posted: 0
♻️ Duplicate comments (2)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (2)
171-182: ⚠️ Potential issue: Remove duplicate initialization of currentUploadedTotalFileSizeInBytes.
The currentUploadedTotalFileSizeInBytes is initialized twice:
- Inside the Fox.runOptional(reserveUploadInfo.totalFileSizeInBytes) block
- Immediately after the block
Apply this diff to fix the issue:
  _ <- Fox.runOptional(reserveUploadInfo.totalFileSizeInBytes) { fileSize =>
    Fox.combined(
      List(
        runningUploadMetadataStore.insert(redisKeyForTotalFileSizeInBytes(reserveUploadInfo.uploadId),
                                          String.valueOf(fileSize)),
        runningUploadMetadataStore
          .insert(redisKeyForCurrentUploadedTotalFileSizeInBytes(reserveUploadInfo.uploadId), "0")
      ))
  }
- _ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSizeInBytes(reserveUploadInfo.uploadId),
-                                        "0")
258-264: ⚠️ Potential issue: Fix potential race condition in size validation.
The current implementation has a race condition between checking the total size and updating it. Multiple concurrent uploads could each pass the check but collectively exceed the limit.
Based on the learnings from previous discussions, implement distributed locking to prevent race conditions:
-totalFileSizeInBytesOpt <- runningUploadMetadataStore.find(redisKeyForTotalFileSizeInBytes(uploadId))
-_ <- Fox.runOptional(totalFileSizeInBytesOpt) { maxFileSize =>
-  runningUploadMetadataStore
-    .find(redisKeyForCurrentUploadedTotalFileSizeInBytes(uploadId))
-    .flatMap(alreadyUploadedAmountOfBytes =>
-      bool2Fox(alreadyUploadedAmountOfBytes.getOrElse("0").toLong + chunkSize <= maxFileSize.toLong))
-} ?~> "dataset.upload.moreBytesThanReserved"
+_ <- runningUploadMetadataStore.withLock(s"upload_size_check_$uploadId") {
+  for {
+    totalFileSizeInBytesOpt <- runningUploadMetadataStore.find(redisKeyForTotalFileSizeInBytes(uploadId))
+    _ <- Fox.runOptional(totalFileSizeInBytesOpt) { maxFileSize =>
+      for {
+        currentSize <- runningUploadMetadataStore.find(redisKeyForCurrentUploadedTotalFileSizeInBytes(uploadId))
+        newSize = currentSize.getOrElse("0").toLong + chunkSize
+        _ <- bool2Fox(newSize <= maxFileSize.toLong) ?~> "dataset.upload.moreBytesThanReserved"
+        _ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSizeInBytes(uploadId),
+                                               String.valueOf(newSize))
+      } yield ()
+    }
+  } yield ()
+}
Also add a method to DataStoreRedisStore:
def withLock[T](key: String)(f: => Fox[T]): Fox[T] = {
  // Implement Redis-based distributed locking
  // Consider using Redisson or other battle-tested implementations
}
🧹 Nitpick comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (1)
322-329: Improve error message for size validation. The error message could be more descriptive to help users understand the issue better.
Apply this diff to improve the error message:
- Fox.failure(s"Uploaded dataset exceeds the maximum allowed size of $maxFileSize bytes")
+ Fox.failure(s"Uploaded dataset size ($actualFileSize bytes) exceeds the maximum allowed size of $maxFileSize bytes")
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (6 hunks)
🧰 Additional context used
🧠 Learnings (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (1)
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8385
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala:256-262
Timestamp: 2025-02-07T09:22:54.409Z
Learning: The size validation in UploadService should use distributed locking to prevent race conditions between checking the total size and updating it, as multiple concurrent uploads could collectively exceed the limit.
🔇 Additional comments (3)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (3)
32-44: LGTM! The addition of the totalFileSizeInBytes field to the ReserveUploadInformation case class is well-designed:
- Uses Option[Long] for backward compatibility
- Automatically handled by the JSON format
129-132: LGTM! The new Redis key methods follow the established naming pattern and are well-structured.
661-662: LGTM! The cleanup of the new Redis keys is thorough and consistent with the existing pattern.
Looking pretty good! First test worked for me.
Concerning the hidden error message: Do you think the frontend could do a check first, before the reserve request is even sent? (Maybe even before the upload button is hit?) In theory, the frontend should be able to determine if the selected files would be too much, right? Since it knows both limit and existing amount in the organization?
Did you also test uploading with the libs? If not, I could probably also do that.
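A rough sketch of such a client-side preflight check, assuming the organization object exposes includedStorageBytes and usedStorageBytes (as used elsewhere in this PR); the function names are illustrative:

// Remaining storage for the organization, in bytes.
function getLeftOverStorageBytes(org: { includedStorageBytes: number; usedStorageBytes: number }): number {
  return org.includedStorageBytes - org.usedStorageBytes;
}

// Preflight check before sending the reserve request (or before enabling the upload button).
function uploadFitsIntoQuota(
  org: { includedStorageBytes: number; usedStorageBytes: number },
  totalFileSizeInBytes: number,
): boolean {
  return totalFileSizeInBytes <= getLeftOverStorageBytes(org);
}

This mirrors the getLeftOverStorageBytes helper that was later added in pricing_plan_utils.ts (see the review below).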
        .insert(redisKeyForCurrentUploadedTotalFileSizeInBytes(reserveUploadInfo.uploadId), "0")
      ))
  }
  _ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSizeInBytes(reserveUploadInfo.uploadId),
didn’t you just insert something at this key in the line above, inside of the runOptional?
Oh woopsie 🦓 🙈
I removed this line :)
In theory, the frontend should be able to determine if the selected files would be too much, right? Since it knows both limit and existing amount in the organization?
Sure, that's a good suggestion. I missed that. It should now be added to the changes of this PR.
Edit: The error text adjusts in case the user is not an admin and tells them to contact an admin to increase the storage.
Did you also test uploading with the libs? If not, I could probably also do that.
Now yes: also a very good suggestion :D
=> For testing I used v0.16.9 and it worked without any problems.
Actionable comments posted: 0
♻️ Duplicate comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (1)
256-262: ⚠️ Potential issue: Fix race condition in size validation.
The current implementation has a race condition between checking the total size and updating it. Multiple concurrent uploads could each pass the check but collectively exceed the limit.
Apply this diff to implement distributed locking:
-totalFileSizeInBytesOpt <- runningUploadMetadataStore.find(redisKeyForTotalFileSizeInBytes(uploadId))
-_ <- Fox.runOptional(totalFileSizeInBytesOpt) { maxFileSize =>
-  runningUploadMetadataStore
-    .find(redisKeyForCurrentUploadedTotalFileSizeInBytes(uploadId))
-    .flatMap(alreadyUploadedAmountOfBytes =>
-      bool2Fox(alreadyUploadedAmountOfBytes.getOrElse("0").toLong + chunkSize <= maxFileSize.toLong))
-} ?~> "dataset.upload.moreBytesThanReserved"
+_ <- runningUploadMetadataStore.withLock(s"upload_size_check_$uploadId") {
+  for {
+    totalFileSizeInBytesOpt <- runningUploadMetadataStore.find(redisKeyForTotalFileSizeInBytes(uploadId))
+    _ <- Fox.runOptional(totalFileSizeInBytesOpt) { maxFileSize =>
+      for {
+        currentSize <- runningUploadMetadataStore.find(redisKeyForCurrentUploadedTotalFileSizeInBytes(uploadId))
+        newSize = currentSize.getOrElse("0").toLong + chunkSize
+        _ <- bool2Fox(newSize <= maxFileSize.toLong) ?~> "dataset.upload.moreBytesThanReserved"
+        _ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSizeInBytes(uploadId),
+                                               String.valueOf(newSize))
+      } yield ()
+    }
+  } yield ()
+}
Also add a method to DataStoreRedisStore:
def withLock[T](key: String)(f: => Fox[T]): Fox[T] = {
  // Implement Redis-based distributed locking
  // Consider using Redisson or other battle-tested implementations
}
🧹 Nitpick comments (1)
frontend/javascripts/admin/organization/pricing_plan_utils.ts (1)
50-52: Consider handling negative storage values.
While the current implementation is correct, it might be worth considering edge cases where usedStorageBytes could exceed includedStorageBytes, resulting in negative values.
Consider adding a safety check:
export function getLeftOverStorageBytes(organization: APIOrganization): number {
-  return organization.includedStorageBytes - organization.usedStorageBytes;
+  const leftOver = organization.includedStorageBytes - organization.usedStorageBytes;
+  return Math.max(0, leftOver);
}
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- CHANGELOG.unreleased.md (1 hunks)
- app/controllers/WKRemoteDataStoreController.scala (1 hunks)
- conf/messages (1 hunks)
- frontend/javascripts/admin/dataset/dataset_upload_view.tsx (6 hunks)
- frontend/javascripts/admin/organization/pricing_plan_utils.ts (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- conf/messages
- CHANGELOG.unreleased.md
- app/controllers/WKRemoteDataStoreController.scala
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala
- frontend/javascripts/admin/dataset/dataset_upload_view.tsx
🧰 Additional context used
🧠 Learnings (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (1)
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8385
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala:256-262
Timestamp: 2025-02-07T09:22:54.409Z
Learning: The size validation in UploadService should use distributed locking to prevent race conditions between checking the total size and updating it, as multiple concurrent uploads could collectively exceed the limit.
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: circleci_build
🔇 Additional comments (4)
frontend/javascripts/admin/organization/pricing_plan_utils.ts (1)
50-52: LGTM! The new function is well-implemented. The function provides a clean way to calculate remaining storage bytes, which aligns well with the PR's objective of enforcing storage quotas before upload.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (3)
32-41: LGTM! The case class changes look good. The addition of totalFileSizeInBytes as an optional Long field is well-designed, maintaining backward compatibility while enabling storage quota enforcement.
129-132: LGTM! Redis key management is consistent. The new Redis key methods follow the established naming pattern and are properly encapsulated.
659-660: LGTM! Cleanup handling is complete. The cleanup properly removes all Redis keys related to file size tracking.
Looking good! What happens exactly, when an uploadChunk request fails? Will the frontend retry it or abort? Should we add a cleanup functionality on the server in case of abort?
Also, since the frontend changes are growing, maybe it would be a good idea to ask another frontend dev to have a look.
  unfinishedUploadsWithoutIds: List[UnfinishedUpload]): Fox[List[UnfinishedUpload]] =
    for {
-     maybeUnfinishedUploads: List[Box[UnfinishedUpload]] <- Fox.sequence(
+     maybeUnfinishedUploads: List[Box[Option[UnfinishedUpload]]] <- Fox.sequence(
what is the maybe encoding here? Could the listed uploads actually be finished?
While working on this PR and checking whether the auto clean-up upon rejection (due to reserved storage overflow / too much storage after upload and in the finish upload route) works, I noticed that the rejected uploads were listed in the frontend as continuable. But this is undesired in this case. This had multiple reasons:
- Not all redis entries for the upload were cleaned up upon rejection
- The postgres entry for the not-yet-finished uploaded dataset still existed
So to tackle this I tried to solve both problems by 1. deleting all redis keys, 2. deleting the dataset from postgres and 3. only sending unfinished uploads which are present in the redis db.
Do you think the 3rd is a little too much?
No, I think that makes sense. The ones not in redis can’t be properly continued anyway if I’m not mistaken
  }
  ))
- foundUnfinishedUploads = maybeUnfinishedUploads.flatten
+ foundUnfinishedUploads = maybeUnfinishedUploads.flatten.flatten
I’d say it’s ok. The function is a little hard to read, so could you please add a comment on what it’s for, and that it does indeed filter the uploads no longer fully present in redis?
Frontend code LGTM, did not test 👍
  ? unfinishedUploadToContinue.uploadId
  : `${dayjs(Date.now()).format("YYYY-MM-DD_HH-mm")}__${newDatasetName}__${getRandomString()}`;
const filePaths = formValues.zipFile.map((file) => file.path || "");
const totalFileSizeInBytes = getFileSize(formValues.zipFile);
Just to make sure, this file size measurement is not used for the quota computation, but is used by the server for something else, right? Otherwise, the unzipped file size should be measured.
The file size is used to early-reject far too large uploads. I discussed this with @fm3. In case the compressed version already exceeds the left-over storage space, the upload is rejected. Else, if it exceeds the quota only after the decompression on the server side, we see this as ok. Otherwise, the frontend would need to decompress the files itself before upload, making the upload workflow a little less streamlined and maybe even taking a pretty long time, as the decompression would be done in the browser.
Else if it exceeds after the decompression on the server side, we see this as ok.
Does this mean users can still exceed their storage quota if the uncompressed data is (much) larger than the uploaded zip file?
Otherwise, the frontend would need to decompress the files itself before upload making the upload workflow a little less streamlined and maybe even needing pretty long time as it is a decompression done in the browser.
I think the uncompressed size of the zip entries is available via metadata, see https://gildas-lormeau.github.io/zip.js/api/interfaces/Entry.html#uncompressedsize
If I read the code correctly, we already iterate over all file entries (reader.getEntries()) and could access the uncompressed size there as well.
Feel free to ignore if this doesn't have priority. I just wanted to note :)
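If this gets picked up in a follow-up, a possible sketch using zip.js (assuming the @zip.js/zip.js package that the linked docs belong to) could sum the uncompressed entry sizes from metadata without actually decompressing anything:

import { BlobReader, ZipReader } from "@zip.js/zip.js";

// Sums the uncompressed sizes of all zip entries via their metadata.
async function getUncompressedSizeInBytes(zipFile: File): Promise<number> {
  const reader = new ZipReader(new BlobReader(zipFile));
  try {
    const entries = await reader.getEntries();
    return entries.reduce((sum, entry) => sum + entry.uncompressedSize, 0);
  } finally {
    await reader.close();
  }
}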
Does this mean users can still exceed their storage quota if the umcompressed data is (much) larger than the uploaded zip file?
Yes, that's what we agreed upon 👍
I think the uncompressed size of the zip entries is available via metadata, see https://gildas-lormeau.github.io/zip.js/api/interfaces/Entry.html#uncompressedsize
If I read the code correctly we already iterate over all file entries (reader.getEntries()) and could access the uncompressed size there as well. Feel free to ignore if this doesn't have priority. I just wanted to note :)
@fm3 Do you think I should do this in this PR? I personally would prefer to make this a follow-up issue. The PR is already open quite a long time and this should be easy to do as a follow-up.
ok let’s go for follow-up issue
Co-authored-by: Daniel <[email protected]>
…orce-storage-quota-before-upload
TODO
…orce-storage-quota-before-upload
Yep, looks good. I commented out the deletion of the db entry so it is kept but left the removal of the redis entries. => The failed upload did not show in the list of uploads to continue. But when I also commented out the removal of the redis entries, it showed up again :) I triggered the "failed upload" in this scenario by sending a too small hard coded
Sooo I think this can go for another iteration ♻️
LGTM, just added two suggestions on wording
…orce-storage-quota-before-upload
This reverts commit 3caa167.
Co-authored-by: Florian M <[email protected]>
…t exceed reserved size for upload
tuple(
  "resumableChunkNumber" -> number,
  "resumableChunkSize" -> number,
  "resumableCurrentChunkSize" -> number,
@fm3 this should be fine to add as a param as it is already sent by previous wklibs versions, correct @markbader?
Moreover, now that I added this, your wklibs PR should hopefully be working now 👍.
Yes, should be fine. The old libs do send this already.
If someone lies about this number, the chunk request may wrongly pass, but finishUpload should ultimately be rejected, which seems fine to me.
Currently, the storage quota enforcement does not consider the size of the current upload. With these changes, the upload sends its total size so the check can be done including the size of the dataset that should be uploaded. The backend auto-rejects in case the total sum overflows the allowed storage quota. Additionally, the backend automatically rejects chunks which would overflow the reserved amount of bytes for the upload.
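Conceptually, the two new checks can be summarized as follows (illustrative TypeScript only; the actual implementation lives in the Scala backend shown in the review comments above):

// 1) At reserve time: the announced upload must still fit into the organization's quota.
function reserveAllowed(usedStorageBytes: number, includedStorageBytes: number, totalFileSizeInBytes: number): boolean {
  return usedStorageBytes + totalFileSizeInBytes <= includedStorageBytes;
}

// 2) Per uploaded chunk: the running total must never exceed the reserved size.
function chunkAllowed(alreadyUploadedBytes: number, chunkSizeInBytes: number, reservedTotalBytes: number): boolean {
  return alreadyUploadedBytes + chunkSizeInBytes <= reservedTotalBytes;
}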
URL of deployed dev instance (used for testing):
Steps to test:
- In dataset_upload_view.tsx, edit line 319 to force it to be fewer bytes than the dataset you now want to upload. Then save and reload the frontend.
- Revert the dataset_upload_view.tsx file changes.
- In application.conf, set reportUsedStorage.enabled = true.
- Team or so (not sure whether that's necessary).
TODOs:
- Rename totalFileSize to totalFileSizeInBytes? Do you have an improvement idea @fm3?
Issues:
(Please delete unneeded items, merge only when none are left open)