
Conversation

Contributor

@MichaelBuessemeyer commented Feb 6, 2025

Currently, the storage quota enforcement does not consider the size of the current upload. With these changes, the upload sends its upload size so the check can be done including the size of the dataset that is to be uploaded. The backend automatically rejects the reservation in case the total sum would overflow the allowed storage quota. Additionally, the backend automatically rejects chunks which would overflow the reserved amount of bytes for the upload.
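As a minimal sketch, and assuming the names visible in this PR's diffs (usedStorageBytes, includedStorageBytes, totalFileSizeInBytes), the reservation check amounts to the following; the real check lives in WKRemoteDataStoreController inside a Fox for-comprehension and is simplified here to plain Scala:

// Rejects the reservation if the organization has a quota and the new dataset would overflow it.
// Organizations without includedStorageBytes are never rejected (forall on None is true).
def quotaAllowsReservation(usedStorageBytes: Long,
                           includedStorageBytes: Option[Long],
                           totalFileSizeInBytes: Option[Long]): Boolean =
  includedStorageBytes.forall(included =>
    usedStorageBytes + totalFileSizeInBytes.getOrElse(0L) <= included)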

URL of deployed dev instance (used for testing):

  • https://___.webknossos.xyz

Steps to test:

  • I suggest testing locally
  • Just upload a dataset via the frontend. Everything should be working nicely.
  • Open the frontend file dataset_upload_view.tsx and edit line 319 to force the reported size to be fewer bytes than the dataset you now want to upload. Then save and reload the frontend.
  • Try uploading the dataset. During the upload, the request should fail, because the frontend tries to upload more bytes than were originally reserved with the request.
  • Undo the dataset_upload_view.tsx file changes
  • Open the database and restrict the allowed storage amount of the default organization such that the next dataset you want to upload would exceed the limit.
    • in application.conf set
    fetchUsedStorage {
      rescanInterval = 30 seconds # do not scan organizations whose last scan is more recent than this
      tickerInterval = 10 seconds # scan some organizations at each tick
      scansPerTick = 10 # scan x organizations at each tick
    }
    
    • and reportUsedStorage.enabled = true
    • And set the organization plan to Team or similar (not sure whether that is necessary)
  • Now try to upload the dataset. The request should also fail.

TODOs:

  • rename totalFileSize to totalFileSizeInBytes
  • Clean up redis keys upon successful upload
  • When the reserve upload is rejected because the new dataset would exceed the organization's storage quota, the error sent by the backend to the frontend is nested and not very clear:
    [screenshot of nested error message]
    Do you have an improvement idea @fm3?
  • Upon finishing the upload, check again (before extracting a potentially existing archive) whether the uploaded data exceeds the reserved amount.
  • When the upload is rejected, delete the uploaded data to save disk space

Issues:

  • Fixes Enforce Storage Quota already before upload (#7962)

(Please delete unneeded items, merge only when none are left open)

  • Updated changelog
  • Needs datastore update after deployment

Contributor

coderabbitai bot commented Feb 6, 2025

📝 Walkthrough

This pull request enhances the dataset upload process by incorporating the total file size into storage quota validations. The frontend calculates the total file size of the files to be uploaded and passes this information to the backend. Several backend components, including the reserve and upload methods across multiple controllers and services, have been updated to consider the incoming upload size when validating available storage. Additionally, a new error message is introduced to handle cases where the upload exceeds the reserved space.

Changes

File(s) and change summary:

  • app/.../WKRemoteDataStoreController.scala: Updated reserveDatasetUpload to check that usedStorageBytes + totalFileSize stays within the storage quota.
  • webknossos-datastore/.../DataSourceController.scala: Modified the reserveManualUpload and uploadChunk methods to include an optional totalFileSize parameter for improved upload reservation logic.
  • frontend/javascripts/.../dataset_upload_view.tsx: Added a new variable totalFileSizeInBytes in handleSubmit to send total file size information to the backend.
  • webknossos-datastore/.../ComposeService.scala, webknossos-datastore/.../UploadService.scala: Updated ReserveUploadInformation to include a totalFileSizeInBytes field; in UploadService, added Redis key methods and logic for managing total file size.
  • conf/messages: Introduced a new error message (dataset.upload.moreBytesThanReserved) for notifying users when the uploaded dataset exceeds the reserved size.
  • frontend/javascripts/.../organization/pricing_plan_utils.ts: Added a getLeftOverStorageBytes function to calculate remaining storage for an organization.
  • CHANGELOG.unreleased.md: Updated to include a new feature that enforces the maximum available storage for organizations during uploads.

Assessment against linked issues

Objectives from linked issue #7962:

  • Enforce Storage Quota already before upload
  • Determine the size of the to-be-uploaded files in the frontend
  • Check if the total file size exceeds the quota before reserving

Possibly related PRs

Suggested labels

bug

Suggested reviewers

  • fm3

Poem

I'm a rabbit with a twirl and a hop,
Guiding uploads from bottom to top.
Bytes add up with a skip and a cheer,
Ensuring quota is always clear.
With each change, the process is tight—
Hoppy code makes everything right!
🐇🌟


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6f7b6a8 and 0238041.

📒 Files selected for processing (1)
  • CHANGELOG.unreleased.md (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • CHANGELOG.unreleased.md
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: circleci_build


@MichaelBuessemeyer marked this pull request as ready for review February 6, 2025 15:27
@MichaelBuessemeyer
Contributor Author

@fm3 Would you please take a look at this PR? The changes in the frontend are very minimal right now. So it should be ok if no frontend dev takes a look at it imo.

Contributor

@coderabbitai bot left a comment

Actionable comments posted: 3

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5447c6e and 43a60e1.

📒 Files selected for processing (6)
  • app/controllers/WKRemoteDataStoreController.scala (1 hunks)
  • conf/messages (1 hunks)
  • frontend/javascripts/admin/dataset/dataset_upload_view.tsx (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (3 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/ComposeService.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (5 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: circleci_build
🔇 Additional comments (9)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/ComposeService.scala (1)

64-64: LGTM!

The addition of None for the totalFileSize parameter is correct, as this service is used for composing datasets rather than direct uploads.

conf/messages (1)

113-113: LGTM!

The error message is clear and provides helpful guidance to users when they exceed their reserved upload size.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (3)

38-38: LGTM!

The addition of totalFileSize field to ReserveUploadInformation is appropriate.


129-132: LGTM!

The Redis key definitions for tracking total and current upload sizes follow the existing naming convention.


651-652: LGTM!

The cleanup of Redis keys in removeFromRedis is complete and correct.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (2)

125-125: LGTM! Added totalFileSize parameter for storage quota validation.

The addition of the totalFileSize parameter in the ReserveUploadInformation constructor aligns with the PR's objective to enforce storage quota before upload.


417-417: LGTM! Added totalFileSize parameter for storage quota validation.

The addition of the totalFileSize parameter in the ReserveUploadInformation constructor is consistent with the changes in the manual upload method.

frontend/javascripts/admin/dataset/dataset_upload_view.tsx (2)

319-319: LGTM! Correctly calculates total file size.

The implementation uses reduce to calculate the total size of all files, which is an efficient approach.


328-328: LGTM! Added totalFileSize to upload information.

The calculated total file size is correctly included in the reserveUploadInformation object sent to the backend.

 usedStorageBytes <- organizationDAO.getUsedStorage(organization._id)
 _ <- Fox.runOptional(organization.includedStorageBytes)(includedStorage =>
-  bool2Fox(usedStorageBytes <= includedStorage)) ?~> "dataset.upload.storageExceeded" ~> FORBIDDEN
+  bool2Fox(usedStorageBytes + uploadInfo.totalFileSize.getOrElse(0L) <= includedStorage)) ?~> "dataset.upload.storageExceeded" ~> FORBIDDEN
Contributor

🛠️ Refactor suggestion

Consider requiring totalFileSize for uploads.

The current implementation uses getOrElse(0L) which could silently pass the quota check if totalFileSize is None. Consider making totalFileSize a required field for uploads to prevent accidentally bypassing the quota check.

-bool2Fox(usedStorageBytes + uploadInfo.totalFileSize.getOrElse(0L) <= includedStorage)
+uploadInfo.totalFileSize match {
+  case Some(size) => bool2Fox(usedStorageBytes + size <= includedStorage)
+  case None => Fox.failure("dataset.upload.totalFileSizeRequired")
+}

Also add this message to conf/messages:

dataset.upload.totalFileSizeRequired=Total file size must be provided for uploads to enforce storage quota.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-bool2Fox(usedStorageBytes + uploadInfo.totalFileSize.getOrElse(0L) <= includedStorage)) ?~> "dataset.upload.storageExceeded" ~> FORBIDDEN
+uploadInfo.totalFileSize match {
+  case Some(size) => bool2Fox(usedStorageBytes + size <= includedStorage)
+  case None => Fox.failure("dataset.upload.totalFileSizeRequired")
+} ?~> "dataset.upload.storageExceeded" ~> FORBIDDEN

Comment on lines 256 to 262

totalFileSize <- runningUploadMetadataStore.find(redisKeyForTotalFileSize(uploadId))
_ <- Fox.runOptional(totalFileSize) { maxFileSize =>
  runningUploadMetadataStore
    .find(redisKeyForCurrentUploadedTotalFileSize(uploadId))
    .flatMap(alreadyUploadedAmountOfBytes =>
      bool2Fox(alreadyUploadedAmountOfBytes.getOrElse("0").toLong + chunkSize <= maxFileSize.toLong))
} ?~> "dataset.upload.moreBytesThanReserved"
Contributor

@coderabbitai bot commented Feb 6, 2025

⚠️ Potential issue

Fix potential race condition in size validation.

The current implementation has a race condition between checking the total size and updating it. Multiple concurrent uploads could each pass the check but collectively exceed the limit.

-totalFileSize <- runningUploadMetadataStore.find(redisKeyForTotalFileSize(uploadId))
-_ <- Fox.runOptional(totalFileSize) { maxFileSize =>
-  runningUploadMetadataStore
-    .find(redisKeyForCurrentUploadedTotalFileSize(uploadId))
-    .flatMap(alreadyUploadedAmountOfBytes =>
-      bool2Fox(alreadyUploadedAmountOfBytes.getOrElse("0").toLong + chunkSize <= maxFileSize.toLong))
-} ?~> "dataset.upload.moreBytesThanReserved"
+_ <- runningUploadMetadataStore.withLock(s"upload_size_check_$uploadId") {
+  for {
+    totalFileSize <- runningUploadMetadataStore.find(redisKeyForTotalFileSize(uploadId))
+    _ <- Fox.runOptional(totalFileSize) { maxFileSize =>
+      for {
+        currentSize <- runningUploadMetadataStore.find(redisKeyForCurrentUploadedTotalFileSize(uploadId))
+        newSize = currentSize.getOrElse("0").toLong + chunkSize
+        _ <- bool2Fox(newSize <= maxFileSize.toLong) ?~> "dataset.upload.moreBytesThanReserved"
+        _ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(uploadId), 
+               String.valueOf(newSize))
+      } yield ()
+    }
+  } yield ()
+}

Also add a method to DataStoreRedisStore:

def withLock[T](key: String)(f: => Fox[T]): Fox[T] = {
  // Implement Redis-based distributed locking
  // Consider using Redisson or other battle-tested implementations
}
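For reference, a minimal sketch of such a helper, written against an assumed key-value interface (the real DataStoreRedisStore API is not shown in this PR) and using the standard Redis SET-NX-with-TTL plus unique-token pattern; Redisson's RLock would be the battle-tested alternative mentioned above:

import java.util.UUID
import scala.concurrent.{ExecutionContext, Future}

trait MinimalRedisClient { // assumed interface, not the real DataStoreRedisStore
  def setIfAbsent(key: String, value: String, ttlMillis: Long): Future[Boolean] // SET key value NX PX ttl
  def deleteIfValueMatches(key: String, value: String): Future[Unit]            // compare-and-delete
}

def withLock[T](redis: MinimalRedisClient, key: String, ttlMillis: Long = 30000L)(
    body: => Future[T])(implicit ec: ExecutionContext): Future[T] = {
  val token = UUID.randomUUID().toString // unique token so we never release someone else's lock
  redis.setIfAbsent(key, token, ttlMillis).flatMap {
    case false => Future.failed(new IllegalStateException(s"lock '$key' is already held"))
    case true =>
      body.transformWith(result => // always release the lock, then surface the original outcome
        redis.deleteIfValueMatches(key, token).flatMap(_ => Future.fromTry(result)))
  }
}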

Contributor Author

Yeah that might actually be a thing. Will take a look at it

Contributor

@MichaelBuessemeyer Thanks for considering this feedback! Let me know if you need any help with implementing the distributed locking solution.

(\_/)
(^.^)
(> <) Thanks for the feedback!


✏️ Learnings added
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8385
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala:256-262
Timestamp: 2025-02-07T09:22:54.315Z
Learning: The size validation in UploadService should use distributed locking to prevent race conditions between checking the total size and updating it, as multiple concurrent uploads could collectively exceed the limit.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

Comment on lines 171 to 179

_ <- Fox.runOptional(reserveUploadInfo.totalFileSize) { fileSize =>
  Fox.combined(
    List(
      runningUploadMetadataStore.insert(redisKeyForTotalFileSize(reserveUploadInfo.uploadId),
                                        String.valueOf(fileSize)),
      runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
    ))
}
_ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
Contributor

⚠️ Potential issue

Fix duplicate initialization of currentUploadedTotalFileSize.

The currentUploadedTotalFileSize is initialized twice:

  1. Inside the Fox.runOptional(reserveUploadInfo.totalFileSize) block
  2. Immediately after the block
 _ <- Fox.runOptional(reserveUploadInfo.totalFileSize) { fileSize =>
   Fox.combined(
     List(
       runningUploadMetadataStore.insert(redisKeyForTotalFileSize(reserveUploadInfo.uploadId),
                                         String.valueOf(fileSize)),
       runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
     ))
 }
-_ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-_ <- Fox.runOptional(reserveUploadInfo.totalFileSize) { fileSize =>
-  Fox.combined(
-    List(
-      runningUploadMetadataStore.insert(redisKeyForTotalFileSize(reserveUploadInfo.uploadId),
-                                        String.valueOf(fileSize)),
-      runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
-    ))
-}
-_ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
+_ <- Fox.runOptional(reserveUploadInfo.totalFileSize) { fileSize =>
+  Fox.combined(
+    List(
+      runningUploadMetadataStore.insert(redisKeyForTotalFileSize(reserveUploadInfo.uploadId),
+                                        String.valueOf(fileSize)),
+      runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSize(reserveUploadInfo.uploadId), "0")
+    ))
+}

Michael Büßemeyer added 2 commits February 7, 2025 10:38
…ableminds/webknossos into enforce-storage-quota-before-upload
organization: activeUser.organization,
totalFileCount: formValues.zipFile.length,
filePaths: filePaths,
totalFileSizeInBytes,
Contributor Author

Hi @markbader,

here are the changes I mentioned which would need to be added to wklibs when uploading a dataset. Simply put: the initial reserve upload request now needs a new parameter called totalFileSizeInBytes, which is the sum of all files' sizes in bytes. The rest is backend work and no more changes are needed.

Old wklibs versions should still work, as this parameter is optional for now :)
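For completeness, a hedged sketch of the reserve-upload payload as wklibs would now send it; the field names follow the snippets visible in this PR, but the type of filePaths is an assumption and the real case class likely carries further fields:

// Shape of the reserve-upload request body; only the fields visible in this PR are shown.
case class ReserveUploadInformation(uploadId: String,
                                    organization: String,
                                    totalFileCount: Int,
                                    filePaths: List[String],            // assumed type
                                    totalFileSizeInBytes: Option[Long]) // new; optional so old wklibs keep working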

Contributor

@coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (2)

171-182: ⚠️ Potential issue

Remove duplicate initialization of currentUploadedTotalFileSizeInBytes.

The currentUploadedTotalFileSizeInBytes is initialized twice:

  1. Inside the Fox.runOptional(reserveUploadInfo.totalFileSizeInBytes) block
  2. Immediately after the block

Apply this diff to fix the issue:

  _ <- Fox.runOptional(reserveUploadInfo.totalFileSizeInBytes) { fileSize =>
    Fox.combined(
      List(
        runningUploadMetadataStore.insert(redisKeyForTotalFileSizeInBytes(reserveUploadInfo.uploadId),
                                          String.valueOf(fileSize)),
        runningUploadMetadataStore
          .insert(redisKeyForCurrentUploadedTotalFileSizeInBytes(reserveUploadInfo.uploadId), "0")
      ))
  }
- _ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSizeInBytes(reserveUploadInfo.uploadId),
-                                         "0")

258-264: ⚠️ Potential issue

Fix potential race condition in size validation.

The current implementation has a race condition between checking the total size and updating it. Multiple concurrent uploads could each pass the check but collectively exceed the limit.

Based on the learnings from previous discussions, implement distributed locking to prevent race conditions:

-totalFileSizeInBytesOpt <- runningUploadMetadataStore.find(redisKeyForTotalFileSizeInBytes(uploadId))
-_ <- Fox.runOptional(totalFileSizeInBytesOpt) { maxFileSize =>
-  runningUploadMetadataStore
-    .find(redisKeyForCurrentUploadedTotalFileSizeInBytes(uploadId))
-    .flatMap(alreadyUploadedAmountOfBytes =>
-      bool2Fox(alreadyUploadedAmountOfBytes.getOrElse("0").toLong + chunkSize <= maxFileSize.toLong))
-} ?~> "dataset.upload.moreBytesThanReserved"
+_ <- runningUploadMetadataStore.withLock(s"upload_size_check_$uploadId") {
+  for {
+    totalFileSizeInBytesOpt <- runningUploadMetadataStore.find(redisKeyForTotalFileSizeInBytes(uploadId))
+    _ <- Fox.runOptional(totalFileSizeInBytesOpt) { maxFileSize =>
+      for {
+        currentSize <- runningUploadMetadataStore.find(redisKeyForCurrentUploadedTotalFileSizeInBytes(uploadId))
+        newSize = currentSize.getOrElse("0").toLong + chunkSize
+        _ <- bool2Fox(newSize <= maxFileSize.toLong) ?~> "dataset.upload.moreBytesThanReserved"
+        _ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSizeInBytes(uploadId), 
+               String.valueOf(newSize))
+      } yield ()
+    }
+  } yield ()
+}

Also add a method to DataStoreRedisStore:

def withLock[T](key: String)(f: => Fox[T]): Fox[T] = {
  // Implement Redis-based distributed locking
  // Consider using Redisson or other battle-tested implementations
}
🧹 Nitpick comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (1)

322-329: Improve error message for size validation.

The error message could be more descriptive to help users understand the issue better.

Apply this diff to improve the error message:

-            Fox.failure(s"Uploaded dataset exceeds the maximum allowed size of $maxFileSize bytes")
+            Fox.failure(s"Uploaded dataset size ($actualFileSize bytes) exceeds the maximum allowed size of $maxFileSize bytes")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aeb062c and 6113e0a.

📒 Files selected for processing (1)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (6 hunks)
🧰 Additional context used
🧠 Learnings (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (1)
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8385
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala:256-262
Timestamp: 2025-02-07T09:22:54.409Z
Learning: The size validation in UploadService should use distributed locking to prevent race conditions between checking the total size and updating it, as multiple concurrent uploads could collectively exceed the limit.
🔇 Additional comments (3)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (3)

32-44: LGTM!

The addition of totalFileSizeInBytes field to ReserveUploadInformation case class is well-designed:

  • Uses Option[Long] for backward compatibility
  • Automatically handled by the JSON format

129-132: LGTM!

The new Redis key methods follow the established naming pattern and are well-structured.


661-662: LGTM!

The cleanup of the new Redis keys is thorough and consistent with the existing pattern.

Member

@fm3 left a comment

Looking pretty good! First test worked for me.

Concerning the hidden error message: Do you think the frontend could do a check first, before the reserve request is even sent? (Maybe even before the upload button is hit?) In theory, the frontend should be able to determine if the selected files would be too much, right? Since it knows both limit and existing amount in the organization?

Did you also test uploading with the libs? If not, I could probably also do that.

.insert(redisKeyForCurrentUploadedTotalFileSizeInBytes(reserveUploadInfo.uploadId), "0")
))
}
_ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSizeInBytes(reserveUploadInfo.uploadId),
Member

didn’t you just insert something at this key in the line above, inside of the runOptional?

Contributor Author

@MichaelBuessemeyer commented Feb 24, 2025

Oh woopsie 🦓 🙈
I removed this line :)

Contributor Author

@MichaelBuessemeyer left a comment

In theory, the frontend should be able to determine if the selected files would be too much, right? Since it knows both limit and existing amount in the organization?

Sure, that's a good suggestion. I missed that. It should now be added to the changes of this PR.
[screenshot of the new frontend error message]

Edit: The error text adjusts in case the user is not an admin and tells them to contact an admin to increase the storage.

Did you also test uploading with the libs? If not i could probably also do that.

Now yes: also a very good suggestion :D
=> For testing I used v0.16.9 and it worked without any problems.


Contributor

@coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (1)

256-262: ⚠️ Potential issue

Fix race condition in size validation.

The current implementation has a race condition between checking the total size and updating it. Multiple concurrent uploads could each pass the check but collectively exceed the limit.

Apply this diff to implement distributed locking:

-totalFileSizeInBytesOpt <- runningUploadMetadataStore.find(redisKeyForTotalFileSizeInBytes(uploadId))
-_ <- Fox.runOptional(totalFileSizeInBytesOpt) { maxFileSize =>
-  runningUploadMetadataStore
-    .find(redisKeyForCurrentUploadedTotalFileSizeInBytes(uploadId))
-    .flatMap(alreadyUploadedAmountOfBytes =>
-      bool2Fox(alreadyUploadedAmountOfBytes.getOrElse("0").toLong + chunkSize <= maxFileSize.toLong))
-} ?~> "dataset.upload.moreBytesThanReserved"
+_ <- runningUploadMetadataStore.withLock(s"upload_size_check_$uploadId") {
+  for {
+    totalFileSizeInBytesOpt <- runningUploadMetadataStore.find(redisKeyForTotalFileSizeInBytes(uploadId))
+    _ <- Fox.runOptional(totalFileSizeInBytesOpt) { maxFileSize =>
+      for {
+        currentSize <- runningUploadMetadataStore.find(redisKeyForCurrentUploadedTotalFileSizeInBytes(uploadId))
+        newSize = currentSize.getOrElse("0").toLong + chunkSize
+        _ <- bool2Fox(newSize <= maxFileSize.toLong) ?~> "dataset.upload.moreBytesThanReserved"
+        _ <- runningUploadMetadataStore.insert(redisKeyForCurrentUploadedTotalFileSizeInBytes(uploadId), 
+               String.valueOf(newSize))
+      } yield ()
+    }
+  } yield ()
+}

Also add a method to DataStoreRedisStore:

def withLock[T](key: String)(f: => Fox[T]): Fox[T] = {
  // Implement Redis-based distributed locking
  // Consider using Redisson or other battle-tested implementations
}
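A sketch of an alternative that sidesteps explicit locking: Redis INCRBY is atomic, so the running total can be advanced and checked in a single step and rolled back on overflow. Here, incrementBy is an assumed method, not part of the store API shown in this PR:

import scala.concurrent.{ExecutionContext, Future}

trait MinimalCounterStore { // assumed interface
  def incrementBy(key: String, delta: Long): Future[Long] // returns the new value, like Redis INCRBY
}

// Atomically claims chunkSize bytes against the reserved maximum; rolls back on overflow.
def tryReserveChunk(store: MinimalCounterStore, key: String, chunkSize: Long, maxBytes: Long)(
    implicit ec: ExecutionContext): Future[Boolean] =
  store.incrementBy(key, chunkSize).flatMap { newTotal =>
    if (newTotal <= maxBytes) Future.successful(true)
    else store.incrementBy(key, -chunkSize).map(_ => false) // undo the claim and reject this chunk
  }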
🧹 Nitpick comments (1)
frontend/javascripts/admin/organization/pricing_plan_utils.ts (1)

50-52: Consider handling negative storage values.

While the current implementation is correct, it might be worth considering edge cases where usedStorageBytes could exceed includedStorageBytes, resulting in negative values.

Consider adding a safety check:

 export function getLeftOverStorageBytes(organization: APIOrganization): number {
-  return organization.includedStorageBytes - organization.usedStorageBytes;
+  const leftOver = organization.includedStorageBytes - organization.usedStorageBytes;
+  return Math.max(0, leftOver);
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6113e0a and 5a5e092.

📒 Files selected for processing (7)
  • CHANGELOG.unreleased.md (1 hunks)
  • app/controllers/WKRemoteDataStoreController.scala (1 hunks)
  • conf/messages (1 hunks)
  • frontend/javascripts/admin/dataset/dataset_upload_view.tsx (6 hunks)
  • frontend/javascripts/admin/organization/pricing_plan_utils.ts (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • conf/messages
  • CHANGELOG.unreleased.md
  • app/controllers/WKRemoteDataStoreController.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala
  • frontend/javascripts/admin/dataset/dataset_upload_view.tsx
🧰 Additional context used
🧠 Learnings (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (1)
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8385
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala:256-262
Timestamp: 2025-02-07T09:22:54.409Z
Learning: The size validation in UploadService should use distributed locking to prevent race conditions between checking the total size and updating it, as multiple concurrent uploads could collectively exceed the limit.
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: circleci_build
🔇 Additional comments (4)
frontend/javascripts/admin/organization/pricing_plan_utils.ts (1)

50-52: LGTM! The new function is well-implemented.

The function provides a clean way to calculate remaining storage bytes, which aligns well with the PR's objective of enforcing storage quotas before upload.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/uploading/UploadService.scala (3)

32-41: LGTM! The case class changes look good.

The addition of totalFileSizeInBytes as an optional Long field is well-designed, maintaining backward compatibility while enabling storage quota enforcement.


129-132: LGTM! Redis key management is consistent.

The new Redis key methods follow the established naming pattern and are properly encapsulated.


659-660: LGTM! Cleanup handling is complete.

The cleanup properly removes all Redis keys related to file size tracking.

Member

@fm3 left a comment

Looking good! What happens exactly when an uploadChunk request fails? Will the frontend retry it or abort? Should we add cleanup functionality on the server in case of abort?

Also, since the frontend changes are growing, maybe it would be a good idea to ask another frontend dev to have a look.

 unfinishedUploadsWithoutIds: List[UnfinishedUpload]): Fox[List[UnfinishedUpload]] =
   for {
-    maybeUnfinishedUploads: List[Box[UnfinishedUpload]] <- Fox.sequence(
+    maybeUnfinishedUploads: List[Box[Option[UnfinishedUpload]]] <- Fox.sequence(
Member

what is the maybe encoding here? Could the listed uploads actually be finished?

Contributor Author

While working on this PR and checking the auto clean-up upon rejection (due to reserved storage overflow / too much storage after upload and in the finish upload route), I noticed that the rejected uploads were listed in the frontend as continuable. But this is undesired in this case. This had multiple reasons:

  1. Not all redis entries for the upload were cleaned up upon rejection
  2. The postgres entry for the not-yet-finished uploaded dataset still existed

So to tackle this, I tried to solve both problems by 1. deleting all redis keys, 2. deleting the dataset from postgres, and 3. only sending unfinished uploads which are present in the redis db.

Do you think the 3rd is a little too much?

Member

No, I think that makes sense. The ones not in redis can’t be properly continued anyway if I’m not mistaken

     }
   ))
-  foundUnfinishedUploads = maybeUnfinishedUploads.flatten
+  foundUnfinishedUploads = maybeUnfinishedUploads.flatten.flatten
Member

I’d say it’s ok. The function is a little hard to read, so could you please add a comment on what it’s for, and that it does indeed filter the uploads no longer fully present in redis?

Member

@daniel-wer left a comment

Frontend code LGTM, did not test 👍

? unfinishedUploadToContinue.uploadId
: `${dayjs(Date.now()).format("YYYY-MM-DD_HH-mm")}__${newDatasetName}__${getRandomString()}`;
const filePaths = formValues.zipFile.map((file) => file.path || "");
const totalFileSizeInBytes = getFileSize(formValues.zipFile);
Member

Just to make sure, this file size measurement is not used for the quota computation, but is used by the server for something else, right? Otherwise, the unzipped file size should be measured.

Contributor Author

The file size is used to early-reject far too large uploads. I discussed this with @fm3. In case the compressed version already exceeds the leftover storage space, the upload is rejected. If it only exceeds the quota after decompression on the server side, we consider this ok. Otherwise, the frontend would need to decompress the files itself before upload, making the upload workflow a little less streamlined and maybe even taking a pretty long time, as the decompression would be done in the browser.

Member

If it only exceeds the quota after decompression on the server side, we consider this ok.

Does this mean users can still exceed their storage quota if the uncompressed data is (much) larger than the uploaded zip file?

Otherwise, the frontend would need to decompress the files itself before upload, making the upload workflow a little less streamlined and maybe even taking a pretty long time, as the decompression would be done in the browser.

I think the uncompressed size of the zip entries is available via metadata, see https://gildas-lormeau.github.io/zip.js/api/interfaces/Entry.html#uncompressedsize
If I read the code correctly we already iterate over all file entries (reader.getEntries()) and could access the uncompressed size there as well.

Feel free to ignore if this doesn't have priority. I just wanted to note :)

Contributor Author

Does this mean users can still exceed their storage quota if the uncompressed data is (much) larger than the uploaded zip file?

Yes, that's what we agreed upon 👍

I think the uncompressed size of the zip entries is available via metadata, see https://gildas-lormeau.github.io/zip.js/api/interfaces/Entry.html#uncompressedsize
If I read the code correctly we already iterate over all file entries (reader.getEntries()) and could access the uncompressed size there as well.

Feel free to ignore if this doesn't have priority. I just wanted to note :)

@fm3 Do you think I should do this in this PR? I personally would prefer to make this a follow-up issue. The PR has already been open quite a long time, and this should be easy to do as a follow-up.

Member

ok let’s go for follow-up issue

@MichaelBuessemeyer
Contributor Author

MichaelBuessemeyer commented Mar 5, 2025

TODO

  • Test whether unfinished uploads without redis entries are not sent to the client as unfinished dataset uploads

@MichaelBuessemeyer
Contributor Author

TODO

  • Test whether unfinished uploads without redis entries are not sent to the client as unfinished dataset uploads

Yep, looks good. I commented out the deletion of the db entry so it is kept, but left the removal of the redis entries. => The failed upload did not show up in the list of uploads to continue. But when I also commented out the removal of the redis entries, it showed up again :)

I triggered the "failed upload" in this scenario by having the frontend send a too-small hard-coded totalFileSizeInBytes in the reserve upload route call.

@MichaelBuessemeyer
Contributor Author

Sooo I think this can go for another iteration ♻️

Member

@fm3 left a comment

LGTM, just added two suggestions on wording

tuple(
  "resumableChunkNumber" -> number,
  "resumableChunkSize" -> number,
  "resumableCurrentChunkSize" -> number,
Contributor Author

@fm3 this should be fine to add as a param, as it is already sent by previous wklibs versions, correct @markbader?

Moreover, now that I added this, your wklibs PR should hopefully work now 👍.

Member

Yes, should be fine. The old libs do send this already.
If someone lies about this number, the chunk request may wrongly pass, but finishUpload should ultimately be rejected, which seems fine to me. :shipit:
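For illustration, a hedged sketch of that final safety net: at finishUpload time, re-measure the bytes actually written to disk and compare them against the reserved total, so a client that under-reports resumableCurrentChunkSize is still caught. The directory layout and error wording here are assumptions, not the PR's actual code:

import java.nio.file.{Files, Path}

// Sums the sizes of all regular files under the upload directory.
def measuredUploadSizeInBytes(uploadDir: Path): Long = {
  val stream = Files.walk(uploadDir)
  try stream.filter(p => Files.isRegularFile(p)).mapToLong(p => Files.size(p)).sum()
  finally stream.close()
}

def finishUploadAllowed(uploadDir: Path, reservedBytes: Option[Long]): Either[String, Unit] =
  reservedBytes match {
    case None => Right(()) // legacy clients reserved nothing, so there is nothing to enforce
    case Some(reserved) =>
      val actual = measuredUploadSizeInBytes(uploadDir)
      if (actual <= reserved) Right(())
      else Left(s"Uploaded dataset size ($actual bytes) exceeds the reserved $reserved bytes")
  }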

@MichaelBuessemeyer enabled auto-merge (squash) March 26, 2025 12:15
@MichaelBuessemeyer merged commit 0f42d6e into master Mar 26, 2025
3 checks passed
@MichaelBuessemeyer deleted the enforce-storage-quota-before-upload branch March 26, 2025 12:27