Conversation

Member

@fm3 fm3 commented Dec 10, 2024

Dataset arrays can define a fill value. When chunks don’t exist in a location, a virtual chunk filled with that fill value is created and used.

This PR introduces a single-entry cache for this fill value chunk (per array/mag), so that we don’t have to create thousands of them in datasets where a lot of chunks are missing. Most datasets have a uniform chunk shape (and even those that don’t usually only have a few deviating chunks at the border), so this single-entry cache allows re-using the same allocated fill-value chunk many times.
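The mechanism can be sketched roughly like this (a minimal, illustrative Scala sketch, not the actual WEBKNOSSOS code: the real implementation uses AlfuCache and the Fox monad, and the class and helper names below are made up for illustration):

```scala
import scala.collection.mutable

// Minimal single-entry cache: one fill-value chunk per array/mag is kept alive
// and handed out for every missing chunk of the same shape.
class FillValueChunkCache(maxCapacity: Int = 1) {
  private val cache = mutable.LinkedHashMap.empty[String, Array[Byte]]

  def getOrLoad(key: String)(create: => Array[Byte]): Array[Byte] = synchronized {
    cache.get(key) match {
      case Some(chunk) => chunk
      case None =>
        if (cache.size >= maxCapacity) cache.remove(cache.head._1) // evict oldest entry
        val chunk = create
        cache.put(key, chunk)
        chunk
    }
  }
}

var allocations = 0
def createFilled(chunkShape: Array[Int], fillValue: Byte): Array[Byte] = {
  allocations += 1
  Array.fill[Byte](chunkShape.product)(fillValue)
}

val cache = new FillValueChunkCache()
val shape = Array(32, 32, 1)
// Thousands of lookups for missing chunks reuse a single allocation:
val chunks = (1 to 1000).map(_ => cache.getOrLoad(shape.mkString(","))(createFilled(shape, 0)))
```

Deviating border chunks would produce a different key and evict the single entry, which is why the uniform chunk shape of most datasets makes a capacity of 1 sufficient.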

Before this PR, a test dataset with a flat chunk shape, a large bounding box, and very sparse existing chunks led to so many identical all-black chunks being allocated after hitting “find data” that the GC was overwhelmed. The allocation and GC also took so long that the route needed multiple minutes. Both problems are gone now.

“find data” is still not super fast in this case, but that is somewhat expected given the almost 4000 sampled positions and the unfavorable chunk shape, which makes many partial array copies necessary. It is definitely better than before and uses far less memory.

Steps to test:

Issues:


@fm3 fm3 self-assigned this Dec 10, 2024
Contributor

coderabbitai bot commented Dec 10, 2024

📝 Walkthrough

The pull request introduces several enhancements and bug fixes for the WEBKNOSSOS application. Key changes include the addition of a tooltip displaying dataset volume, performance optimizations for data loading, and renaming of "resolution" to "magnification." The system now supports renaming datasets with duplicate names and improved skeleton tree colors. Functionality for training AI models with bounding boxes has been added, and various bugs related to dataset permissions, uploads, and synchronization have been resolved. Additionally, support for HTTP API versions 3 and 4 has been removed.

Changes

| File | Change Summary |
| --- | --- |
| CHANGELOG.unreleased.md | Updated to reflect new features, performance optimizations, bug fixes, and API version removal. |
| webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkReader.scala | Modified `read` method to use cached fill value creation methods, maintaining error handling. |
| webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala | Introduced caching mechanism for filled arrays, updated method visibility, and added concurrency support. |

Assessment against linked issues

| Objective | Addressed | Explanation |
| --- | --- | --- |
| The backend should not go out of memory (#8269) | | |

Possibly related PRs

Suggested labels

bug, frontend, automerge

Poem

🐇 In the land of data, where numbers play,
New features hop in, brightening the day.
Tooltips now sparkle, showing volumes vast,
With optimizations, our performance is fast!
Bugs are now bunnies, all fixed with care,
Hooray for the changes, let’s dance in the air! 🎉


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between aa223df and 2d70528.

📒 Files selected for processing (1)
  • CHANGELOG.unreleased.md (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • CHANGELOG.unreleased.md


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (3)
CHANGELOG.unreleased.md (1)

15-15: Enhance the changelog entry with more details about performance improvements.

While the entry correctly documents the optimization, it would be more helpful to users if it included:

  1. The specific benefits (memory and CPU savings from reusing fill-value chunks)
  2. An explicit mention of fixing the OOM issue (#8269)

Consider expanding the entry like this:

```diff
-Optimized performance of data loading with "fill value" chunks. [#8271](https://github.com/scalableminds/webknossos/pull/8271)
+Optimized performance of data loading by reusing allocated fill-value chunks, significantly reducing memory allocation and CPU overhead. Also fixes OOM issues when viewing certain datasets. [#8271](https://github.com/scalableminds/webknossos/pull/8271), fixes [#8269](https://github.com/scalableminds/webknossos/issues/8269)
```
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkReader.scala (1)

33-33: LGTM! Consider adding performance monitoring.

The change to use createFromFillValueCached in the regular path completes the caching optimization strategy.

Consider adding metrics to monitor:

  • Cache hit/miss rates
  • Memory usage before/after chunk creation
  • Time spent in chunk creation

This will help validate the optimization's effectiveness and tune cache parameters if needed.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala (1)

33-36: Review cache capacity and key strategy for fillValueChunkCache

The cache fillValueChunkCache is initialized with a maxCapacity of 1. While this may reduce memory usage, consider whether a single-entry cache suffices for all use cases. If different chunkShape values are frequently requested, the cache might experience constant evictions, reducing its effectiveness.

Also, using chunkShape.mkString(",") as the cache key could lead to redundant entries if chunkShape arrays with identical dimensions but different instances are used. Consider normalizing or hashing the chunkShape to create a more consistent and efficient cache key.
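As a quick illustration of the key question (a hypothetical sketch, not the PR's code): Scala arrays compare by reference, so a raw `Array[Int]` would be a poor structural key, while a derived key such as `chunkShape.mkString(",")` or `chunkShape.toSeq` compares by content and maps identical dimensions, even from different array instances, to the same cache entry:

```scala
val shapeA = Array(32, 32, 1)
val shapeB = Array(32, 32, 1) // same dimensions, different instance

// Arrays use reference equality, so as raw keys these would never collide:
val rawKeysDiffer = shapeA != shapeB

// Derived keys normalize identical shapes to the same entry:
val stringKeysMatch = shapeA.mkString(",") == shapeB.mkString(",")
val seqKeysMatch = shapeA.toSeq == shapeB.toSeq
```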

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between f909097 and aa223df.

📒 Files selected for processing (3)
  • CHANGELOG.unreleased.md (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkReader.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala (4 hunks)
🔇 Additional comments (6)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkReader.scala (1)

29-29: LGTM! Optimization aligns with PR objectives.

The change to use createFromFillValueCached in the shortcut path implements the caching strategy for fill-value chunks, which should help reduce memory allocation and GC pressure.

Let's verify the caching implementation:

✅ Verification successful

Let's gather more information about the caching implementation since the previous search didn't reveal the specific cache for fill values.


Let's search for more details about the cache implementation and configuration.


Cache implementation verified and properly configured

The caching implementation is correctly set up using AlfuCache with a capacity of 1, which is appropriate since fill-value chunks with the same shape can be reused. The cache:

  • Uses chunk shape as the key (converted to string)
  • Properly handles failures by removing failed entries
  • Caches successful fill-value chunk creations
  • Is thread-safe through the Fox monad
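The failure handling in the list above can be sketched as follows (illustrative only; the actual code uses AlfuCache with the Fox monad, which this sketch approximates with `Try` and a `TrieMap`):

```scala
import scala.collection.concurrent.TrieMap
import scala.util.{Failure, Success, Try}

// Caches only successful loads; a failed load is not retained, so the next
// lookup retries instead of serving a cached failure.
class TryCache[K, V] {
  private val store = TrieMap.empty[K, V]

  def getOrLoad(key: K)(load: K => Try[V]): Try[V] =
    store.get(key) match {
      case Some(v) => Success(v)
      case None =>
        load(key) match {
          case s @ Success(v) => store.putIfAbsent(key, v); s
          case f @ Failure(_) => store.remove(key); f // make sure no failed entry lingers
        }
    }
}

val cache = new TryCache[String, Int]
var attempts = 0
def flaky(key: String): Try[Int] = {
  attempts += 1
  if (attempts == 1) Failure(new RuntimeException("first load fails"))
  else Success(42)
}

val first = cache.getOrLoad("k")(flaky)  // fails, nothing is cached
val second = cache.getOrLoad("k")(flaky) // retries the load and succeeds
val third = cache.getOrLoad("k")(flaky)  // served from cache, no new attempt
```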
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Description: Check for the cache configuration and implementation
# Expected: Find cache initialization and configuration

# Look for cache configuration
rg -A 5 "AlfuCache.*fill.*value"

# Look for cache size/expiry settings
rg "cache.*size|cache.*expiry|cache.*evict"
```

Length of output: 4015


Script:

```shell
#!/bin/bash
# Look for the ChunkTyper class and its caching implementation
ast-grep --pattern 'class ChunkTyper {
  $$$
  createFromFillValueCached($$$) {
    $$$
  }
  $$$
}'

# Search for fill value caching related code
rg -A 5 "createFromFillValueCached|fillValueCache"
```

Length of output: 2535


Script:

```shell
#!/bin/bash
# Look for fillValueChunkCache initialization and configuration
rg -A 10 "fillValueChunkCache"

# Look for the cache type/implementation
rg -A 5 "getOrLoad.*Fox"
```

Length of output: 3763

webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala (5)

3-5: Imports added for caching functionality

The imports AlfuCache and Fox are introduced to implement the new caching mechanism for filled arrays. These additions are appropriate and necessary for the functionality.


14-14: Import ExecutionContext for asynchronous operations

Adding scala.concurrent.ExecutionContext is essential since the caching method createFromFillValueCached requires an implicit ExecutionContext. This import is correctly included.


37-39: Ensure thread safety of AlfuCache in concurrent environments

The method createFromFillValueCached uses fillValueChunkCache with an implicit ExecutionContext, suggesting concurrent use. Verify that AlfuCache is thread-safe and can handle concurrent access without issues such as race conditions or inconsistent states.


40-40: Access modifier changed to protected for createFromFillValue

Changing createFromFillValue to protected restricts its access to subclasses. Ensure that this method is not required by external classes. This change promotes encapsulation and adheres to good object-oriented design principles.


142-142: Override createFromFillValue in ShortcutChunkTyper with custom implementation

In ShortcutChunkTyper, the method createFromFillValue is overridden to accommodate the specific requirements of the shortcut typer, particularly setting the flatShape. This override is appropriate, and the implementation aligns with the class's intended functionality.

Contributor

@MichaelBuessemeyer MichaelBuessemeyer left a comment

Thanks a lot for the effort to investigate this @fm3 🙏

Your changes LGTM, and testing also worked well / as described (a failing “find data” request)

@fm3 fm3 merged commit c18c5e3 into master Dec 11, 2024
3 checks passed
@fm3 fm3 deleted the perf-fill-value branch December 11, 2024 09:18

Successfully merging this pull request may close these issues.

Backend goes OOM when viewing special datasets