Data loading: save mem + cpu by re-using allocated fill-value chunks #8271
Conversation
📝 Walkthrough
The pull request introduces several enhancements and bug fixes for the WEBKNOSSOS application. Key changes include the addition of a tooltip displaying dataset volume, performance optimizations for data loading, and renaming of "resolution" to "magnification." The system now supports renaming datasets with duplicate names and improved skeleton tree colors. Functionality for training AI models with bounding boxes has been added, and various bugs related to dataset permissions, uploads, and synchronization have been resolved. Additionally, support for HTTP API versions 3 and 4 has been removed.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
CHANGELOG.unreleased.md (1)
15-15: Enhance the changelog entry with more details about performance improvements.
While the entry correctly documents the optimization, it would be more helpful to users if it included:
- The specific benefits (memory and CPU savings from reusing fill-value chunks)
- An explicit mention of fixing the OOM issue (#8269)
Consider expanding the entry like this:
-Optimized performance of data loading with "fill value" chunks. [#8271](https://github.com/scalableminds/webknossos/pull/8271)
+Optimized performance of data loading by reusing allocated fill-value chunks, significantly reducing memory allocation and CPU overhead. Also fixes OOM issues when viewing certain datasets. [#8271](https://github.com/scalableminds/webknossos/pull/8271), fixes [#8269](https://github.com/scalableminds/webknossos/issues/8269)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkReader.scala (1)
33-33: LGTM! Consider adding performance monitoring.
The change to use createFromFillValueCached in the regular path completes the caching optimization strategy. Consider adding metrics to monitor:
- Cache hit/miss rates
- Memory usage before/after chunk creation
- Time spent in chunk creation
This will help validate the optimization's effectiveness and tune cache parameters if needed. A rough sketch of such counters follows below.
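For illustration only, hypothetical hit/miss counters could look like this (names are made up and not part of the PR):

```scala
import java.util.concurrent.atomic.AtomicLong

// Hypothetical counters for the suggested metrics; not part of the actual change.
object FillValueChunkMetrics {
  val cacheHits = new AtomicLong(0)
  val cacheMisses = new AtomicLong(0)

  // Fraction of lookups served from the cached fill-value chunk
  def hitRate: Double = {
    val hits = cacheHits.get()
    val total = hits + cacheMisses.get()
    if (total == 0) 0.0 else hits.toDouble / total
  }
}
```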
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala (1)
33-36: Review cache capacity and key strategy for fillValueChunkCache
The cache fillValueChunkCache is initialized with a maxCapacity of 1. While this may reduce memory usage, consider whether a single-entry cache suffices for all use cases. If different chunkShape values are frequently requested, the cache might experience constant evictions, reducing its effectiveness.
Also, using chunkShape.mkString(",") as the cache key could lead to redundant entries if chunkShape arrays with identical dimensions but different instances are used. Consider normalizing or hashing the chunkShape to create a more consistent and efficient cache key.
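To illustrate the single-entry behaviour being discussed, a minimal sketch of a capacity-1 cache keyed on the chunk shape could look like the following; FillValueChunk and the class name are hypothetical stand-ins, and the actual implementation uses AlfuCache with the Fox monad:

```scala
import java.util.concurrent.atomic.AtomicReference

// Hypothetical stand-in for an allocated fill-value chunk.
final case class FillValueChunk(shape: Array[Int], data: Array[Byte])

// Minimal single-entry cache keyed on the chunk shape, illustrating the idea behind
// fillValueChunkCache (maxCapacity = 1).
final class SingleEntryFillValueCache(create: Array[Int] => FillValueChunk) {
  private val entry = new AtomicReference[Option[(String, FillValueChunk)]](None)

  def getOrCreate(chunkShape: Array[Int]): FillValueChunk = {
    val key = chunkShape.mkString(",") // same key strategy as in the PR
    entry.get() match {
      case Some((cachedKey, chunk)) if cachedKey == key =>
        chunk // cache hit: the previously allocated chunk is reused
      case _ =>
        val chunk = create(chunkShape) // allocate once
        entry.set(Some(key -> chunk))  // capacity 1: this evicts any previous entry
        chunk
    }
  }
}
```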
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (3)
- CHANGELOG.unreleased.md (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkReader.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala (4 hunks)
🔇 Additional comments (6)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkReader.scala (1)
29-29: LGTM! Optimization aligns with PR objectives.
The change to use createFromFillValueCached in the shortcut path implements the caching strategy for fill-value chunks, which should help reduce memory allocation and GC pressure.
Let's verify the caching implementation:
✅ Verification successful
Let's gather more information about the caching implementation since the previous search didn't reveal the specific cache for fill values.
Let's search for more details about the cache implementation and configuration.
Cache implementation verified and properly configured
The caching implementation is correctly set up using AlfuCache
with a capacity of 1, which is appropriate since fill-value chunks with the same shape can be reused. The cache:
- Uses chunk shape as the key (converted to string)
- Properly handles failures by removing failed entries
- Caches successful fill-value chunk creations
- Is thread-safe through the Fox monad
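The verified behaviour can be sketched roughly as follows; Future and TrieMap stand in for the Fox monad and AlfuCache here, and the real signatures in ChunkTyper.scala may differ:

```scala
import scala.collection.concurrent.TrieMap
import scala.concurrent.{ExecutionContext, Future}

// Rough sketch of the verified caching behaviour (simplified stand-in types).
final class FillValueChunkCacheSketch(createFromFillValue: Array[Int] => Future[Array[Byte]])(
    implicit ec: ExecutionContext) {

  private val cache = TrieMap.empty[String, Future[Array[Byte]]]

  def createFromFillValueCached(chunkShape: Array[Int]): Future[Array[Byte]] = {
    val key = chunkShape.mkString(",") // chunk shape as the key, converted to string
    val result = cache.getOrElseUpdate(key, createFromFillValue(chunkShape))
    // Mirror "handles failures by removing failed entries": a failed creation is
    // evicted so that the next request retries instead of caching the failure.
    result.failed.foreach(_ => cache.remove(key))
    result
  }
}
```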
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for the cache configuration and implementation
# Expected: Find cache initialization and configuration
# Look for cache configuration
rg -A 5 "AlfuCache.*fill.*value"
# Look for cache size/expiry settings
rg "cache.*size|cache.*expiry|cache.*evict"
Length of output: 4015
Script:
#!/bin/bash
# Look for the ChunkTyper class and its caching implementation
ast-grep --pattern 'class ChunkTyper {
$$$
createFromFillValueCached($$$) {
$$$
}
$$$
}'
# Search for fill value caching related code
rg -A 5 "createFromFillValueCached|fillValueCache"
Length of output: 2535
Script:
#!/bin/bash
# Look for fillValueChunkCache initialization and configuration
rg -A 10 "fillValueChunkCache"
# Look for the cache type/implementation
rg -A 5 "getOrLoad.*Fox"
Length of output: 3763
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala (5)
3-5: Imports added for caching functionality
The imports AlfuCache and Fox are introduced to implement the new caching mechanism for filled arrays. These additions are appropriate and necessary for the functionality.
14-14: Import ExecutionContext for asynchronous operations
Adding scala.concurrent.ExecutionContext is essential since the caching method createFromFillValueCached requires an implicit ExecutionContext. This import is correctly included.
37-39: Ensure thread safety of AlfuCache in concurrent environments
The method createFromFillValueCached uses fillValueChunkCache with an implicit ExecutionContext, suggesting concurrent use. Verify that AlfuCache is thread-safe and can handle concurrent access without issues such as race conditions or inconsistent states.
40-40: Access modifier changed to protected for createFromFillValue
Changing createFromFillValue to protected restricts its access to subclasses. Ensure that this method is not required by external classes. This change promotes encapsulation and adheres to good object-oriented design principles.
142-142: Override createFromFillValue in ShortcutChunkTyper with custom implementation
In ShortcutChunkTyper, the method createFromFillValue is overridden to accommodate the specific requirements of the shortcut typer, particularly setting the flatShape. This override is appropriate, and the implementation aligns with the class's intended functionality.
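A hypothetical skeleton of the resulting class structure is sketched below; the real methods in ChunkTyper.scala have different signatures and return Fox:

```scala
// Skeleton illustrating the access-modifier change and the override discussed above.
abstract class ChunkTyperSketch {
  // Public, cached entry point used by callers
  def createFromFillValueCached(chunkShape: Array[Int]): Array[Byte] =
    createFromFillValue(chunkShape) // caching elided in this sketch

  // protected: only this class hierarchy may build a fill-value chunk directly
  protected def createFromFillValue(chunkShape: Array[Int]): Array[Byte]
}

final class ShortcutChunkTyperSketch extends ChunkTyperSketch {
  // The shortcut typer overrides creation to use its own flat shape
  override protected def createFromFillValue(chunkShape: Array[Int]): Array[Byte] =
    Array.fill(chunkShape.product)(0: Byte) // flatShape-specific handling stands in here
}
```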
Thanks a lot for the effort to investigate this @fm3 🙏
Your changes lgtm and testing also worked well / as described (a failing "find my data" request).
Dataset arrays can define a fill value. When chunks don’t exist in a location, a virtual chunk filled with that fill value is created and used.
This PR introduces a single-entry cache for this fill value chunk (per array/mag), so that we don’t have to create thousands of them in datasets where a lot of chunks are missing. Most datasets have a uniform chunk shape (and even those that don’t usually only have a few deviating chunks at the border), so this single-entry cache allows re-using the same allocated fill-value chunk many times.
Before this PR, a test dataset with a flat chunk shape, a large bounding box, and very sparse existing chunks would lead to so many all-black all-identical chunks being allocated after hitting “find data” that the GC was overwhelmed. Also, the allocation and GC take time so that the route took multiple minutes. Both problems are gone now.
“find data” is still not super fast in this case, but that’s somewhat expected due to the almost 4000 sampled positions and the bad chunk shape that makes many partial array copies necessary. It’s definitely better than before and uses way less memory.
Steps to test:
Issues: