
Change SupervisedTrainer decollation default. #8542


Open
id-b3 wants to merge 2 commits into dev from supervised-trainer-decollation-default-change

Conversation


@id-b3 commented Aug 19, 2025

Fixes #8541.

Description

The SupervisedTrainer has decollation enabled by default. The decollation step moves tensors to the CPU for processing, which significantly slows each training step, as described in issue #8541.

This pull request changes the default of decollate to False and expands the documentation to explain how the decollation option affects training speed.
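For downstream code that relies on the previous behaviour, the old semantics can be kept by passing the flag explicitly at construction time. The snippet below is a minimal, hypothetical sketch (toy network, optimizer, and data; not taken from the MONAI code base) showing where decollate is set:

```python
# Hypothetical toy setup; only the decollate argument is the point of the example.
import torch
from torch.utils.data import DataLoader

from monai.engines import SupervisedTrainer

net = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
loss = torch.nn.CrossEntropyLoss()

# Dict-style batches with "image"/"label" keys, as expected by the default prepare_batch.
data = [{"image": torch.randn(4), "label": torch.tensor(0)} for _ in range(8)]
loader = DataLoader(data, batch_size=2)

trainer = SupervisedTrainer(
    device=torch.device("cpu"),
    max_epochs=1,
    train_data_loader=loader,
    network=net,
    optimizer=opt,
    loss_function=loss,
    decollate=True,  # opt back in to the pre-change behaviour; omit it to get the new, faster default
)
trainer.run()
```

Users who only need batch-level postprocessing (or none at all) get the faster path with no code change.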

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.


coderabbitai bot commented Aug 19, 2025

Walkthrough

  • monai/engines/trainer.py: Changed the SupervisedTrainer __init__ default for decollate from True to False and updated the docstring to explain the new default and its rationale (a quick way to check the installed default is sketched after this list).
  • tests/apps/deepgrow/transforms/test_deepgrow_interaction.py: Updated SupervisedTrainer initialization to pass decollate=True.
  • tests/integration/test_deepedit_interaction.py: Updated SupervisedTrainer initialization to pass decollate=True.
  • tests/testing_data/config_fl_train.json: Added "decollate": true under train.trainer.
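A quick way to confirm which default an installed MONAI version ships is to read it off the constructor signature; the sketch below assumes only the standard library and an importable monai:

```python
# Report the decollate default of the installed SupervisedTrainer.
import inspect

from monai.engines import SupervisedTrainer

default = inspect.signature(SupervisedTrainer.__init__).parameters["decollate"].default
print(f"SupervisedTrainer decollate default: {default}")  # False with this PR, True before it
```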

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Assessment against linked issues

| Objective | Addressed | Explanation |
| --- | --- | --- |
| Change SupervisedTrainer default decollate to False or warn about slowdown (#8541) | | |

Assessment against linked issues: Out-of-scope changes

None found.



@id-b3 force-pushed the supervised-trainer-decollation-default-change branch from 4afd580 to acd8ceb on August 19, 2025 at 11:55.
@id-b3 marked this pull request as ready for review on August 19, 2025 at 17:09.

@coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
monai/engines/trainer.py (1)

121-124: Docstring tweak: tighten rationale and guidance.

Small wording improvement; it also nudges users toward enabling decollation only when they actually need it.

-            default to `False` as training slows due to tensor movement to CPU for decollation when enabled.
+            default is `False` to avoid the CPU transfer overhead introduced by decollation; enable only when your
+            postprocessing requires list-based transforms from `monai.transforms`.
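To make the suggested guidance concrete, a hypothetical per-item postprocessing pipeline built from dictionary transforms is the case that still calls for decollate=True; the transform names are standard monai.transforms, but the pipeline itself is only an illustration:

```python
# Illustrative per-item postprocessing that requires decollation to be enabled.
from monai.transforms import Activationsd, AsDiscreted, Compose

postprocessing = Compose(
    [
        Activationsd(keys="pred", softmax=True),  # softmax over the class channel of each item
        AsDiscreted(keys="pred", argmax=True),    # turn probabilities into a discrete label map
    ]
)
# Then, for example: SupervisedTrainer(..., postprocessing=postprocessing, decollate=True)
```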
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Knowledge Base: Disabled due to Reviews > Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between cafc1fe and acd8ceb.

📒 Files selected for processing (4)
  • monai/engines/trainer.py (2 hunks)
  • tests/apps/deepgrow/transforms/test_deepgrow_interaction.py (1 hunks)
  • tests/integration/test_deepedit_interaction.py (1 hunks)
  • tests/testing_data/config_fl_train.json (1 hunks)
🔇 Additional comments (4)
tests/testing_data/config_fl_train.json (1)

122-124: Config: explicitly enabling decollation is correct for this test bundle.

The added "decollate": true pairs with monai.engines.SupervisedTrainer under train.trainer and matches the new default change. No JSON syntax issues.

monai/engines/trainer.py (1)

157-157: Audit complete – no missing decollate= needed
No Python call sites pass a postprocessing= without also specifying decollate=, and the only config referencing SupervisedTrainer doesn’t use postprocessing. Defaulting decollate=False is safe as-is.

tests/integration/test_deepedit_interaction.py (1)

106-106: LGTM: enabling decollation here is necessary.

Post transforms (Activationsd, AsDiscreted, SplitPredsLabeld, ToTensord) operate per-item; explicitly setting decollate=True preserves existing semantics under the new default.
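For reference, these item-wise transforms need the flag because decollation converts one batched dict into a list of per-item dicts; a small sketch of that conversion (shapes are illustrative only):

```python
# decollate_batch splits a batched dict into per-item dicts,
# the input format that item-wise dictionary transforms expect.
import torch

from monai.data import decollate_batch

batch = {"pred": torch.rand(2, 3, 4, 4), "label": torch.randint(0, 3, (2, 1, 4, 4))}
items = decollate_batch(batch)             # list of 2 dicts, one per sample
print(len(items), items[0]["pred"].shape)  # 2 torch.Size([3, 4, 4])
```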

tests/apps/deepgrow/transforms/test_deepgrow_interaction.py (1)

81-81: LGTM: explicit decollate=True keeps interaction transforms correct.

Iteration transforms include ToNumpyd/ToTensord and discrepancy ops that expect item-wise dicts; this explicit flag is appropriate with the default flipped.

Development

Successfully merging this pull request may close these issues.

Default for SupervisedTrainer slows down training almost 2x