8627 perceptual loss errors out after hitting the maximum number of downloads #8652
Conversation
Merge branch '8627-perceptual-loss-errors-out-after-hitting-the-maximum-number-of-downloads' of github.com:virginiafdez/MONAI into 8627-perceptual-loss-errors-out-after-hitting-the-maximum-number-of-downloads
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
monai/losses/perceptual.py (1)
290-293: Indexing bug in channel-wise slicing.
`l_idx : i + r_idx` should be `l_idx : r_idx`. The current code produces incorrect slice ranges for `i > 0`.

    for i in range(input.shape[1]):
        l_idx = i * feats_per_ch
        r_idx = (i + 1) * feats_per_ch
    -   results[:, i, ...] = feats_diff[:, l_idx : i + r_idx, ...].sum(dim=1)
    +   results[:, i, ...] = feats_diff[:, l_idx:r_idx, ...].sum(dim=1)
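The effect of the stray `i +` can be seen in a plain-Python sketch (hypothetical sizes, a flat list standing in for the feature axis):

```python
# 3 channels, 2 features per channel -> flat feature indices 0..5.
feats_per_ch = 2
num_channels = 3
flat = list(range(feats_per_ch * num_channels))  # [0, 1, 2, 3, 4, 5]

correct_slices = []
buggy_slices = []
for i in range(num_channels):
    l_idx = i * feats_per_ch
    r_idx = (i + 1) * feats_per_ch
    correct_slices.append(flat[l_idx:r_idx])      # channel i's own features
    buggy_slices.append(flat[l_idx : i + r_idx])  # stray `i +` drifts the right edge

print(correct_slices)  # [[0, 1], [2, 3], [4, 5]]
print(buggy_slices)    # [[0, 1], [2, 3, 4], [4, 5]]
```

The buggy slice for channel 1 bleeds one feature from channel 2 into its sum; later channels only look correct here because the slice runs off the end of the list.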
🧹 Nitpick comments (3)
monai/losses/perceptual.py (3)
105-106: Add `stacklevel=2` to warning. Ensures the warning points to the caller's code, not this internal line.

    if not channel_wise:
    -    warnings.warn("MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.")
    +    warnings.warn("MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.", stacklevel=2)
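A minimal sketch of what `stacklevel` changes (helper names are illustrative): with the default `stacklevel=1` the recorded warning is attributed to the line inside the helper, while `stacklevel=2` attributes it to the call site.

```python
import warnings

def warn_default():
    warnings.warn("attributed to this line inside the helper")  # stacklevel=1

def warn_caller():
    warnings.warn("attributed to the caller", stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warn_default()   # caught[0].lineno -> the warn() line inside warn_default
    warn_caller()    # caught[1].lineno -> this call line

# The second warning points lower in the file, at the call site,
# which is what linters like Ruff's B028 are asking for.
print(caught[0].lineno < caught[1].lineno)
```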
219-225: Document `cache_dir` parameter in docstring. The `cache_dir` parameter (line 232) is missing from the Args section.

        channel_wise: if True, the loss is returned per channel. Otherwise the loss is averaged over the channels.
            Defaults to ``False``.
    +   cache_dir: path to cache directory to save the pretrained network weights.
        """
324-328: Document `cache_dir` parameter in docstring. The `cache_dir` parameter is missing from the Args section.

        verbose: if false, mute messages from torch Hub load function.
    +   cache_dir: path to cache directory to save the pretrained network weights.
        """
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
monai/losses/perceptual.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit configuration file
Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.
Files:
monai/losses/perceptual.py
🧬 Code graph analysis (1)
monai/losses/perceptual.py (2)
monai/utils/enums.py (1)
StrEnum (68-90)
monai/utils/module.py (1)
optional_import (315-445)
🪛 Ruff (0.14.7)
monai/losses/perceptual.py
102-104: Avoid specifying long messages outside the exception class
(TRY003)
106-106: No explicit stacklevel keyword argument found
Set stacklevel=2
(B028)
235-235: Unused lambda argument: a
(ARG005)
235-235: Unused lambda argument: b
(ARG005)
235-235: Unused lambda argument: c
(ARG005)
237-239: Avoid specifying long messages outside the exception class
(TRY003)
333-335: Avoid specifying long messages outside the exception class
(TRY003)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: min-dep-pytorch (2.8.0)
- GitHub Check: min-dep-py3 (3.11)
- GitHub Check: min-dep-pytorch (2.6.0)
- GitHub Check: min-dep-pytorch (2.7.1)
- GitHub Check: min-dep-os (ubuntu-latest)
- GitHub Check: min-dep-py3 (3.12)
- GitHub Check: min-dep-py3 (3.10)
- GitHub Check: min-dep-pytorch (2.5.1)
- GitHub Check: min-dep-py3 (3.9)
- GitHub Check: min-dep-os (macOS-latest)
- GitHub Check: min-dep-os (windows-latest)
- GitHub Check: build-docs
- GitHub Check: packaging
- GitHub Check: flake8-py3 (codeformat)
- GitHub Check: flake8-py3 (mypy)
- GitHub Check: quick-py3 (ubuntu-latest)
- GitHub Check: flake8-py3 (pytype)
- GitHub Check: quick-py3 (macOS-latest)
- GitHub Check: quick-py3 (windows-latest)
🔇 Additional comments (1)
monai/losses/perceptual.py (1)
23-28: LGTM! Immutable tuple for allowed model names addresses the security concern from the prior review.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: Eric Kerfoot <17726042+ericspod@users.noreply.github.com>
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
monai/losses/perceptual.py (2)
112-116: Incorrect error message. The message references "Adversarial Loss" but this is `PerceptualLoss`.

    if network_type.lower() not in list(PercetualNetworkType):
        raise ValueError(
    -       "Unrecognised criterion entered for Adversarial Loss. Must be one in: %s"
    +       "Unrecognised criterion entered for Perceptual Loss. Must be one in: %s"
            % ", ".join(PercetualNetworkType)
        )
291-293: Off-by-one slicing bug. The slice `l_idx : i + r_idx` should be `l_idx : r_idx`. The `i +` causes incorrect indices for channels after the first.

    l_idx = i * feats_per_ch
    r_idx = (i + 1) * feats_per_ch
    - results[:, i, ...] = feats_diff[:, l_idx : i + r_idx, ...].sum(dim=1)
    + results[:, i, ...] = feats_diff[:, l_idx:r_idx, ...].sum(dim=1)
♻️ Duplicate comments (1)
monai/losses/perceptual.py (1)
133-134: Missing `cache_dir` propagation. `RadImageNetPerceptualSimilarity` accepts `cache_dir` but it's not passed here.

    elif "radimagenet_" in network_type:
    -    self.perceptual_function = RadImageNetPerceptualSimilarity(net=network_type, verbose=False)
    +    self.perceptual_function = RadImageNetPerceptualSimilarity(net=network_type, verbose=False, cache_dir=cache_dir)
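The wiring bug this comment flags can be sketched abstractly (class and parameter names here are illustrative, not the MONAI API): a wrapper accepts a keyword but forgets to forward it to one of its backends, so the backend silently falls back to its default.

```python
# Hypothetical backend accepting an optional cache directory.
class Backend:
    def __init__(self, net, verbose=False, cache_dir=None):
        self.cache_dir = cache_dir  # None means "use the default hub cache dir"

def build(network_type, cache_dir=None, forward_cache_dir=True):
    if forward_cache_dir:
        return Backend(net=network_type, verbose=False, cache_dir=cache_dir)
    # the bug: the caller's cache_dir is silently dropped
    return Backend(net=network_type, verbose=False)

ok = build("radimagenet_resnet50", cache_dir="/tmp/hub")
bad = build("radimagenet_resnet50", cache_dir="/tmp/hub", forward_cache_dir=False)
print(ok.cache_dir, bad.cache_dir)  # /tmp/hub None
```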
🧹 Nitpick comments (4)
monai/losses/perceptual.py (4)
105-106: Add `stacklevel=2` to warning. Without an explicit stacklevel, the warning will point to this line instead of the caller.

    if not channel_wise:
    -    warnings.warn("MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.")
    +    warnings.warn("MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.", stacklevel=2)
120-123: Add `stacklevel=2` to warning.

    warnings.warn(
    -    f"Setting cache_dir to {cache_dir}, this may change the default cache dir for all torch.hub calls."
    +    f"Setting cache_dir to {cache_dir}, this may change the default cache dir for all torch.hub calls.",
    +    stacklevel=2,
    )
219-225: Missing `cache_dir` in docstring. Per the coding guidelines, all parameters should be documented.
Add to Args section:
cache_dir: path to cache directory to save the pretrained network weights.
324-328: Missing `cache_dir` in docstring. Add to the Args section:
cache_dir: path to cache directory to save the pretrained network weights.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
monai/losses/perceptual.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit configuration file
Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.
Files:
monai/losses/perceptual.py
🪛 Ruff (0.14.7)
monai/losses/perceptual.py
102-104: Avoid specifying long messages outside the exception class
(TRY003)
106-106: No explicit stacklevel keyword argument found
Set stacklevel=2
(B028)
235-235: Unused lambda argument: a
(ARG005)
235-235: Unused lambda argument: b
(ARG005)
235-235: Unused lambda argument: c
(ARG005)
237-239: Avoid specifying long messages outside the exception class
(TRY003)
333-335: Avoid specifying long messages outside the exception class
(TRY003)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: quick-py3 (ubuntu-latest)
- GitHub Check: flake8-py3 (codeformat)
- GitHub Check: packaging
- GitHub Check: quick-py3 (macOS-latest)
- GitHub Check: quick-py3 (windows-latest)
- GitHub Check: flake8-py3 (mypy)
- GitHub Check: build-docs
- GitHub Check: flake8-py3 (pytype)
- GitHub Check: min-dep-pytorch (2.7.1)
- GitHub Check: min-dep-py3 (3.12)
- GitHub Check: min-dep-pytorch (2.5.1)
- GitHub Check: min-dep-pytorch (2.6.0)
- GitHub Check: min-dep-pytorch (2.8.0)
- GitHub Check: min-dep-os (macOS-latest)
- GitHub Check: min-dep-os (windows-latest)
- GitHub Check: min-dep-py3 (3.10)
- GitHub Check: min-dep-os (ubuntu-latest)
- GitHub Check: min-dep-py3 (3.9)
- GitHub Check: min-dep-py3 (3.11)
🔇 Additional comments (4)
monai/losses/perceptual.py (4)
23-28: LGTM! Valid model names centralized as a constant.
34-44: LGTM! New network types properly added to the enum.
235-244: LGTM! Model validation and hub loading properly configured with `trust_repo=True` and an explicit branch. The lambda disabling fork validation is an intentional workaround.
330-337: LGTM! Consistent with `MedicalNetPerceptualSimilarity`: validates model names, uses the `:main` branch, and includes `trust_repo=True`.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: Virginia Fernandez <61539159+virginiafdez@users.noreply.github.com>
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
monai/losses/perceptual.py (1)
118-123: Add stacklevel to cache_dir warning.

    warnings.warn(
    -    f"Setting cache_dir to {cache_dir}, this may change the default cache dir for all torch.hub calls."
    +    f"Setting cache_dir to {cache_dir}, this may change the default cache dir for all torch.hub calls.",
    +    stacklevel=2,
    )
🧹 Nitpick comments (1)
monai/losses/perceptual.py (1)
23-28: Consider frozenset for O(1) membership checks. While the tuple works fine for 3 items, a frozenset would provide O(1) lookups and better communicate the intent of membership validation.

    -HF_MONAI_MODELS = (
    +HF_MONAI_MODELS = frozenset((
         "medicalnet_resnet10_23datasets",
         "medicalnet_resnet50_23datasets",
         "radimagenet_resnet50",
    -)
    +))
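A quick sketch of what the suggested change buys: `frozenset` supports the same `in` membership test as a tuple (in O(1) average time) and, unlike a plain `set`, rejects later mutation, which suits an allow-list of trusted model names.

```python
HF_MONAI_MODELS = frozenset(
    ("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets", "radimagenet_resnet50")
)

# Membership checks work exactly as with the tuple:
assert "radimagenet_resnet50" in HF_MONAI_MODELS
assert "some_untrusted_model" not in HF_MONAI_MODELS

# Unlike set, frozenset has no add()/remove(), so the allow-list cannot grow:
blocked = False
try:
    HF_MONAI_MODELS.add("some_untrusted_model")  # AttributeError
except AttributeError:
    blocked = True
print(blocked)  # True
```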
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
monai/losses/perceptual.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit configuration file
Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.
Files:
monai/losses/perceptual.py
🧬 Code graph analysis (1)
monai/losses/perceptual.py (2)
monai/utils/enums.py (1)
StrEnum (68-90)
monai/utils/module.py (1)
optional_import (315-445)
🪛 Ruff (0.14.8)
monai/losses/perceptual.py
102-104: Avoid specifying long messages outside the exception class
(TRY003)
106-106: No explicit stacklevel keyword argument found
Set stacklevel=2
(B028)
235-235: Unused lambda argument: a
(ARG005)
235-235: Unused lambda argument: b
(ARG005)
235-235: Unused lambda argument: c
(ARG005)
237-239: Avoid specifying long messages outside the exception class
(TRY003)
333-335: Avoid specifying long messages outside the exception class
(TRY003)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: min-dep-py3 (3.10)
- GitHub Check: min-dep-os (macOS-latest)
- GitHub Check: min-dep-pytorch (2.6.0)
- GitHub Check: min-dep-pytorch (2.7.1)
- GitHub Check: min-dep-os (ubuntu-latest)
- GitHub Check: min-dep-os (windows-latest)
- GitHub Check: min-dep-pytorch (2.5.1)
- GitHub Check: min-dep-pytorch (2.8.0)
- GitHub Check: min-dep-py3 (3.11)
- GitHub Check: flake8-py3 (mypy)
- GitHub Check: min-dep-py3 (3.9)
- GitHub Check: min-dep-py3 (3.12)
- GitHub Check: quick-py3 (windows-latest)
- GitHub Check: flake8-py3 (pytype)
- GitHub Check: flake8-py3 (codeformat)
- GitHub Check: build-docs
- GitHub Check: quick-py3 (ubuntu-latest)
- GitHub Check: packaging
- GitHub Check: quick-py3 (macOS-latest)
🔇 Additional comments (4)
monai/losses/perceptual.py (4)
236-239: Validation against HF_MONAI_MODELS is correct. Good explicit validation with a helpful error message.
241-243: Model loading looks correct. `trust_repo=True` is appropriate for the hardcoded Project-MONAI repository, and `cache_dir` is properly propagated.
332-337: Validation and loading are correct. Good validation against `HF_MONAI_MODELS` and proper `cache_dir` propagation. Note that this class doesn't disable fork validation (unlike `MedicalNetPerceptualSimilarity` line 235), which is the preferred approach.
143-144: No action required. The LPIPS library does not expose a `cache_dir` parameter in its constructor. The original lpips package automatically caches pretrained model files under a default directory (typically `~/.lpips`). The instantiation at line 144 is correct as written and is consistent with the LPIPS library's design. Likely an incorrect or invalid review comment.
I, Virginia Fernandez <virginia.fernandez@kcl.ac.uk>, hereby add my Signed-off-by to this commit: b1e4a50
I, Virginia Fernandez <virginia.fernandez@kcl.ac.uk>, hereby add my Signed-off-by to this commit: fa0639b
I, Virginia Fernandez <virginia.fernandez@kcl.ac.uk>, hereby add my Signed-off-by to this commit: 5594bfe
I, Virginia Fernandez <virginia.fernandez@kcl.ac.uk>, hereby add my Signed-off-by to this commit: c99e16e
I, Virginia Fernandez <virginia.fernandez@kcl.ac.uk>, hereby add my Signed-off-by to this commit: e3be8de
Signed-off-by: Virginia Fernandez <virginia.fernandez@kcl.ac.uk>
Actionable comments posted: 0
♻️ Duplicate comments (3)
monai/losses/perceptual.py (3)
235-235: Lambda monkey-patch bypasses security validation. Disabling `_validate_not_a_forked_repo` modifies global state and circumvents torch.hub's fork validation. Consider documenting the rationale or using the `trust_repo` parameter instead.
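The global-state concern can be illustrated with a stand-in namespace (hypothetical; this does not import or patch the real torch.hub): replacing a module-level validator with a no-op lambda changes behavior for every later caller in the process, not just the one constructing the loss.

```python
import types

# Stand-in "hub" module with a validator that normally performs a network check.
hub = types.SimpleNamespace()

def _validate_not_a_forked_repo(repo_owner, repo_name, ref):
    raise RuntimeError("GitHub API rate limit hit while validating repo")

hub._validate_not_a_forked_repo = _validate_not_a_forked_repo

# The workaround pattern the review describes: disable validation process-wide.
hub._validate_not_a_forked_repo = lambda a, b, c: None

# Every subsequent caller now skips validation (and its rate-limited API call):
result = hub._validate_not_a_forked_repo("Project-MONAI", "perceptual-models", "main")
print(result)  # None
```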
35-35: Class name typo still present. A past review marked this as addressed, but `PercetualNetworkType` should be `PerceptualNetworkType`. This is a public API class; the fix requires updating all references.

    -class PercetualNetworkType(StrEnum):
    +class PerceptualNetworkType(StrEnum):
106-106: Add stacklevel to warning.

    - warnings.warn("MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.")
    + warnings.warn("MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.", stacklevel=2)
🧹 Nitpick comments (1)
monai/losses/perceptual.py (1)
121-123: Add stacklevel to warning for consistency.

    warnings.warn(
    -    f"Setting cache_dir to {cache_dir}, this may change the default cache dir for all torch.hub calls."
    +    f"Setting cache_dir to {cache_dir}, this may change the default cache dir for all torch.hub calls.",
    +    stacklevel=2,
    )
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
monai/losses/perceptual.py (8 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit configuration file
Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.
Files:
monai/losses/perceptual.py
🧬 Code graph analysis (1)
monai/losses/perceptual.py (2)
monai/utils/enums.py (1)
StrEnum (68-90)
monai/utils/module.py (1)
optional_import (315-445)
🪛 Ruff (0.14.8)
monai/losses/perceptual.py
102-104: Avoid specifying long messages outside the exception class
(TRY003)
106-106: No explicit stacklevel keyword argument found
Set stacklevel=2
(B028)
235-235: Unused lambda argument: a
(ARG005)
235-235: Unused lambda argument: b
(ARG005)
235-235: Unused lambda argument: c
(ARG005)
237-239: Avoid specifying long messages outside the exception class
(TRY003)
333-335: Avoid specifying long messages outside the exception class
(TRY003)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: flake8-py3 (codeformat)
- GitHub Check: flake8-py3 (pytype)
- GitHub Check: quick-py3 (windows-latest)
- GitHub Check: flake8-py3 (mypy)
- GitHub Check: quick-py3 (macOS-latest)
- GitHub Check: quick-py3 (ubuntu-latest)
- GitHub Check: packaging
- GitHub Check: build-docs
- GitHub Check: min-dep-py3 (3.9)
- GitHub Check: min-dep-pytorch (2.7.1)
- GitHub Check: min-dep-pytorch (2.8.0)
- GitHub Check: min-dep-pytorch (2.6.0)
- GitHub Check: min-dep-pytorch (2.5.1)
- GitHub Check: min-dep-py3 (3.11)
- GitHub Check: min-dep-py3 (3.12)
- GitHub Check: min-dep-os (windows-latest)
- GitHub Check: min-dep-py3 (3.10)
- GitHub Check: min-dep-os (ubuntu-latest)
- GitHub Check: min-dep-os (macOS-latest)
🔇 Additional comments (6)
monai/losses/perceptual.py (6)
23-28: LGTM. Model name whitelist is correctly defined and serves the security validation purpose mentioned in past reviews.
128-144: LGTM. `cache_dir` correctly propagated to the MedicalNet (line 131) and RadImageNet (line 134) constructors.
236-239: LGTM. Model name validation against the HF_MONAI_MODELS whitelist is correctly implemented with a clear error message.
241-244: LGTM. `torch.hub.load` correctly specifies the `:main` branch, includes `trust_repo=True`, and propagates cache_dir.
286-293: LGTM. Channel-wise slice indexing correctly uses `l_idx:r_idx` to extract features per input channel.
330-337: LGTM. The cache_dir parameter is properly propagated (lines 330, 336). Model name validation (lines 332-335) and the `torch.hub.load` call (lines 336-337) are correctly implemented with the `:main` branch and `trust_repo=True`.
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
monai/losses/perceptual.py (9 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit configuration file
Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.
Files:
monai/losses/perceptual.py
🧬 Code graph analysis (1)
monai/losses/perceptual.py (2)
monai/utils/enums.py (1)
StrEnum(68-90)monai/utils/module.py (1)
optional_import(315-445)
🪛 Ruff (0.14.8)
monai/losses/perceptual.py
102-104: Avoid specifying long messages outside the exception class
(TRY003)
110-110: Avoid specifying long messages outside the exception class
(TRY003)
236-238: Avoid specifying long messages outside the exception class
(TRY003)
332-334: Avoid specifying long messages outside the exception class
(TRY003)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: flake8-py3 (mypy)
- GitHub Check: flake8-py3 (codeformat)
- GitHub Check: quick-py3 (macOS-latest)
- GitHub Check: flake8-py3 (pytype)
- GitHub Check: quick-py3 (ubuntu-latest)
- GitHub Check: quick-py3 (windows-latest)
- GitHub Check: build-docs
- GitHub Check: packaging
- GitHub Check: min-dep-os (macOS-latest)
- GitHub Check: min-dep-py3 (3.10)
- GitHub Check: min-dep-py3 (3.11)
- GitHub Check: min-dep-pytorch (2.5.1)
- GitHub Check: min-dep-os (ubuntu-latest)
- GitHub Check: min-dep-py3 (3.12)
- GitHub Check: min-dep-pytorch (2.8.0)
- GitHub Check: min-dep-py3 (3.9)
- GitHub Check: min-dep-pytorch (2.6.0)
- GitHub Check: min-dep-os (windows-latest)
- GitHub Check: min-dep-pytorch (2.7.1)
🔇 Additional comments (7)
monai/losses/perceptual.py (7)
23-28: LGTM! Constant provides security validation. Frozenset prevents modification and ensures only trusted model names are downloaded from the hub.
35-44: LGTM! Enum properly renamed and expanded. Typo fixed (PercetualNetworkType → PerceptualNetworkType) and new network types added to match available models.
112-116: LGTM! Validation logic is correct. Properly validates network_type against the enum members.
128-144: LGTM! cache_dir properly propagated where supported. MedicalNet and RadImageNet paths receive cache_dir; TorchVision and LPIPS paths correctly omit it as those classes don't support the parameter.
228-243: LGTM! Constructor properly implements validation and hub loading. Model name validated against HF_MONAI_MODELS, cache_dir propagated to torch.hub.load, and trust_repo=True set appropriately.
289-292: LGTM! Critical bug fix in channel-wise slicing. Corrected from `l_idx : i + r_idx` to `l_idx : r_idx`. Since `r_idx = (i + 1) * feats_per_ch`, the original code incorrectly added `i` again.
329-336: LGTM! Constructor matches the MedicalNet pattern. Validates the model name, propagates cache_dir, and uses trust_repo=True with an explicit branch specifier.
Signed-off-by: Virginia Fernandez <virginia.fernandez@kcl.ac.uk>
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
monai/losses/perceptual.py (1)
280-289: Channel-wise MedicalNet path: device/dtype mismatch risk; slice fix is correct. The new slice `feats_diff[:, l_idx:r_idx, ...]` is the right fix for grouping features per input channel. However, `results` is currently allocated on the default CPU device with the default dtype, which will break for GPU inputs or non-default dtypes. Consider:

    feats_diff: torch.Tensor = (feats_input - feats_target) ** 2
    if self.channel_wise:
    -    results = torch.zeros(
    -        feats_diff.shape[0], input.shape[1], feats_diff.shape[2], feats_diff.shape[3], feats_diff.shape[4]
    -    )
    +    results = torch.zeros(
    +        feats_diff.shape[0],
    +        input.shape[1],
    +        feats_diff.shape[2],
    +        feats_diff.shape[3],
    +        feats_diff.shape[4],
    +        dtype=feats_diff.dtype,
    +        device=feats_diff.device,
    +    )
         for i in range(input.shape[1]):
             l_idx = i * feats_per_ch
             r_idx = (i + 1) * feats_per_ch
             results[:, i, ...] = feats_diff[:, l_idx:r_idx, ...].sum(dim=1)
♻️ Duplicate comments (1)
monai/losses/perceptual.py (1)
94-104: Fix typo in MedicalNet warning message (already flagged earlier). The warning string still has the `"supp, ort"` typo; it should read "support".

    if "medicalnet_" in network_type:
        if not channel_wise:
    -        warnings.warn(
    -            "MedicalNet networks supp, ort channel-wise loss. Consider setting channel_wise=True.", stacklevel=2
    -        )
    +        warnings.warn(
    +            "MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.",
    +            stacklevel=2,
    +        )
🧹 Nitpick comments (2)
monai/losses/perceptual.py (2)
31-40: Enum-backed `network_type` default looks good; consider reflecting it in the type hint. Using `PerceptualNetworkType` as the default is nice for discoverability. For stricter typing, you might consider allowing the enum explicitly in the annotation (without changing runtime behavior), e.g.:

    -    network_type: str = PerceptualNetworkType.alex,
    +    network_type: str | PerceptualNetworkType = PerceptualNetworkType.alex,

This remains backwards compatible because `StrEnum` inherits from `str`.

Also applies to: 77-88
212-224: Document the new `cache_dir` parameter in class docstrings. `cache_dir` was added to `MedicalNetPerceptualSimilarity` and `RadImageNetPerceptualSimilarity` but isn't mentioned in their docstrings; this breaks the "all parameters documented" guideline and can confuse users. Suggested docstring updates:

    class MedicalNetPerceptualSimilarity(nn.Module):
        Args:
            net: {``"medicalnet_resnet10_23datasets"``, ``"medicalnet_resnet50_23datasets"``}
                Specifies the network architecture to use. Defaults to ``"medicalnet_resnet10_23datasets"``.
            verbose: if false, mute messages from torch Hub load function.
            channel_wise: if True, the loss is returned per channel. Otherwise the loss is averaged over the channels.
                Defaults to ``False``.
    +       cache_dir: optional path to a cache directory used by ``torch.hub`` for downloading and caching weights.

        Args:
            net: {``"radimagenet_resnet50"``}
                Specifies the network architecture to use. Defaults to ``"radimagenet_resnet50"``.
            verbose: if false, mute messages from torch Hub load function.
    +       cache_dir: optional path to a cache directory used by ``torch.hub`` for downloading and caching weights.
        """

Also applies to: 227-231, 313-323, 325-325
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
monai/losses/perceptual.py (9 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit configuration file
Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.
Files:
monai/losses/perceptual.py
🧬 Code graph analysis (1)
monai/losses/perceptual.py (2)
monai/utils/module.py (1)
optional_import (315-445)
monai/utils/enums.py (1)
StrEnum (68-90)
🪛 Ruff (0.14.8)
monai/losses/perceptual.py
97-99: Avoid specifying long messages outside the exception class
(TRY003)
107-107: Avoid specifying long messages outside the exception class
(TRY003)
235-235: Avoid specifying long messages outside the exception class
(TRY003)
328-328: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (2)
monai/losses/perceptual.py (2)
31-40: PerceptualNetworkType enum definition itself looks solid. Enum values are consistent with the supported backends and the docstring is now spelled correctly; no additional changes needed here.
131-133: RadImageNet hub loading and cache_dir propagation look correct.
`RadImageNetPerceptualSimilarity` now loads from `Project-MONAI/perceptual-models:main` with `trust_repo=True` and forwards `cache_dir` into `torch.hub.load`, matching the intended repository and allowing callers to control cache location. Also applies to: 317-331
```python
# Valid model name to download from the repository
HF_MONAI_MODELS = frozenset(
    ("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets", "radimagenet_resnet50")
)
```
Restrict model validation per family and guard 3D path to MedicalNet only.
As written, HF_MONAI_MODELS is shared by both MedicalNet and RadImageNet, and the 3D branch in PerceptualLoss always instantiates MedicalNetPerceptualSimilarity when spatial_dims == 3 and is_fake_3d is False, regardless of network_type. This leads to:
- `network_type="radimagenet_resnet50"` with `spatial_dims=3` and `is_fake_3d=False` being passed into `MedicalNetPerceptualSimilarity`, which will attempt to run a 2D RadImageNet backbone in a 3D MedicalNet path (shape/device errors at runtime instead of a clean validation failure).
- `MedicalNetPerceptualSimilarity` and `RadImageNetPerceptualSimilarity` both accepting each other's model names because they both use `HF_MONAI_MODELS` directly.
Recommend:
- Split the valid model-name set into MedicalNet-specific and RadImageNet-specific subsets and validate each class against its own subset.
- In the 3D branch, require that `network_type` is a MedicalNet variant before constructing `MedicalNetPerceptualSimilarity`, otherwise raise a clear `ValueError`.
Example patch:
```diff
@@
-# Valid model name to download from the repository
-HF_MONAI_MODELS = frozenset(
-    ("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets", "radimagenet_resnet50")
-)
+# Valid model names to download from the repository
+HF_MONAI_MODELS = frozenset(
+    ("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets", "radimagenet_resnet50")
+)
+HF_MONAI_MEDICALNET_MODELS = frozenset(
+    ("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets")
+)
+HF_MONAI_RADIMAGENET_MODELS = frozenset(("radimagenet_resnet50",))
@@
-        # If spatial_dims is 3, only MedicalNet supports 3D models, otherwise, spatial_dims=2 and fake_3D must be used.
-        if spatial_dims == 3 and is_fake_3d is False:
-            self.perceptual_function = MedicalNetPerceptualSimilarity(
-                net=network_type, verbose=False, channel_wise=channel_wise, cache_dir=cache_dir
-            )
+        # If spatial_dims is 3, only MedicalNet supports 3D models; other networks must use the fake 3D path.
+        if spatial_dims == 3 and is_fake_3d is False:
+            if "medicalnet_" not in network_type:
+                raise ValueError(
+                    "Only MedicalNet networks support 3D perceptual loss with is_fake_3d=False; "
+                    f"got network_type={network_type!r}."
+                )
+            self.perceptual_function = MedicalNetPerceptualSimilarity(
+                net=network_type, verbose=False, channel_wise=channel_wise, cache_dir=cache_dir
+            )
@@
-        if net not in HF_MONAI_MODELS:
-            raise ValueError(f"Invalid download model name '{net}'. Must be one of: {', '.join(HF_MONAI_MODELS)}.")
+        if net not in HF_MONAI_MEDICALNET_MODELS:
+            raise ValueError(
+                f"Invalid MedicalNet model name '{net}'. Must be one of: {', '.join(HF_MONAI_MEDICALNET_MODELS)}."
+            )
@@
-        if net not in HF_MONAI_MODELS:
-            raise ValueError(f"Invalid download model name '{net}'. Must be one of: {', '.join(HF_MONAI_MODELS)}.")
+        if net not in HF_MONAI_RADIMAGENET_MODELS:
+            raise ValueError(
+                f"Invalid RadImageNet model name '{net}'. Must be one of: {', '.join(HF_MONAI_RADIMAGENET_MODELS)}."
+            )
```

Also applies to: 94-107, 125-133, 234-239, 325-331
Signed-off-by: Yun Liu <yunl@nvidia.com>
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
.github/workflows/pythonapp.yml (2)
102-102: Cleanup runs only on Linux in multi-OS job—either apply consistently or document why it's Linux-only.The quick-py3 job runs on windows-latest, macOS-latest, and ubuntu-latest, but this cleanup executes only inside the Linux-conditional step. If toolcache cleanup is necessary for test reproducibility, apply it to all OSes with OS-specific paths. If Linux-only is intentional, add a comment explaining why.
Also consider moving this out of the "Install itk pre-release" step—it's unrelated to ITK installation and belongs in its own cleanup step.
48-48: Add inline comment explaining disk space mitigation.The cleanup removes non-Python toolcache entries to prevent "no space left" errors on GitHub Actions runners. Clarify this in an inline comment since the rationale currently exists only in the commit message, not in the code itself.
Also applies to: 102-102, 153-153, 235-235
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
.github/workflows/pythonapp.yml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: min-dep-pytorch (2.5.1)
- GitHub Check: min-dep-py3 (3.11)
- GitHub Check: min-dep-pytorch (2.6.0)
- GitHub Check: min-dep-py3 (3.12)
- GitHub Check: min-dep-py3 (3.10)
- GitHub Check: min-dep-pytorch (2.8.0)
- GitHub Check: min-dep-py3 (3.9)
- GitHub Check: quick-py3 (windows-latest)
- GitHub Check: flake8-py3 (mypy)
- GitHub Check: quick-py3 (macOS-latest)
- GitHub Check: min-dep-os (macOS-latest)
- GitHub Check: min-dep-pytorch (2.7.1)
- GitHub Check: quick-py3 (ubuntu-latest)
- GitHub Check: flake8-py3 (codeformat)
- GitHub Check: min-dep-os (ubuntu-latest)
- GitHub Check: min-dep-os (windows-latest)
- GitHub Check: build-docs
- GitHub Check: flake8-py3 (pytype)
- GitHub Check: packaging
Signed-off-by: Yun Liu <yunl@nvidia.com>
Signed-off-by: Yun Liu <yunl@nvidia.com>
- Removed the cache weekly timestamp step from the packaging job.
- Removed the heavyweight dependency installs that were triggering OSError: [Errno 28] No space left on device:
- Stopped installing the source archive with "[all]" extras (avoids large optional deps like itk, mlflow, clearml, lpips, etc.).
- Stopped installing requirements-dev.txt in the packaging quick-test phase; switched to requirements-min.txt + tests/min_tests.py.
- Switched packaging to CPU-only PyTorch on Ubuntu using the PyTorch CPU index, and installed it with --no-cache-dir to avoid caching huge wheels.
- Reintroduced a controlled pip cache for packaging:
Cache path is pip cache dir.
Cache key uses hashFiles('requirements*.txt', 'setup.cfg', 'pyproject.toml') so it updates when dependencies change.
Kept torch installs --no-cache-dir to reduce disk pressure, while allowing smaller deps (e.g., requirements-min.txt) to benefit from caching.
cc @ericspod
/build
Actionable comments posted: 0
🧹 Nitpick comments (1)
setup.py (1)
150-150: Replace `cast(Any, ...)` with an explicit type annotation for `package_data`.

The `cast(Any, ...)` works around incomplete setuptools type stubs. Instead, assign to a typed variable:

```python
package_data: dict[str, list[str]] = {"monai": ["py.typed", *jit_extension_source]}
```

Then pass `package_data` to `setup()`. This provides proper type safety without suppressing all type checks.
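A self-contained sketch of that suggestion (here `jit_extension_source` is a placeholder for the real file list computed earlier in `setup.py`):

```python
# Placeholder for the list of JIT extension source files that setup.py
# assembles before this point; empty here for illustration only.
jit_extension_source: list[str] = []

# An explicit annotation lets type checkers validate the mapping's shape,
# unlike cast(Any, ...), which suppresses all checking on the value.
package_data: dict[str, list[str]] = {"monai": ["py.typed", *jit_extension_source]}
```

The annotated variable would then be passed as `setup(..., package_data=package_data)`.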
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
setup.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit configuration file
Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.
Files:
setup.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: flake8-py3 (pytype)
- GitHub Check: quick-py3 (macOS-latest)
- GitHub Check: flake8-py3 (codeformat)
- GitHub Check: flake8-py3 (mypy)
- GitHub Check: quick-py3 (ubuntu-latest)
- GitHub Check: quick-py3 (windows-latest)
- GitHub Check: packaging
- GitHub Check: build-docs
- GitHub Check: min-dep-py3 (3.11)
- GitHub Check: min-dep-pytorch (2.8.0)
- GitHub Check: min-dep-pytorch (2.6.0)
- GitHub Check: min-dep-pytorch (2.5.1)
- GitHub Check: min-dep-pytorch (2.7.1)
- GitHub Check: min-dep-os (macOS-latest)
- GitHub Check: min-dep-py3 (3.12)
- GitHub Check: min-dep-py3 (3.10)
- GitHub Check: min-dep-py3 (3.9)
- GitHub Check: min-dep-os (ubuntu-latest)
- GitHub Check: min-dep-os (windows-latest)
🔇 Additional comments (1)
setup.py (1)
19-19: Import is required for line 150.
/build
…-the-maximum-number-of-downloads Signed-off-by: Rafael Garcia-Dias <rafaelagd@gmail.com>
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
monai/losses/perceptual.py (1)
223-227: ⚠️ Potential issue | 🟡 Minor: Document the new `cache_dir` arg and raised exceptions in docstrings.

`cache_dir` was added to both constructors but is not documented in the Args sections; the `ValueError` behavior is also undocumented.

As per coding guidelines, Docstrings should be present for all definitions which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings.
Also applies to: 230-236, 323-327, 329-329
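A sketch of the requested Google-style documentation. The class below is an illustrative stand-in, not the real `MedicalNetPerceptualSimilarity`; only the docstring layout matters, and the wording of the Args/Raises entries is an assumption:

```python
from __future__ import annotations


class MedicalNetPerceptualSimilaritySketch:
    """Illustrative stand-in showing the docstring shape the review asks for."""

    def __init__(self, net: str = "medicalnet_resnet10_23datasets", cache_dir: str | None = None) -> None:
        """Compute perceptual similarity with a pretrained MedicalNet backbone.

        Args:
            net: name of the pretrained model to download from the hub.
            cache_dir: directory where ``torch.hub`` caches downloaded weights.
                Defaults to ``None``, which uses the default hub cache location.

        Raises:
            ValueError: if ``net`` is not one of the supported model names.
        """
        self.net = net
        self.cache_dir = cache_dir
```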
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/losses/perceptual.py` around lines 223 - 227, Update the Google-style docstrings for the PerceptualLoss constructors (e.g., __init__ in the PerceptualLoss and any sibling classes/functions mentioned around lines ~223–329) to document the new cache_dir parameter in the Args section (type, default behavior, and purpose) and to list the ValueError exceptions raised (include conditions when ValueError is thrown, e.g., invalid net string or missing pretrained weights) in the Raises section; ensure the Returns/Attributes and channel_wise/verbose descriptions remain intact and mirror the existing style used elsewhere in the module.
♻️ Duplicate comments (2)
monai/losses/perceptual.py (2)
106-108: ⚠️ Potential issue | 🟡 Minor: Fix typo in warning text.

Line 107 has `"supp, ort"`.

Proposed fix:

```diff
-    "MedicalNet networks supp, ort channel-wise loss. Consider setting channel_wise=True.", stacklevel=2
+    "MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.", stacklevel=2
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/losses/perceptual.py` around lines 106 - 108, The warning text in the warnings.warn call in perceptual.py contains a typo ("supp, ort"); update the message string used in that warnings.warn invocation (the one mentioning MedicalNet networks and channel_wise=True) to read "support" instead of "supp, ort" so the full message reads clearly (e.g., "MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.").
22-25: ⚠️ Potential issue | 🟠 Major: Restrict model-family routing and validation.

`radimagenet_resnet50` can currently enter the real-3D MedicalNet path (Line 130) and also passes MedicalNet `net` validation (Line 238) because both classes use the shared `HF_MONAI_MODELS`. This should fail fast with a clear error.

Proposed fix:

```diff
-HF_MONAI_MODELS = frozenset(
-    ("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets", "radimagenet_resnet50")
-)
+HF_MONAI_MODELS = frozenset(
+    ("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets", "radimagenet_resnet50")
+)
+HF_MONAI_MEDICALNET_MODELS = frozenset(("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets"))
+HF_MONAI_RADIMAGENET_MODELS = frozenset(("radimagenet_resnet50",))
@@
-        if spatial_dims == 3 and is_fake_3d is False:
+        if spatial_dims == 3 and is_fake_3d is False:
+            if not network_type.startswith("medicalnet_"):
+                raise ValueError(
+                    "Only MedicalNet networks support real 3D perceptual loss (spatial_dims=3, is_fake_3d=False)."
+                )
             self.perceptual_function = MedicalNetPerceptualSimilarity(
                 net=network_type, verbose=False, channel_wise=channel_wise, cache_dir=cache_dir
             )
@@
-        if net not in HF_MONAI_MODELS:
-            raise ValueError(f"Invalid download model name '{net}'. Must be one of: {', '.join(HF_MONAI_MODELS)}.")
+        if net not in HF_MONAI_MEDICALNET_MODELS:
+            raise ValueError(
+                f"Invalid MedicalNet model name '{net}'. Must be one of: {', '.join(HF_MONAI_MEDICALNET_MODELS)}."
+            )
@@
-        if net not in HF_MONAI_MODELS:
-            raise ValueError(f"Invalid download model name '{net}'. Must be one of: {', '.join(HF_MONAI_MODELS)}.")
+        if net not in HF_MONAI_RADIMAGENET_MODELS:
+            raise ValueError(
+                f"Invalid RadImageNet model name '{net}'. Must be one of: {', '.join(HF_MONAI_RADIMAGENET_MODELS)}."
+            )
```

Also applies to: 130-137, 238-239, 331-332
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/losses/perceptual.py` around lines 22 - 25, HF_MONAI_MODELS is currently used for both MedicalNet and RadImageNet routing/validation which allows radimagenet_resnet50 to be treated as a MedicalNet model; split the single set into two explicit sets (e.g., HF_MEDICALNET_MODELS and HF_RADIMAGENET_MODELS), update the model-selection/routing logic that decides the "MedicalNet path" to check HF_MEDICALNET_MODELS only, and update the MedicalNet "net" validation to fail fast (raise ValueError) if the chosen model is not in HF_MEDICALNET_MODELS; replace all checks that currently reference HF_MONAI_MODELS (the model routing block and the MedicalNet validation logic) to use the appropriate new set so radimagenet_resnet50 cannot enter the MedicalNet path.
🧹 Nitpick comments (1)
monai/losses/perceptual.py (1)
99-112: Add negative tests for the new validation/routing rules.

Please add tests for:

- non-MedicalNet + `channel_wise=True` → `ValueError`
- non-MedicalNet + `spatial_dims=3` + `is_fake_3d=False` → `ValueError`

As per coding guidelines, Ensure new or modified definitions will be covered by existing or new unit tests.
Also applies to: 129-137
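Such tests could look roughly like the following. This is a self-contained sketch: the `check_config` function is a hypothetical stand-in that mirrors the intended constructor-time validation, since real tests would instantiate `PerceptualLoss` itself:

```python
def check_config(network_type: str, spatial_dims: int, is_fake_3d: bool, channel_wise: bool) -> None:
    """Stand-in for the validation rules the review asks to cover."""
    is_medicalnet = network_type.startswith("medicalnet_")
    if channel_wise and not is_medicalnet:
        raise ValueError("Only MedicalNet networks support channel_wise=True.")
    if spatial_dims == 3 and not is_fake_3d and not is_medicalnet:
        raise ValueError(
            "Only MedicalNet networks support real 3D perceptual loss (spatial_dims=3, is_fake_3d=False)."
        )


# Negative cases requested in the review: both must raise ValueError.
for kwargs in (
    dict(network_type="radimagenet_resnet50", spatial_dims=2, is_fake_3d=False, channel_wise=True),
    dict(network_type="radimagenet_resnet50", spatial_dims=3, is_fake_3d=False, channel_wise=False),
):
    try:
        check_config(**kwargs)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError(f"expected ValueError for {kwargs}")
```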
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/losses/perceptual.py` around lines 99 - 112, The new routing/validation in perceptual loss incorrectly lacks unit tests for non-MedicalNet branches: add negative tests that instantiate the relevant constructor/function (the code paths using the variables network_type, channel_wise, spatial_dims, is_fake_3d) and assert it raises ValueError for (1) non-MedicalNet network_type with channel_wise=True, and (2) non-MedicalNet with spatial_dims=3 and is_fake_3d=False; add analogous tests covering the duplicate validation block referenced around the second block (lines ~129-137) to ensure both validation locations raise the expected ValueError; keep tests focused on raising ValueError and include clear case names.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/pythonapp.yml:
- Line 129: Remove the unsupported "cache: 'pip'" input from the
actions/cache@v4 step: delete the `cache: 'pip'` line in the workflow and, if
you intended to use setup-python's pip caching, add the `cache: 'pip'` input to
the actions/setup-python@... step (the step that calls setup-python) instead;
ensure the actions/cache@v4 step only uses supported inputs for that action.
In `@monai/losses/perceptual.py`:
- Around line 100-101: Normalize the network_type variable once (e.g., assign
network_type = network_type.lower()) at the top of the decision/dispatch block
in perceptual.py so all subsequent checks and branches (references to
network_type in the "medicalnet_" check, the validation that calls
network_type.lower(), and the branches around lines handling network_type values
like the blocks that currently inspect network_type at lines ~100, ~114-118,
~130-138) operate on a consistent lowercase value; update any subsequent
comparisons to expect lowercase values and ensure you only normalize once before
any logic that reads network_type.
---
Outside diff comments:
In `@monai/losses/perceptual.py`:
- Around line 223-227: Update the Google-style docstrings for the PerceptualLoss
constructors (e.g., __init__ in the PerceptualLoss and any sibling
classes/functions mentioned around lines ~223–329) to document the new cache_dir
parameter in the Args section (type, default behavior, and purpose) and to list
the ValueError exceptions raised (include conditions when ValueError is thrown,
e.g., invalid net string or missing pretrained weights) in the Raises section;
ensure the Returns/Attributes and channel_wise/verbose descriptions remain
intact and mirror the existing style used elsewhere in the module.
---
Duplicate comments:
In `@monai/losses/perceptual.py`:
- Around line 106-108: The warning text in the warnings.warn call in
perceptual.py contains a typo ("supp, ort"); update the message string used in
that warnings.warn invocation (the one mentioning MedicalNet networks and
channel_wise=True) to read "support" instead of "supp, ort" so the full message
reads clearly (e.g., "MedicalNet networks support channel-wise loss. Consider
setting channel_wise=True.").
- Around line 22-25: HF_MONAI_MODELS is currently used for both MedicalNet and
RadImageNet routing/validation which allows radimagenet_resnet50 to be treated
as a MedicalNet model; split the single set into two explicit sets (e.g.,
HF_MEDICALNET_MODELS and HF_RADIMAGENET_MODELS), update the
model-selection/routing logic that decides the "MedicalNet path" to check
HF_MEDICALNET_MODELS only, and update the MedicalNet "net" validation to fail
fast (raise ValueError) if the chosen model is not in HF_MEDICALNET_MODELS;
replace all checks that currently reference HF_MONAI_MODELS (the model routing
block and the MedicalNet validation logic) to use the appropriate new set so
radimagenet_resnet50 cannot enter the MedicalNet path.
---
Nitpick comments:
In `@monai/losses/perceptual.py`:
- Around line 99-112: The new routing/validation in perceptual loss incorrectly
lacks unit tests for non-MedicalNet branches: add negative tests that
instantiate the relevant constructor/function (the code paths using the
variables network_type, channel_wise, spatial_dims, is_fake_3d) and assert it
raises ValueError for (1) non-MedicalNet network_type with channel_wise=True,
and (2) non-MedicalNet with spatial_dims=3 and is_fake_3d=False; add analogous
tests covering the duplicate validation block referenced around the second block
(lines ~129-137) to ensure both validation locations raise the expected
ValueError; keep tests focused on raising ValueError and include clear case
names.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 11c5ea8c-bb48-4391-9eb9-584b7767bab3
📒 Files selected for processing (2)
.github/workflows/pythonapp.yml
monai/losses/perceptual.py
```yaml
        with:
          path: ${{ steps.pip-cache.outputs.dir }}
          key: ${{ runner.os }}-packaging-pip-${{ hashFiles('requirements*.txt', 'setup.cfg', 'pyproject.toml') }}
          cache: 'pip'
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Verify all actions/cache usages and unsupported "cache:" input nearby.
rg -n -C2 'uses:\s*actions/cache@v4|^\s+cache:\s*' .github/workflows/*.yml
```

Repository: Project-MONAI/MONAI
Length of output: 1607
Remove unsupported cache input from actions/cache@v4.
Line 129 uses cache: 'pip', which is not a valid input for actions/cache@v4. The cache input belongs to actions/setup-python. Remove it.
Proposed fix
```diff
       - name: Cache pip (packaging)
         uses: actions/cache@v4
         with:
           path: ${{ steps.pip-cache.outputs.dir }}
           key: ${{ runner.os }}-packaging-pip-${{ hashFiles('requirements*.txt', 'setup.cfg', 'pyproject.toml') }}
-          cache: 'pip'
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-          cache: 'pip'
+      - name: Cache pip (packaging)
+        uses: actions/cache@v4
+        with:
+          path: ${{ steps.pip-cache.outputs.dir }}
+          key: ${{ runner.os }}-packaging-pip-${{ hashFiles('requirements*.txt', 'setup.cfg', 'pyproject.toml') }}
+          restore-keys: |
+            ${{ runner.os }}-packaging-pip-
```
🧰 Tools
🪛 actionlint (1.7.11)
[error] 129-129: input "cache" is not defined in action "actions/cache@v4". available inputs are "enableCrossOsArchive", "fail-on-cache-miss", "key", "lookup-only", "path", "restore-keys", "save-always", "upload-chunk-size"
(action)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/pythonapp.yml at line 129, Remove the unsupported "cache:
'pip'" input from the actions/cache@v4 step: delete the `cache: 'pip'` line in
the workflow and, if you intended to use setup-python's pip caching, add the
`cache: 'pip'` input to the actions/setup-python@... step (the step that calls
setup-python) instead; ensure the actions/cache@v4 step only uses supported
inputs for that action.
```python
        if "medicalnet_" in network_type:
            if spatial_dims == 2 or is_fake_3d:
```
Normalize network_type once before all checks/dispatch.
Validation uses `network_type.lower()` (Line 114), but earlier/later logic uses the raw `network_type` (Lines 100, 130, 134, 138). Mixed-case input can pass validation and still route incorrectly or fail later.
Proposed fix
```diff
         super().__init__()
+        network_type = network_type.lower()
@@
-        if "medicalnet_" in network_type:
+        if "medicalnet_" in network_type:
@@
-            if network_type.lower() not in list(PerceptualNetworkType):
+            if network_type not in list(PerceptualNetworkType):
                 raise ValueError(
                     "Unrecognised criterion entered for Perceptual Loss. Must be one in: %s"
                     % ", ".join(PerceptualNetworkType)
                 )
```

Also applies to: 114-118, 130-138
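The normalize-once pattern can be sketched as follows. This is illustrative: the set of valid names here is an assumption standing in for `PerceptualNetworkType`, and `resolve_network_type` is a hypothetical helper, not MONAI code:

```python
VALID_NETWORK_TYPES = frozenset(
    (
        "radimagenet_resnet50",
        "medicalnet_resnet10_23datasets",
        "medicalnet_resnet50_23datasets",
    )
)


def resolve_network_type(network_type: str) -> str:
    """Lowercase once up front so every later check sees the same value."""
    network_type = network_type.lower()
    if network_type not in VALID_NETWORK_TYPES:
        raise ValueError(
            "Unrecognised criterion entered for Perceptual Loss. Must be one in: "
            + ", ".join(sorted(VALID_NETWORK_TYPES))
        )
    return network_type
```

With this, mixed-case input such as `"MedicalNet_ResNet10_23datasets"` is normalized before any routing, so validation and dispatch can never disagree.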
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@monai/losses/perceptual.py` around lines 100 - 101, Normalize the
network_type variable once (e.g., assign network_type = network_type.lower()) at
the top of the decision/dispatch block in perceptual.py so all subsequent checks
and branches (references to network_type in the "medicalnet_" check, the
validation that calls network_type.lower(), and the branches around lines
handling network_type values like the blocks that currently inspect network_type
at lines ~100, ~114-118, ~130-138) operate on a consistent lowercase value;
update any subsequent comparisons to expect lowercase values and ensure you only
normalize once before any logic that reads network_type.
Fixes #8627.
Moves the perceptual loss model code to the MONAI repository https://github.com/Project-MONAI/perceptual-models and the checkpoints to Hugging Face.
Description
This PR changes and simplifies the `torch.hub` loading process and fetches the models from the Hugging Face Hub.
Types of changes
- Integration tests passed locally by running `./runtests.sh -f -u --net --coverage`.
- Quick tests passed locally by running `./runtests.sh --quick --unittests --disttests`.
- Documentation updated, tested with the `make html` command in the `docs/` folder.