Conversation

@locastre
Contributor

Submitting the SMIT CT Lung GTV segmentation model for mHub

@github-project-automation github-project-automation bot moved this to Submitted Implementations in MHub Submissions Mar 17, 2025
Member

@LennyN95 LennyN95 left a comment


Thank you for your contribution 🚀

  • Can we avoid using conda and instead install all requirements with uv? Our base image comes with uv installed and a virtual environment set up. However, it is suggested that you create your own virtual environment with uv, e.g., `uv venv -p 3.10 .venv310`.
  • The contents of the `meta.json` are used to populate the model card on our website under mhub.ai/models. The more information, the better.
  • To test-build the model and to move forward with our test routine, an `mhub.toml` file needs to be created.
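For illustration, the uv-based setup might look roughly like this in the model's Dockerfile (a sketch only; the `requirements.txt` name and the exact chaining are assumptions, not mHub conventions):

```dockerfile
# Sketch: create a dedicated uv-managed virtual environment and install
# the model's dependencies into it. The base image is assumed to ship uv.
RUN uv venv -p 3.10 .venv310 && \
    . .venv310/bin/activate && \
    uv pip install -r requirements.txt
```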

@locastre
Contributor Author

@LennyN95 Thank you for the in-depth feedback! We're working to address the suggestions.

I have an additional question: when we upload our test data to Zenodo (for the mhub.toml), should we refer to the mHub DOI# 13785615 or create a new DOI from an independent/new entry?

@fedorov
Member

fedorov commented Mar 18, 2025

I would recommend renaming the PR to "SMIT CT Lung GTV segmentation model" (or something similar).

@LennyN95 LennyN95 changed the title New Model Submission SMIT CT Lung GTV segmentation model Mar 19, 2025
@LennyN95
Member

LennyN95 commented Mar 19, 2025

You're more than welcome. Thank you and your team for the great work.

> I have an additional question: when we upload our test data to Zenodo (for the mhub.toml), should we refer to the mHub DOI# 13785615 or create a new DOI from an independent/new entry?

@locastre We mainly use Zenodo for its reproducible storage mechanism, so you can create a new DOI for your sample data and reference it.

@LennyN95
Member

@locastre FYI: there are also some errors in the compliance check that need to be resolved:

Errors:

@locastre locastre marked this pull request as ready for review April 21, 2025 16:19
@locastre
Contributor Author

@LennyN95 Hopefully we're ready for your review; we've addressed your feedback and tested the model within the mHub framework. Thanks for your help with this process.

@LennyN95 LennyN95 moved this from Submitted Implementations to Under Review in MHub Submissions Apr 22, 2025
@LennyN95
Member


Test Results
id: 7e53fc04-004d-4f0e-83cb-803fc3bd4b8f
name: MHub Test Report (default)
date: '2025-04-22 12:06:20'
missing_files:
- seg.dcm
summary:
  files_missing: 1
  files_extra: 0
  checks: {}
conclusion: false
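For readers unfamiliar with these reports: `conclusion` follows directly from the `summary` counts — a run passes only when no expected file is missing and no unexpected file appears. A minimal sketch of that rule (the helper name is made up, not part of mHub):

```python
def report_conclusion(summary: dict) -> bool:
    """Hypothetical helper mirroring the report's pass/fail rule:
    the produced output must match the reference exactly."""
    return summary.get("files_missing", 0) == 0 and summary.get("files_extra", 0) == 0

# The report above: one expected file (seg.dcm) was not produced.
print(report_conclusion({"files_missing": 1, "files_extra": 0}))  # False
```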

@LennyN95
Member

Thank you Jue for the great work!

Unfortunately, the test failed due to what looks like a GPU incompatibility (see below). We use an NVIDIA RTX A6000 GPU for testing, and so far all models have worked fine - can you have a look? Ideally, we can find a solution that works across a wide range of user hardware.

/app/.venv39/lib/python3.9/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2895.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/app/.venv39/lib/python3.9/site-packages/torch/cuda/__init__.py:146: UserWarning:
NVIDIA RTX A6000 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA RTX A6000 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
val data size is  1
info: started to load weight:  trained_weights/model_Mhub.pt
info: model emb feature is :  96
info: Successfully loaded trained weights:  trained_weights/model_Mhub.pt
Traceback (most recent call last):
  File "/app/models/msk_smit_lung_gtv/src/run_segmentation.py", line 252, in <module>
    main()
  File "/app/models/msk_smit_lung_gtv/src/run_segmentation.py", line 218, in main
    val_data["pred"] = sliding_window_inference(val_inputs,
  File "/app/models/msk_smit_lung_gtv/src/edit_inference_utils.py", line 120, in sliding_window_inference
    importance_map = compute_importance_map(
  File "/app/.venv39/lib/python3.9/site-packages/monai/data/utils.py", line 763, in compute_importance_map
    importance_map = torch.ones(patch_size, device=device).float()
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Done in 19.371 seconds.
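This is the classic symptom of a PyTorch wheel built without kernels for the GPU's compute capability: the A6000 reports sm_86, but the installed wheel only ships binaries for sm_37 through sm_70. A torch-free sketch of the compatibility rule (simplified: it ignores PTX JIT fallback, and the values are taken from the log above):

```python
def has_kernel_image(device_cap: str, wheel_archs: list) -> bool:
    """Simplified check: a compiled CUDA kernel (cubin) runs on a device
    with the same major compute capability and an equal or higher minor
    version. PTX JIT fallback is deliberately ignored here."""
    def parse(arch):
        digits = arch.split("_")[1]          # "sm_86" -> "86"
        return int(digits[0]), int(digits[1:])

    dev = parse(device_cap)
    return any(parse(a)[0] == dev[0] and parse(a) <= dev for a in wheel_archs)

# Values from the failing test: RTX A6000 vs. the wheel's supported archs.
print(has_kernel_image("sm_86", ["sm_37", "sm_50", "sm_60", "sm_70"]))  # False
```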

@locastre
Contributor Author

@LennyN95 Hi Lenny, I added an explicit install command for torch==1.12.1+cu116 which should properly support sm_86. Let us know if this works. Thanks!
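For reference, pinning a CUDA-specific build typically goes through PyTorch's per-CUDA wheel index; the exact line in the PR may differ, but it presumably looks something like:

```shell
# Install a torch 1.12.1 build with sm_86 kernels (CUDA 11.6 wheels).
uv pip install torch==1.12.1+cu116 \
    --extra-index-url https://download.pytorch.org/whl/cu116
```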

@LennyN95
Member


Test Results
id: aa683d35-ae95-4dd2-bdb9-4c7bc81cfe9c
name: MHub Test Report (default)
date: '2025-04-23 09:37:41'
missing_files:
- seg.dcm
extra_files:
- 1.3.6.1.4.1.14519.5.2.1.1.11635178980898764572976586249071182079/smit.seg.dcm
summary:
  files_missing: 1
  files_extra: 1
  checks: {}
conclusion: false

@LennyN95
Member

Thank you @locastre. The model now runs; we're almost there! The test is still failing because the reference data doesn't match the model output (see report). Please make sure that the reference data looks like:

data/output_data/
`-- 1.3.6.1.4.1.14519.5.2.1.1.11635178980898764572976586249071182079
    `-- smit.seg.dcm

You can follow the steps here to prepare the output for the test procedure.

To improve usability, could you change the filename from `smit.seg.dcm` to `<model_name>.seg.dcm`? We do not enforce this convention, but it would make downstream automation a lot more convenient. Alternatively, rename the model and the folder consistently to `smit`.
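The missing/extra bookkeeping in the test reports amounts to comparing the reference and produced trees by relative file path. An illustrative sketch (the function is hypothetical; mHub's actual checker does more, e.g., DICOM SEG content checks):

```python
from pathlib import Path

def diff_trees(reference: Path, produced: Path) -> tuple:
    """Compare two output trees by relative file path.
    Returns (missing, extra): files expected but not produced,
    and files produced but absent from the reference layout."""
    ref = {p.relative_to(reference).as_posix()
           for p in reference.rglob("*") if p.is_file()}
    out = {p.relative_to(produced).as_posix()
           for p in produced.rglob("*") if p.is_file()}
    return ref - out, out - ref
```

In the failing run above, the reference expected seg.dcm at the top level while the model wrote its segmentation under the series-UID folder, so both sets were non-empty and the test concluded false.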

@locastre
Contributor Author

@LennyN95 thanks for the catch. I've updated the data in the test zip and renamed the output SEG to `msk_smit_lung_gtv.seg.dcm` in `default.yml`, as you suggested.

@LennyN95
Member


Test Results
id: d5b328ee-037e-4909-9827-65e6e1c31a67
name: MHub Test Report (default)
date: '2025-04-23 19:08:01'
checked_files:
- file: msk_smit_lung_gtv.seg.dcm
  path: /app/data/output_data/1.3.6.1.4.1.14519.5.2.1.1.11635178980898764572976586249071182079/msk_smit_lung_gtv.seg.dcm
  checks:
  - checker: DicomsegContentCheck
    notes:
    - label: Segment Count
      description: The number of segments identified in the inspected dicomseg file.
      info: 1
summary:
  files_missing: 0
  files_extra: 0
  checks:
    DicomsegContentCheck:
      files: 1
conclusion: true

@LennyN95
Member

@locastre Amazing, we now passed all tests - yay! Thank you for the great work!
Do you have 1-5 example images you would like to have on the website? You can post them here :)

@locastre
Contributor Author

[Screenshot: SceneView of a representative segmentation]
@LennyN95 Jue provided this screencap of a representative segmentation. Will this do?

@LennyN95
Member

@locastre, perfect! I have one last request that I initially overlooked. Could you add the sample annotation to the default.yaml workflow? This addition will help our users understand how the input and output data are organized. It also gives us a chance to explain the role of each file and folder, which is quite self-explanatory in the case of DICOM. This is a fairly new feature I added, and we are gradually rolling it out for all legacy models as well. You can look at this model for more examples.

@locastre
Contributor Author

@LennyN95 sample annotation added to end of default.yml

@github-project-automation github-project-automation bot moved this from Under Review to Ready for Testing in MHub Submissions Apr 30, 2025
@LennyN95 LennyN95 merged commit 82896b0 into MHubAI:main Apr 30, 2025
1 check passed
@github-project-automation github-project-automation bot moved this from Ready for Testing to Done in MHub Submissions Apr 30, 2025
@LennyN95
Member

@locastre The model is now online. Amazing work!! FYI: I shortened the title to SMIT Self-supervised Lung GTV Segmentation; please let me know if you prefer an alternative version.

