[BIBM‘24] This repository is the official implementation for "UniUSNet: A Promptable Framework for Universal Ultrasound Disease Prediction and Tissue Segmentation".


UniUSNet: A Promptable Framework for Universal Ultrasound Disease Prediction and Tissue Segmentation

UniUSNet is a universal framework for ultrasound image classification and segmentation, featuring:

  • A novel promptable module for incorporating detailed information into the model's learning process.
  • Versatility across various ultrasound natures, anatomical positions, and input types.
  • Proficiency in both segmentation and classification tasks.
  • Strong generalization capabilities demonstrated through zero-shot and fine-tuning experiments on new datasets.

For more details, see the accompanying paper and the Project Page.

UniUSNet: A Promptable Framework for Universal Ultrasound Disease Prediction and Tissue Segmentation
Zehui Lin, Zhuoneng Zhang, Xindi Hu, Zhifan Gao, Xin Yang, Yue Sun, Dong Ni, Tao Tan. BIBM, 2024.

Installation

  • Clone this repository.
git clone https://github.com/Zehui-Lin/UniUSNet.git
cd UniUSNet
  • Create a new conda environment.
conda create -n UniUSNet python=3.10
conda activate UniUSNet
  • Install the required packages.
pip install -r requirements.txt

Data

data
├── classification
│   ├── UDIAT
│   │   ├── 0
│   │   │   ├── 000001.png
│   │   │   └── ...
│   │   ├── 1
│   │   │   ├── 000100.png
│   │   │   └── ...
│   │   ├── config.yaml
│   │   ├── test.txt
│   │   ├── train.txt
│   │   └── val.txt
│   └── ...
└── segmentation
    ├── BUSIS
    │   ├── config.yaml
    │   ├── imgs
    │   │   ├── 000001.png
    │   │   └── ...
    │   ├── masks
    │   │   ├── 000001.png
    │   │   └── ...
    │   ├── test.txt
    │   ├── train.txt
    │   └── val.txt
    └── ...
  • Please refer to the data_demo folder for examples.
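If you add your own dataset, a quick way to catch layout mistakes is to check each dataset folder for the files the tree above expects. Below is a minimal sketch; the check_dataset helper is an assumption for illustration, not part of this repository:

```python
from pathlib import Path

# Files every dataset folder is expected to contain, per the tree above.
REQUIRED = ["config.yaml", "train.txt", "val.txt", "test.txt"]

def check_dataset(root):
    """Return a list of 'dataset: missing file' strings for each dataset under root."""
    problems = []
    for task_dir in Path(root).iterdir():      # classification / segmentation
        if not task_dir.is_dir():
            continue
        for ds in task_dir.iterdir():          # e.g. UDIAT, BUSIS
            if not ds.is_dir():
                continue
            for name in REQUIRED:
                if not (ds / name).is_file():
                    problems.append(f"{ds.name}: missing {name}")
    return problems

if __name__ == "__main__":
    issues = check_dataset("data")
    print("\n".join(issues) if issues else "All dataset folders look complete.")
```

Run it from the repository root before training to confirm the data folder matches the expected structure.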

Training

We use torch.distributed for multi-GPU training (single-GPU training is also supported). To train the model, run the following command:

python -m torch.distributed.launch --nproc_per_node=1 --master_port=1234 omni_train.py --output_dir exp_out/trial_1 --prompt

Testing

To test the model, run the following command:

python -m torch.distributed.launch --nproc_per_node=1 --master_port=1234 omni_test.py --output_dir exp_out/trial_1 --prompt

Checkpoints

Pretrained Weights

To train your own model, please download the Swin Transformer backbone weights and place them in the pretrained_ckpt/ directory.

The folder structure should look like:

pretrained_ckpt
└── swin_tiny_patch4_window7_224.pth
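A missing or partially downloaded backbone file is a common source of launch failures, so it can help to verify the checkpoint before starting training. A small pre-flight sketch (the ckpt_ready helper is a hypothetical addition; only the filename comes from the tree above):

```python
from pathlib import Path

# Expected backbone checkpoint location, per the tree above.
CKPT = Path("pretrained_ckpt") / "swin_tiny_patch4_window7_224.pth"

def ckpt_ready(path=CKPT):
    """True if the checkpoint file exists and is non-empty (guards against
    partial downloads that would fail to load)."""
    p = Path(path)
    return p.is_file() and p.stat().st_size > 0

if __name__ == "__main__":
    if not ckpt_ready():
        raise SystemExit(f"Missing or empty checkpoint: {CKPT}")
    print("Checkpoint found.")
```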

Citation

If you find this work useful, please consider citing:

@inproceedings{lin2024uniusnet,
  title={UniUSNet: A Promptable Framework for Universal Ultrasound Disease Prediction and Tissue Segmentation},
  author={Lin, Zehui and Zhang, Zhuoneng and Hu, Xindi and Gao, Zhifan and Yang, Xin and Sun, Yue and Ni, Dong and Tan, Tao},
  booktitle={2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)},
  pages={3501--3504},
  year={2024},
  organization={IEEE}
}

Acknowledgements

This repository is based on the Swin-Unet repository. We thank the authors for their contributions.
