Init #1

Merged 16 commits on Apr 5, 2024

1 change: 1 addition & 0 deletions .gitignore
@@ -158,3 +158,4 @@ cython_debug/
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
tb_logs/
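
The newly ignored `tb_logs/` directory is presumably where TensorBoard logs are written during training; if so, runs can be monitored with:

```bash
tensorboard --logdir tb_logs
```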
39 changes: 39 additions & 0 deletions BRATS_LA_Pancreas/README.md
@@ -0,0 +1,39 @@
# Model Weights and Visualizations

You can download the learned weights and their results from the links below.
Dataset | Visualization | Weights
-----------|----------------|----------------
BRaTS 2018 | [[Download](https://mega.nz/folder/L4oQmAzL#OxnzxfbQZ8Oanr0iaKJ0qw)]| [[Download](https://mega.nz/file/O1w02BAS#BNK2p2QZ2x2MDE5HKEe7Xwms7J5DMPlyYBMnUEjjwQ8)]
LA (fold 0) | [[Download](https://mega.nz/folder/SpImhTRD#Rcyh4ernRwpWuma9EOLh0g)]| [[Download](https://mega.nz/file/TkIG0DDD#bTwRrWe_ht3hoitKWsH88LePoN0vokmBHYo8Xhsk8wU)]
Pancreas (fold 0) | [[Download](https://mega.nz/folder/2sAEiBSb#_apiwqhLrCT-RMDm5Qsg4g)]| [[Download](https://mega.nz/file/z84ijCzA#ZLW7_-qqpfY0qlwoJAbu6bD0hv29iiRBjuFpFe4Sdsc)]

# BRaTS 2018 dataset
Download the dataset: [Download](https://mega.nz/file/ShpgECbC#rpZr_lWFr5ZIk8vXe7AVCFEPLZIFxFg7NnQ-Vk16LrM)

## Training
```bash
python src/train_test.py -c brats-2018_lhunet_sgd
```
**Before running the command above, please adjust the paths in the config file.**

## Inference
In the config file, set the `test_mode` variable to `true` and set `ckpt_path` to the checkpoint you want to test.
```bash
python src/train_test.py -c brats-2018_lhunet_sgd
```
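
For reference, the `checkpoints` section of `brats-2018_lhunet_sgd.yaml` would then look like the sketch below (the checkpoint path is illustrative; point it at your own file):

```yaml
checkpoints:
  continue_training: false
  test_mode: true # run inference instead of training
  ckpt_path: '/path/to/best.ckpt' # checkpoint to evaluate
  save_nifty: true # save predictions as NIfTI files
```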

# Pancreas and LA datasets
Download the pre-processed datasets with their splits: [Pancreas](https://mega.nz/file/21p3ATLY#IZuAzvqXD8CymZZibN2oqLhfK0nZrBx8hWyk76SZRNk), [LA](https://mega.nz/file/K84Q3RKK#XDKPoSeYerwPJC7mcVyiTOM-Ydfv3TckDnAKkhpEVdY)

## Training
```bash
python src/train_test.py -c {dataset}_lhunet_sgd -f {fold_number}
```
Replace `{dataset}` with `pancreas` or `la`, depending on which dataset you want to train on. For the Pancreas dataset, `{fold_number}` ranges from 0 to 3 (four-fold cross-validation); for the LA dataset, it ranges from 0 to 4 (five-fold cross-validation). Example commands are shown below.
**Before running the command above, please adjust the paths in the config file.**
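
For example, fold 0 of each dataset would be trained as follows (the LA config name matches the file in this PR; the Pancreas config is assumed to follow the same naming):

```bash
# Pancreas, fold 0 (folds 0-3)
python src/train_test.py -c pancreas_lhunet_sgd -f 0

# LA, fold 0 (folds 0-4)
python src/train_test.py -c la_lhunet_sgd -f 0
```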

## Inference
In the config file, set the `test_mode` variable to `true` and set `ckpt_path` to the checkpoint you want to test.
```bash
python src/train_test.py -c {dataset}_lhunet_sgd -f {fold_number}
```
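
As with BraTS, the `checkpoints` section of the corresponding config (here `la_lhunet_sgd.yaml`; the path below is illustrative) would be set to:

```yaml
checkpoints:
  continue_training: false
  test_mode: true # run inference instead of training
  ckpt_path: '/path/to/last.ckpt' # checkpoint to evaluate
  log_pictures: false
  save_nifty: true
```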
124 changes: 124 additions & 0 deletions BRATS_LA_Pancreas/configs/brats-2018/lhunet/brats-2018_lhunet_sgd.yaml
@@ -0,0 +1,124 @@
is3d: true
checkpoints:
  continue_training: false # continue training from a checkpoint (use it with ckpt_path)
  test_mode: false # test mode (use it with ckpt_path)
  ckpt_path: '/home/say26747/Desktop/best.ckpt'
  save_nifty: true # save NIfTI files
dataset:
  name: "brats2018_loader"
  input_size: [128,128,128]
  train:
    params:
      data_root: "/home/say26747/Desktop/datasets/Brats 2018" # path to the dataset
      normalization: true
      augmentation: true
      p: 0.5

  validation:
    params:
      data_root: "/home/say26747/Desktop/datasets/Brats 2018" # path to the dataset
      normalization: true
      augmentation: false

  test:
    params:
      data_root: "/home/say26747/Desktop/datasets/Brats 2018" # path to the dataset
      normalization: true
      augmentation: false

  test_split: 0.2
  validation_split: 0.1
data_loader:
  train:
    batch_size: 2
    shuffle: true
    num_workers: 12
    pin_memory: true
    persistent_workers: true
  validation:
    batch_size: 1
    shuffle: false
    num_workers: 12
    pin_memory: true
    persistent_workers: true
  test:
    batch_size: 1
    shuffle: false
    num_workers: 12
    pin_memory: false
    persistent_workers: true
training:
  optimizer:
    name: 'SGD'
    params:
      lr: 0.01
      momentum: 0.99
      weight_decay: 0.00003
      nesterov: true
  criterion:
    params:
      dice_weight: 0.5
      bce_weight: 0.5
  scheduler:
    name: 'CustomDecayLR'
    params:
      max_epochs: 1000
  epochs: 1000

model:
  name: 'lhUNET_SGD_DEV_test' # do not change the lhUNET name at the beginning
  params:
    spatial_shapes: [128,128,128]
    do_ds: false # Deep Supervision
    in_channels: 4
    out_channels: 4

    # encoder params
    cnn_kernel_sizes: [3,3]
    cnn_features: [12,16]
    cnn_strides: [2,2]
    cnn_maxpools: [true, true]
    cnn_dropouts: 0.0
    cnn_blocks: nn # n: resunet, d: deformconv, b: basicunet
    hyb_kernel_sizes: [3,3,3]
    hyb_features: [32,64,128]
    hyb_strides: [2,2,2]
    hyb_maxpools: [true,true,true]
    hyb_cnn_dropouts: 0.0
    hyb_tf_proj_sizes: [64,32,0]
    hyb_tf_repeats: [1,1,1]
    hyb_tf_num_heads: [4,4,8]
    hyb_tf_dropouts: 0.1
    hyb_cnn_blocks: nnn # n: resunet, d: deformconv, b: basicunet
    hyb_vit_blocks: SSC
    hyb_skip_mode: "cat" # "sum" or "cat"
    hyb_arch_mode: "residual" # sequential, residual, parallel, collective
    hyb_res_mode: "sum" # "sum" or "cat"

    # decoder params
    dec_hyb_tcv_kernel_sizes: [5,5,5]
    dec_cnn_tcv_kernel_sizes: [5,5]
    dec_cnn_blocks: null
    dec_tcv_bias: false
    dec_hyb_tcv_bias: false
    dec_hyb_kernel_sizes: null
    dec_hyb_features: null
    dec_hyb_cnn_dropouts: null
    dec_hyb_tf_proj_sizes: null
    dec_hyb_tf_repeats: null
    dec_hyb_tf_num_heads: null
    dec_hyb_tf_dropouts: null
    dec_cnn_kernel_sizes: null
    dec_cnn_features: null
    dec_cnn_dropouts: null
    dec_hyb_cnn_blocks: null
    dec_hyb_vit_blocks: null
    # dec_hyb_vit_sandwich: null
    dec_hyb_skip_mode: null
    dec_hyb_arch_mode: "collective" # sequential, residual, parallel, collective, sequential-lite
    dec_hyb_res_mode: null

use_sliding_window: true
sliding_window_params:
  overlap: 0.5
  mode: 'gaussian'
113 changes: 113 additions & 0 deletions BRATS_LA_Pancreas/configs/la/lhunet/la_lhunet_sgd.yaml
@@ -0,0 +1,113 @@
is3d: true
find_lr: false
checkpoints:
  continue_training: false # continue training from a checkpoint (use it with ckpt_path)
  test_mode: false # test mode (use it with ckpt_path)
  ckpt_path: '/home/say26747/Desktop/last.ckpt'
  log_pictures: false
  save_nifty: true

dataset:
  name: "LA_heart_loader"
  crop_size: [96,96,96]
  path_to_data: "/home/say26747/Desktop/datasets/Left Atrial" # path to the dataset

datatype: 'LA'

check_val_every_n_epoch: 10

data_loader:
  train:
    batch_size: 2
    shuffle: true
    num_workers: 12
    pin_memory: true
    persistent_workers: true
  validation:
    batch_size: 1
    shuffle: false
    num_workers: 12
    pin_memory: true
    persistent_workers: true
  test:
    batch_size: 1
    shuffle: false
    num_workers: 12
    pin_memory: false
    persistent_workers: true

training:
  optimizer:
    name: 'SGD'
    params:
      lr: 0.01
      momentum: 0.9
      weight_decay: 0.00003
      nesterov: true
  criterion:
    params:
      dice_weight: 1.0
      bce_weight: 1.0
  scheduler:
    name: 'CustomDecayLR'
    params:
      max_epochs: 250
  epochs: 250

model:
  name: 'lhUNET_SGD_LA' # do not change the lhUNET name at the beginning
  params: # the class arguments defined above
    spatial_shapes: [96, 96, 96]
    do_ds: false # Deep Supervision
    in_channels: 1
    out_channels: 2

    # encoder params
    cnn_kernel_sizes: [3,3]
    cnn_features: [12,16]
    cnn_strides: [2,2]
    cnn_maxpools: [true, true]
    cnn_dropouts: 0.0
    cnn_blocks: nn # n: resunet, d: deformconv, b: basicunet
    hyb_kernel_sizes: [3,3,3]
    hyb_features: [32,64,128]
    hyb_strides: [2,2,2]
    hyb_maxpools: [true,true,true]
    hyb_cnn_dropouts: 0.0
    hyb_tf_proj_sizes: [64,32,0]
    hyb_tf_repeats: [1,1,1]
    hyb_tf_num_heads: [4,4,8]
    hyb_tf_dropouts: 0.0
    hyb_cnn_blocks: nnn # n: resunet, d: deformconv, b: basicunet
    hyb_vit_blocks: SSC
    hyb_skip_mode: "cat" # "sum" or "cat"
    hyb_arch_mode: "residual" # sequential, residual, parallel, collective
    hyb_res_mode: "sum" # "sum" or "cat"

    # decoder params
    dec_hyb_tcv_kernel_sizes: [5,5,5]
    dec_cnn_tcv_kernel_sizes: [5,5]
    dec_cnn_blocks: null
    dec_tcv_bias: false
    dec_hyb_tcv_bias: false
    dec_hyb_kernel_sizes: null
    dec_hyb_features: null
    dec_hyb_cnn_dropouts: null
    dec_hyb_tf_proj_sizes: null
    dec_hyb_tf_repeats: null
    dec_hyb_tf_num_heads: null
    dec_hyb_tf_dropouts: null
    dec_cnn_kernel_sizes: null
    dec_cnn_features: null
    dec_cnn_dropouts: null
    dec_hyb_cnn_blocks: null
    dec_hyb_vit_blocks: null
    # dec_hyb_vit_sandwich: null
    dec_hyb_skip_mode: null
    dec_hyb_arch_mode: "collective" # sequential, residual, parallel, collective, sequential-lite
    dec_hyb_res_mode: null

use_sliding_window: true
sliding_window_params:
  overlap: 0.85
  mode: 'gaussian'