Downstream Fine-tuning
We will provide some information about the "hidden" downstream tasks that may have an impact on your pre-training. The downstream datasets can have multiple input channels; we repeat the input stem weights to match the number of input channels of the downstream task. To verify that your checkpoints can be loaded correctly, you can use this function of our segmentation downstream fine-tuning repository. Note that we do not load the decoder weights of ResEncL or the up_projection weights of PrimusM.
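The repository's exact loading code is not reproduced here, but the channel-repetition idea can be sketched in a few lines of numpy. The function name and the tile-then-crop rule below are our own assumptions, not the repository's implementation:

```python
import numpy as np

def repeat_stem_weights(stem_w: np.ndarray, n_channels: int) -> np.ndarray:
    """Tile a pretrained stem conv weight along the input-channel axis
    until it covers n_channels, then crop to exactly n_channels.

    stem_w has shape (out_c, in_c, kz, ky, kx), the usual layout for a
    3D convolution weight.
    """
    in_c = stem_w.shape[1]
    reps = -(-n_channels // in_c)  # ceiling division
    tiled = np.tile(stem_w, (1, reps, 1, 1, 1))
    return tiled[:, :n_channels]

# Example: a single-channel pretrained stem adapted to a 4-channel task
w = np.arange(8 * 1 * 27, dtype=float).reshape(8, 1, 3, 3, 3)
w4 = repeat_stem_weights(w, 4)  # shape (8, 4, 3, 3, 3)
```

A checkpoint-loading routine would apply this only to the first convolution and copy all remaining weights unchanged.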
Segmentation
- brain extraction if the target is located in the brain
- z-score normalization
- cubic 1mm target spacing
- [160,160,160] patch size
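The resampling and normalization steps above can be sketched as follows. This is a minimal illustration, not the repository's preprocessing code: it uses nearest-neighbour resampling as a stand-in for the higher-order interpolation a real pipeline would use, and the function names are our own:

```python
import numpy as np

def resample_to_1mm(volume: np.ndarray, spacing: tuple) -> np.ndarray:
    """Resample a 3D volume to cubic 1 mm spacing.

    `spacing` gives the original mm per voxel along each axis.
    Nearest-neighbour lookup keeps the sketch dependency-free; a real
    pipeline would use spline interpolation instead.
    """
    new_shape = [int(round(n * s)) for n, s in zip(volume.shape, spacing)]
    grids = [np.minimum((np.arange(m) / s).astype(int), n - 1)
             for m, s, n in zip(new_shape, spacing, volume.shape)]
    return volume[np.ix_(*grids)]

def zscore(volume: np.ndarray) -> np.ndarray:
    """Z-score normalization over the whole volume."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

# Example: a 2 mm isotropic volume becomes twice as large per axis
vol = np.random.default_rng(0).normal(size=(10, 10, 10))
norm = zscore(resample_to_1mm(vol, (2.0, 2.0, 2.0)))  # shape (20, 20, 20)
```

Brain extraction, where needed, would run before these steps.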
We use this repository for segmentation fine-tuning, with the trainers "PretrainedTrainer_150ep" and "PretrainedTrainer_Primus_150ep".
Classification
- cubic 1mm target spacing
- apply HD-BET to find the center of the brain
- brain extraction
- use a [160,160,160] patch around this center for training
- z-score normalization
- check out the preprocessing steps for the ABIDE dataset
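Cropping a fixed-size patch around the detected brain center can be sketched as below. This is an assumed implementation, not the repository's code; in particular, zero-padding at the boundary is our own choice:

```python
import numpy as np

def extract_center_patch(volume: np.ndarray, center: tuple,
                         patch_size: tuple = (160, 160, 160)) -> np.ndarray:
    """Crop a patch of `patch_size` centered on `center`, zero-padding
    wherever the patch extends past the volume boundary."""
    patch = np.zeros(patch_size, dtype=volume.dtype)
    src, dst = [], []
    for c, p, n in zip(center, patch_size, volume.shape):
        start = c - p // 2
        s0, s1 = max(start, 0), min(start + p, n)  # valid source range
        src.append(slice(s0, s1))
        dst.append(slice(s0 - start, s0 - start + (s1 - s0)))
    patch[tuple(dst)] = volume[tuple(src)]
    return patch

# Example: patch fully inside a 200^3 volume needs no padding
vol = np.ones((200, 200, 200))
patch = extract_center_patch(vol, (100, 100, 100))  # shape (160, 160, 160)
```

The patch would then be z-score normalized before being fed to the classifier.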
We use this repository for classification fine-tuning. The exact hyperparameters will vary between datasets, but will be similar to the example YAML file in the README. Publicly available datasets you could use to evaluate your pre-training: