## Evaluation
To evaluate a model or framework, the script `./scripts/evaluation/run_evaluation.py` is provided.
```bash
# for cpu
pixi run -e dev python scripts/evaluation/run_evaluation.py
# for gpu
pixi run -e cuda python scripts/evaluation/run_evaluation.py device="cuda:0"
```
Similar to the training scripts (refer to the config management section for details), it uses a Hydra config to set its parameters. Values can be adapted directly in the `.yaml` file (`./config/evaluation/evaluation_config.yaml`) or overridden on the command line.
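As a minimal sketch, the config file might look like the following; the exact layout and default values are assumptions, only the parameter names come from the table below:

```yaml
# ./config/evaluation/evaluation_config.yaml
# sketch only -- actual defaults and structure may differ
dataset_name_input: PROSTATEx   # ClearML dataset to evaluate on
transformation_type: rigid      # rigid | affine | elastic | joint
registration_method: dl         # dl | sitk | tto | hyreg | custom
interpolation_mode: bilinear    # nearest | bilinear | bspline
device: cpu                     # cpu | cuda:0
fast_run: false                 # evaluate on 2 volumes only
```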
The main parameters are the following:
| Parameter | Description | Options |
|---|---|---|
| `dataset_name_input` | Name of the ClearML dataset to evaluate on | `PROSTATEx`, `BAMBERG` |
| `transformation_type` | Type of registration | `rigid`, `affine`, `elastic`, `joint` |
| `registration_method` | Registration method | `dl`, `sitk`, `tto`, `hyreg`, `custom` |
| `interpolation_mode` | Type of interpolation to be used | `nearest`, `bilinear`, `bspline` |
| `device` | Hardware device to run the evaluation on | `cpu`, `cuda:0` |
| `fast_run` | Evaluate on 2 volumes only | `true`, `false` |
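Several parameters can be combined in a single call using standard Hydra override syntax. A sketch of such a call (the chosen values are illustrative only):

```bash
# hypothetical example: fast GPU run with affine SimpleITK registration
pixi run -e cuda python scripts/evaluation/run_evaluation.py \
    device="cuda:0" transformation_type="affine" registration_method="sitk" fast_run=true
```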
As can be seen in the config, each registration framework additionally exposes its own set of parameters, as sketched below.
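Such framework-specific settings are typically grouped per method in the config; the group names and parameters shown here are assumptions for illustration, not the actual keys:

```yaml
# hypothetical framework-specific blocks (names and values are illustrative only)
sitk:
  number_of_iterations: 100
  learning_rate: 1.0
tto:
  optimization_steps: 50
```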