Evaluation

To evaluate a model or framework, the script ./scripts/evaluation/run_evaluation.py is provided.

```shell
# for CPU
pixi run -e dev python scripts/run_evaluation.py

# for GPU
pixi run -e cuda python scripts/run_evaluation.py device="cuda:0"
```

Like the training scripts (see the config management section for details), it uses a Hydra config for parameter handling. Values can be edited directly in the .yaml file (./config/evaluation/evaluation_config.yaml) or overridden on the command line.
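As an illustration, such a config could look roughly as follows. This is a hypothetical sketch based on the parameters listed below; the actual keys and defaults are defined in evaluation_config.yaml:

```yaml
# Hypothetical sketch — consult evaluation_config.yaml for the real keys and defaults
dataset_name_input: PROSTATEx
transformation_type: rigid
registration_method: dl
interpolation_mode: bilinear
device: cpu
fast_run: false
```

Any of these values can be overridden on the command line in the same way as the `device="cuda:0"` example above, e.g. `fast_run=true`.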

The main parameters are the following:

| Parameter | Description | Options |
| --- | --- | --- |
| `dataset_name_input` | Name of the ClearML dataset to evaluate on | `PROSTATEx`, `BAMBERG` |
| `transformation_type` | Type of registration | `"rigid"`, `"affine"`, `"elastic"`, `"joint"` |
| `registration_method` | Registration method | `"dl"`, `"sitk"`, `"tto"`, `"hyreg"`, `"custom"` |
| `interpolation_mode` | Type of interpolation to be used | `"nearest"`, `"bilinear"`, `"bspline"` |
| `device` | Hardware device to run the evaluation on | `"cpu"`, `"cuda:0"` |
| `fast_run` | Evaluate on 2 volumes only | `"true"`, `"false"` |

As can be seen in the config, each registration framework has its own parameters that can be set.
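To give a feel for how such overrides compose, here is a minimal plain-Python sketch of how Hydra-style `key=value` (including dotted `group.key=value`) overrides merge into a nested config. This is an illustration of the mechanism only, not Hydra's actual implementation; the `sitk.num_iterations` key is a hypothetical example of a framework-specific parameter:

```python
def apply_override(config: dict, override: str) -> None:
    """Merge one Hydra-style 'a.b.c=value' override into a nested dict."""
    key, _, value = override.partition("=")
    parts = key.split(".")
    node = config
    for part in parts[:-1]:
        # Create intermediate dicts for dotted keys as needed
        node = node.setdefault(part, {})
    # Values arrive as strings; Hydra additionally converts types
    node[parts[-1]] = value

# Hypothetical defaults mirroring evaluation_config.yaml
config = {
    "dataset_name_input": "PROSTATEx",
    "registration_method": "dl",
    "device": "cpu",
}

for ov in ["device=cuda:0", "registration_method=sitk", "sitk.num_iterations=200"]:
    apply_override(config, ov)

print(config["device"])              # cuda:0
print(config["registration_method"]) # sitk
```

Note that a dotted override such as `sitk.num_iterations=200` only makes sense when `registration_method` selects that framework, since each framework's parameter group is separate in the config.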