The data in this challenge is provided in .nii.gz/.mha format, with paired PSMA and FDG images for each case.
For each case, a subdirectory for each of the two tracers contains the relevant image data and annotations (CT, PET in units of SUV, and Total Tumor Burden). The threshold used for contouring each PET series is given in the threshold.json file; only voxels above this value should be considered for labelling of disease. For PSMA, all cases use the same threshold value of 3, while FDG uses a variable liver-based value, typically ranging from 2.5 to 5. We also provide the output of TotalSegmentator on the CT, as well as a rigid registration parameter file that may be used to coarsely align the PSMA and FDG images. To be eligible for submission, no other training data or pre-trained models may be used in this challenge.
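To make the thresholding rule concrete, the sketch below applies a per-series SUV threshold to a PET volume held as a NumPy array. The array and values are synthetic; in practice the volume would be loaded from PET.nii.gz and the threshold read from threshold.json, whose exact schema is not specified here.

```python
import numpy as np

def candidate_disease_mask(pet_suv: np.ndarray, threshold: float) -> np.ndarray:
    """Return a boolean mask of voxels eligible for disease labelling.

    Only voxels with SUV strictly above the per-series threshold are
    candidates; everything else must be background in the TTB label.
    """
    return pet_suv > threshold

# Synthetic example: a tiny "PET" volume with two non-zero voxels.
pet = np.zeros((4, 4, 4), dtype=np.float32)
pet[1, 1, 1] = 4.2   # above the PSMA threshold of 3
pet[2, 2, 2] = 2.0   # below threshold, ignored

mask = candidate_disease_mask(pet, threshold=3.0)
print(int(mask.sum()))  # → 1
```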
Participants may wish to investigate deformable image registration for better lesion-level alignment, which may improve the performance of a multi-domain approach. Validating a non-rigid registration method that could be reliably applied to all cases was beyond the scope of our data preparation; however, we can provide some template code for efficient registration in either SimpleITK or Elastix. Be aware that there is a 30-minute wall-time limit for evaluation, so ensure that all inference as well as pre-/post-processing operations can complete in a modest time frame. This is the maximum time to perform any pre-processing as well as segmentation of both PSMA and FDG PET/CT image sets for a case.
data structure:
train_0001
    PSMA
        CT.nii.gz
        PET.nii.gz
        TTB.nii.gz
        threshold.json
        totseg_24.nii.gz
        rigid.tfm - (FDG to PSMA)
    FDG
        CT.nii.gz
        PET.nii.gz
        TTB.nii.gz
        threshold.json
        totseg_24.nii.gz
        rigid.tfm - (PSMA to FDG)
train_0002
train_0003
...
At time of evaluation, data will be provided at the case level (e.g. train_0001 with PSMA and FDG subdirectories), with the task of generating the output TTB label for each tracer. Template Docker evaluation code is available here, with basic instructions on how to insert your algorithm method and package it for submission.
Note: there are some differences in the file structure when evaluating in the submission Docker environment. Fundamentally the available data is the same, but the file naming and, in particular, the format of the rigid registration data differ. The most straightforward way to adapt your algorithm is likely to insert your updated inference function(s) into interf0_handler() in our example Dockerfile/algorithm on GitHub. This provides all of the image and label data in SimpleITK format, along with the appropriate threshold (float) and rigid registration (SimpleITK Euler 3D transform) objects.