ImageCLEFmed Tuberculosis



Welcome to the 3rd edition of the Tuberculosis Task!

Tuberculosis (TB) is a bacterial infection caused by the germ Mycobacterium tuberculosis. About 130 years after its discovery, the disease remains a persistent threat and a leading cause of death worldwide according to the WHO. The bacterium usually attacks the lungs, but it can also damage other parts of the body. TB can generally be cured with antibiotics; however, different types of TB require different treatments, which makes detecting the TB type and evaluating the severity stage two important tasks.

Lessons learned:

  • In the first and second editions of this task, held at ImageCLEF 2017 and ImageCLEF 2018, participants had to detect multi-drug resistant patients (MDR subtask) and to classify the TB type (TBT subtask), both based only on the CT image. After two editions we concluded that the MDR subtask was not feasible based on the image alone. In the TBT subtask, the 2018 classification results improved slightly over 2017, but not enough considering the amount of extra data provided in the 2018 edition, both in terms of new images and meta-data.
  • On the other hand, most participants obtained good results in the severity scoring (SVR) subtask introduced last year. We therefore decided to extend it this year.
  • From a medical point of view, the three subtasks proposed previously had limited utility. The MDR subtask proved infeasible, and the TBT and SVR subtasks are tasks that expert radiologists can perform relatively quickly. This encouraged us to add a new subtask based on providing an automatic report for each patient, an outcome that can have a major impact on clinical routines.
  • Finally, in previous editions each subtask required a different dataset. In this edition, both proposed subtasks share the same dataset.


  • 08.11.2018: Website goes live
  • 23.11.2018: Registration open at CrowdAI
  • 17.12.2018: Training data released at CrowdAI

Task description

Subtask #1: SVR - Severity scoring

This subtask is aimed at assessing the TB severity score, a cumulative score of the severity of a TB case assigned by a medical doctor. Originally, the score varied from 1 ("critical/very bad") to 5 ("very good"). When scoring, the medical doctors considered many factors, such as the pattern of lesions, results of microbiological tests, duration of treatment and patient's age, among others. The goal of this subtask is to assess the severity based on the CT image and some additional meta-data, including disability, relapse, comorbidity, bacillary and smoking, among others. The original severity score is included as training meta-data, but the final score that participants have to assess is reduced to a binary category: "LOW" (scores 4 and 5) and "HIGH" (scores 1, 2 and 3).
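The reduction from the original five-point score to the binary target can be sketched in Python as follows (the function name is ours, not part of the task):

```python
def severity_to_binary(score: int) -> str:
    """Map the original 1-5 severity score to the binary target class.

    Scores 1-3 map to "HIGH" severity and scores 4-5 to "LOW",
    as defined in the subtask description.
    """
    if score not in (1, 2, 3, 4, 5):
        raise ValueError("severity score must be an integer from 1 to 5")
    return "HIGH" if score <= 3 else "LOW"
```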

Subtask #2: CTR - CT report

In this subtask the participants have to generate an automatic report based on the CT image.
This report should include the following information in binary form (0 or 1): left lung affected, right lung affected, presence of calcifications, presence of caverns, pleurisy, and lung capacity decrease.


In this edition, both subtasks (SVR and CTR) use the same dataset, containing 335 chest CT scans of TB patients along with a set of clinically relevant meta-data: 218 patients are used for training and 117 for testing. The selected meta-data includes the following binary measures: disability, relapse, symptoms of TB, comorbidity, bacillary, drug resistance, higher education, ex-prisoner, alcoholic, smoking and severity.

For all patients we provide 3D CT images with a slice size of 512×512 pixels and a number of slices varying from about 50 to 400. All CT images are stored in the NIFTI file format with the .nii.gz file extension (g-zipped .nii files). This format stores raw voxel intensities in Hounsfield units (HU) as well as the corresponding image meta-data, such as image dimensions, voxel size in physical units and slice thickness. The freely available tool "VV" can be used for viewing the image files. Various tools are available for reading and writing NIFTI files, among them the load_nii and save_nii functions for Matlab and the Niftilib library for C, Java, Matlab and Python.

Moreover, for all patients we provide automatically extracted masks of the lungs. This material can be downloaded together with the patients' CT images. The details of this segmentation can be found here.
If you use the provided masks in your experiments, please refer to the "Citations" section at the end of this page for the appropriate citation of this lung segmentation technique.

Remarks on the automatic lung segmentation:

The segmentations were screened based on statistics of the number of lungs found and the size ratio between the right and left lung. Only segmentations with anomalies in these statistics were inspected visually. For the cases with unsatisfactory segmentations, the segmentation code was adapted. After this procedure, all patients that had shown anomalies presented a satisfactory mask.

Evaluation methodology

Subtask #1: SVR - Severity scoring

This task is evaluated as a binary classification problem, using measures such as the Area Under the ROC Curve (AUC) and accuracy.
Techniques are ranked first by AUC and then by accuracy.
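A sketch of these two measures using scikit-learn; the 0.5 threshold for turning probabilities into accuracy predictions is our assumption, the official procedure being defined by the organizers:

```python
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate_svr(y_true, y_prob, threshold=0.5):
    """AUC and accuracy for severity predictions.

    y_true: 1 for "HIGH" severity, 0 for "LOW";
    y_prob: predicted probability of "HIGH".
    """
    auc = roc_auc_score(y_true, y_prob)
    acc = accuracy_score(y_true, [1 if p >= threshold else 0 for p in y_prob])
    return auc, acc
```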

Subtask #2: CTR - CT report

This task is considered a multi-binary classification problem (6 binary findings). Again, measures including AUC and accuracy are used to evaluate the task.
The ranking is done first by average AUC and then by minimum AUC (both computed over the 6 CT findings).
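The two ranking measures can be computed per finding and then aggregated, e.g. with scikit-learn (a sketch; the function name is ours):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_ctr(y_true, y_prob):
    """Mean and minimum AUC over the CT findings.

    y_true, y_prob: arrays of shape (n_patients, n_findings),
    one column per binary finding, in the order of the task description.
    """
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    aucs = [roc_auc_score(y_true[:, j], y_prob[:, j])
            for j in range(y_true.shape[1])]
    return float(np.mean(aucs)), float(np.min(aucs))
```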

Preliminary Schedule

  • 05.11.2018: Registration opens for all ImageCLEF tasks (until 26.04.2019)
  • 17.12.2018: Training data released
  • 18.03.2019: Test data release starts
  • 01.05.2019: Deadline for submitting the participants runs
  • 10.05.2019: Release of the processed results by the task organizers
  • 24.05.2019: Deadline for submission of working notes papers by the participants
  • 14.06.2019: Notification of acceptance of the working notes papers
  • 29.06.2019: Camera-ready working notes papers
  • 09-12.09.2019: CLEF 2019, Lugano, Switzerland

Participant registration

CrowdAI is shutting down and moving to AIcrowd. Please temporarily ignore the information below this paragraph. During the transition phase (until all challenges are migrated) we will handle the datasets and the End User Agreement (EUA) ourselves. To get access to the dataset, please download the EUA at the bottom of this page and send a filled-in and signed version to henning.mueller[at-character] Please refer to the ImageCLEF registration instructions for examples of how to fill in the EUA.

Please refer to the general ImageCLEF registration instructions.

Submission instructions

Please note that each group is allowed a maximum of 10 runs per subtask.

Subtask #1: SVR - Severity scoring

Submit a plain text file named with the prefix SVR (e.g. SVRfree-text.txt) with the following format:

  • <Patient-ID>,<Probability of "HIGH" severity>


  • CTR_TST_001,0.93
  • CTR_TST_002,0.54
  • CTR_TST_003,0.1
  • CTR_TST_004,0.245
  • CTR_TST_005,0.7
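A run file in this format can be written with a few lines of Python (the predictions and the file name below are hypothetical):

```python
# Hypothetical predictions: patient ID -> probability of "HIGH" severity.
predictions = {
    "CTR_TST_001": 0.93,
    "CTR_TST_002": 0.54,
}

# One "<Patient-ID>,<probability>" pair per line, dot as decimal point.
with open("SVRexample-run.txt", "w") as f:
    for patient_id, prob in sorted(predictions.items()):
        f.write(f"{patient_id},{prob}\n")
```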

Subtask #2: CTR - CT report

Submit a plain text file named with the prefix CTR (e.g. CTRfree-text.txt) with the following format:

  • <Patient-ID>,<Probability of "left lung affected">,<Probability of "right lung affected">,<Probability of "presence of calcifications">,<Probability of "presence of caverns">,<Probability of "pleurisy">,<Probability of "lung capacity decrease">


  • CTR_TST_001,0.93,0.2,0.655,0.01,0.3645,0.98
  • CTR_TST_002,0.54,0,1,0.25,0.2,0.598
  • CTR_TST_003,0.1,0.50,0.0,1.0,0.999,0.46
  • CTR_TST_004,0.245,0.12,0.23,0.34,0.45,0.68
  • CTR_TST_005,0.7,0.1,0,0,0,0

You need to respect the following constraints for both tasks:

  • Patient-IDs must be part of the predefined Patient-IDs
  • All patient-IDs must be present in the runfiles
  • Only use numbers between 0 and 1 for the probabilities. Use the dot (.) as a decimal point (no commas accepted)
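These constraints can be checked locally before submitting; a sketch (the helper is ours, not an official validator):

```python
def validate_run(lines, valid_ids, n_values=1):
    """Check run-file lines against the constraints above.

    lines: lines of the run file; valid_ids: the predefined patient IDs;
    n_values: 1 for the SVR subtask, 6 for CTR.
    """
    seen = set()
    for line in lines:
        patient_id, *probs = line.strip().split(",")
        assert patient_id in valid_ids, f"unknown patient ID: {patient_id}"
        assert len(probs) == n_values, f"expected {n_values} probabilities"
        for p in probs:
            value = float(p)  # rejects comma decimal separators
            assert 0.0 <= value <= 1.0, "probabilities must be in [0, 1]"
        seen.add(patient_id)
    assert seen == set(valid_ids), "every patient ID must be present"
```

For a CTR run, the same check applies with n_values=6.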


DISCLAIMER: The results presented below have not yet been analyzed in depth and are shown "as is". The results are sorted by descending AUC for the SVR subtask and by descending mean AUC for the CTR subtask.

Subtask #1: SVR - Severity scoring

Group name Run AUC Accuracy Rank
UIIP_BioMed SRV_run1_linear.txt 0.7877 0.7179 1
UIIP subm_SVR_Severity 0.7754 0.7179 2
HHU SVR_HHU_DBS2_run01.txt 0.7695 0.6923 3
HHU SVR_HHU_DBS2_run02.txt 0.7660 0.6838 4
UIIP_BioMed SRV_run2_less_features.txt 0.7636 0.7350 5
CompElecEngCU SVR_mlp-text.txt 0.7629 0.6581 6
San Diego VA HCS/UCSD SVR_From_Meta_Report1c.csv 0.7214 0.6838 7
San Diego VA HCS/UCSD SVR_From_Meta_Report1c.csv 0.7214 0.6838 8
MedGIFT SVR_SVM.txt 0.7196 0.6410 9
San Diego VA HCS/UCSD SVR_Meta_Ensemble.txt 0.7123 0.6667 10
San Diego VA HCS/UCSD SVR_LAstEnsembleOfEnsemblesReportCl.csv 0.7038 0.6581 11
UniversityAlicante SVR-SVM-axis-mode-4.txt 0.7013 0.7009 12
UniversityAlicante SVR-SVM-axis-mode-8.txt 0.7013 0.7009 13
UniversityAlicante SVR-MC-4.txt 0.7003 0.7009 14
UniversityAlicante SVR-MC-8.txt 0.7003 0.7009 15
San Diego VA HCS/UCSD SVRMetadataNN1_UTF8.txt 0.6956 0.6325 16
UIIP subm_SVR_Severity 0.6941 0.6496 17
UniversityAlicante SVR-LDA-axis-mode-4.txt 0.6842 0.6838 18
UniversityAlicante SVR-LDA-axis-mode-8.txt 0.6842 0.6838 19
UniversityAlicante SVR-SVM-axis-svm-4.txt 0.6761 0.6752 20
UniversityAlicante SVR-SVM-axis-svm-8.txt 0.6761 0.6752 21
MostaganemFSEI SVR_FSEI_run3_resnet_50_55.csv 0.6510 0.6154 22
UniversityAlicante SVR-LDA-axis-svm-4.txt 0.6499 0.6496 23
UniversityAlicante SVR-LDA-axis-svm-8.txt 0.6499 0.6496 24
MostaganemFSEI SVR_run8_lstm_5_55_sD_lungnet.csv 0.6475 0.6068 25
MedGIFT SVR_GNN_nodeCentralFeats_sc.csv 0.6457 0.6239 26
HHU run_6.csv 0.6393 0.5812 27
San Diego VA HCS/UCSD SVT_Wisdom.txt 0.6270 0.6581 28
SSN College of Engineering SVRtest-model1.txt 0.6264 0.6068 29
HHU run_8.csv 0.6258 0.6068 30
SSN College of Engineering SVRtest-model2.txt 0.6133 0.5385 31
University of Asia Pacific SVRfree-text.txt 0.6111 0.6154 32
MostaganemFSEI SVR_FSEI_run2_lungnet_train80_10slices.csv 0.6103 0.5983 33
HHU run_4.csv 0.6070 0.5641 34
SSN College of Engineering SVRtest-model3.txt 0.6067 0.5726 35
HHU run_7.csv 0.6050 0.5556 36
University of Asia Pacific SVRfree-text.txt 0.5704 0.5385 37
FIIAugt SVRab.txt 0.5692 0.5556 38
HHU run_3.csv 0.5692 0.5385 39
MostaganemFSEI SVR_FSEI_run6_fuson_resnet_lungnet_10slices.csv 0.5677 0.5128 40
MedGIFT SVR_GNN_node2vec.csv 0.5496 0.5726 41
MedGIFT SVR_GNN_nodeCentralFeats.csv 0.5496 0.4701 42
SSN College of Engineering SVRtest-model4.txt 0.5446 0.5299 43
HHU run_5.csv 0.5419 0.5470 44
HHU SVRbaseline_txt.txt 0.5103 0.4872 45
MostaganemFSEI SVR_FSEI_run4_semDesc_SVM_10slices.csv 0.5029 0.5043 46
MedGIFT SVR_GNN_node2vec_pca.csv 0.4933 0.4615 47
MostaganemFSEI SVR_run7_inception_resnet_v2_small_54_slices_70_30.csv 0.4933 0.4701 48
MostaganemFSEI SVR_FSEI_run5_contextDesc_RF_10slices.csv 0.4783 0.4957 49
MostaganemFSEI SVR_fsei_run0_resnet50_modelA.csv 0.4698 0.4957 50
MostaganemFSEI SVR_FSEI_run9_oneSVM_desSem_10slices_highclass.csv 0.4636 0.5214 51
HHU run_2.csv 0.4452 0.4530 52
MedGIFT SVR_GNN_node2vec_pca_sc.csv 0.4076 0.4274 53
MostaganemFSEI SVR_FSEI_run10_RandomForest_semDesc_10slices_removingOutilers.csv 0.3475 0.4615 54

Subtask #2: CTR - CT report

Group Name Run Mean AUC Min AUC Rank
UIIP_BioMed CTR_run3_pleurisy_as_SegmDiff.txt 0.7968 0.6860 1
UIIP_BioMed CTR_run2_2binary.txt 0.7953 0.6766 2
UIIP_BioMed CTR_run1_multilabel.txt 0.7812 0.6766 3
CompElecEngCU CTRcnn.txt 0.7066 0.5739 4
MedGIFT CTR_SVM.txt 0.6795 0.5626 5
San Diego VA HCS/UCSD CTR_Cor_32_montage.txt 0.6631 0.5541 6
HHU CTR_HHU_DBS2_run01.txt 0.6591 0.5159 7
HHU CTR_HHU_DBS2_run02.txt 0.6560 0.5159 8
San Diego VA HCS/UCSD CTR_ReportsubmissionEnsemble2.csv 0.6532 0.5904 9
UIIP subm_CT_Report 0.6464 0.4099 10
HHU CTR_HHU_DBS2_run03.txt 0.6429 0.4187 11
HHU CTR_run_1.csv 0.6315 0.5161 12
HHU CTR_run_2.csv 0.6315 0.5161 13
MostaganemFSEI CTR_FSEI_run1_lungnet_50_10slices.csv 0.6273 0.4877 14
UniversityAlicante svm_axis_svm.txt 0.6190 0.5366 15
UniversityAlicante mc.txt 0.6104 0.5250 16
MostaganemFSEI CTR_FSEI_lungNetA_54slices_70.csv 0.6061 0.4471 17
UniversityAlicante svm_axis_mode.txt 0.6043 0.5340 18
PwC CTR_results_meta.txt 0.6002 0.4724 19
UniversityAlicante lda_axis_mode.txt 0.5975 0.4860 20
San Diego VA HCS/UCSD TB_ReportsubmissionLimited1.csv 0.5811 0.4111 21
UniversityAlicante lda_axis_svm.txt 0.5787 0.4851 22
HHU CTR_run_3.txt.csv 0.5610 0.4477 23
PwC CTR_results.txt 0.5543 0.4275 24
LIST predictionCTReportSVC.txt 0.5523 0.4317 25
LIST predictionModelSimple.txt 0.5510 0.4709 26
MedGIFT CTR_GNN_nodeCentralFeats_sc.csv 0.5381 0.4299 27
LIST predictionCTReportLinearSVC.txt 0.5321 0.4672 28
MedGIFT CTR_GNN_node2vec_pca_sc.csv 0.5261 0.4435 29
LIST predictionModelAugmented.txt 0.5228 0.4086 30
MedGIFT CTR_GNN_nodeCentralFeats.csv 0.5104 0.4140 31
MostaganemFSEI CTR_FSEI_run5_SVM_semDesc_10slices.csv 0.5064 0.4134 32
MedGIFT CTR_GNN_node2vec_pca.csv 0.5016 0.2546 33
MostaganemFSEI CTR_FSEI_run4_SVMone_semDesc_10slices_negClass.csv 0.4937 0.4461 34
MostaganemFSEI CTR_FSEI_run3_SVMone_semDesc_10slices_posClass.csv 0.4877 0.3897 35


  • When referring to the ImageCLEFtuberculosis 2019 task (general goals, general results, etc.), please cite the following publication (also referred to as the ImageCLEF tuberculosis task overview):
    • Yashin Dicente Cid, Vitali Liauchuk, Dzmitri Klimuk, Aleh Tarasau, Vassili Kovalev, Henning Müller, Overview of ImageCLEFtuberculosis 2019 - Automatic CT-based Report Generation and Tuberculosis Severity Assessment, CLEF working notes, CEUR, 2019.
    • BibTex:
        @inproceedings{ImageCLEFTBoverview2019,
        author = {Dicente Cid, Yashin and Liauchuk, Vitali and Klimuk, Dzmitri and Tarasau, Aleh and Kovalev, Vassili and M\"uller, Henning},
        title = {Overview of {ImageCLEFtuberculosis} 2019 - Automatic CT-based Report Generation and Tuberculosis Severity Assessment},
        booktitle = {CLEF2019 Working Notes},
        series = {{CEUR} Workshop Proceedings},
        year = {2019},
        volume = {},
        publisher = {CEUR-WS.org},
        pages = {},
        month = {September 9-12},
        address = {Lugano, Switzerland}
        }
    • When referring to the ImageCLEF 2019 lab general goals, general results, etc. please cite the following publication which will be published by September 2019 (also referred to as ImageCLEF general overview):
      • Bogdan Ionescu, Henning Müller, Renaud Péteri, Yashin Dicente Cid, Vitali Liauchuk, Vassili Kovalev, Dzmitri Klimuk, Aleh Tarasau, Asma Ben Abacha, Sadid A. Hasan, Vivek Datla, Joey Liu, Dina Demner-Fushman, Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Minh-Triet Tran, Mathias Lux, Cathal Gurrin, Obioma Pelka, Christoph M. Friedrich, Alba García Seco de Herrera, Narciso Garcia, Ergina Kavallieratou, Carlos Roberto del Blanco, Carlos Cuevas Rodríguez, Nikos Vasillopoulos, Konstantinos Karampidis, Jon Chamberlain, Adrian Clark, Antonio Campello, ImageCLEF 2019: Multimedia Retrieval in Medicine, Lifelogging, Security and Nature In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the 10th International Conference of the CLEF Association (CLEF 2019), Lugano, Switzerland, LNCS Lecture Notes in Computer Science, Springer (September 9-12 2019)
      • BibTex:
          @inproceedings{ImageCLEFoverview2019,
          author = {Bogdan Ionescu and Henning M\"uller and Renaud P\'{e}teri and Yashin Dicente Cid and Vitali Liauchuk and Vassili Kovalev and Dzmitri Klimuk and Aleh Tarasau and Asma Ben Abacha and Sadid A. Hasan and Vivek Datla and Joey Liu and Dina Demner-Fushman and Duc-Tien Dang-Nguyen and Luca Piras and Michael Riegler and Minh-Triet Tran and Mathias Lux and Cathal Gurrin and Obioma Pelka and Christoph M. Friedrich and Alba Garc\'ia Seco de Herrera and Narciso Garcia and Ergina Kavallieratou and Carlos Roberto del Blanco and Carlos Cuevas Rodr\'{i}guez and Nikos Vasillopoulos and Konstantinos Karampidis and Jon Chamberlain and Adrian Clark and Antonio Campello},
          title = {{ImageCLEF 2019}: Multimedia Retrieval in Medicine, Lifelogging, Security and Nature},
          booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction},
          series = {Proceedings of the 10th International Conference of the CLEF Association (CLEF 2019)},
          year = {2019},
          volume = {2380},
          publisher = {{LNCS} Lecture Notes in Computer Science, Springer},
          pages = {},
          month = {September 9-12},
          address = {Lugano, Switzerland}
          }
    • When using the provided masks of the lungs, please cite the following publication:
      • Yashin Dicente Cid, Oscar A. Jiménez-del-Toro, Adrien Depeursinge, and Henning Müller, Efficient and fully automatic segmentation of the lungs in CT volumes. In: Goksel, O., et al. (eds.) Proceedings of the VISCERAL Challenge at ISBI. No. 1390 in CEUR Workshop Proceedings (Apr 2015)
      • BibTex:
          @inproceedings{DicenteLungSegmentation2015,
          Title = {Efficient and fully automatic segmentation of the lungs in CT volumes},
          Booktitle = {Proceedings of the {VISCERAL} Anatomy Grand Challenge at the 2015 {IEEE ISBI}},
          Author = {Dicente Cid, Yashin and Jim{\'{e}}nez del Toro, Oscar Alfonso and Depeursinge, Adrien and M{\"{u}}ller, Henning},
          Editor = {Goksel, Orcun and Jim{\'{e}}nez del Toro, Oscar Alfonso and Foncubierta-Rodr{\'{\i}}guez, Antonio and M{\"{u}}ller, Henning},
          Keywords = {CAD, lung segmentation, visceral-project},
          Month = may,
          Series = {CEUR Workshop Proceedings},
          Year = {2015},
          Pages = {31-35},
          Publisher = {CEUR-WS},
          Location = {New York, USA}
          }
Organizers

      • Yashin Dicente Cid <yashin.dicente(at)>, University of Applied Sciences Western Switzerland, Sierre, Switzerland
      • Vitali Liauchuk <vitali.liauchuk(at)>, Institute for Informatics, Minsk, Belarus
      • Vassili Kovalev <vassili.kovalev(at)>, Institute for Informatics, Minsk, Belarus
      • Henning Müller <henning.mueller(at)>, University of Applied Sciences Western Switzerland, Sierre, Switzerland