Liver CT Annotation

Welcome to the Liver CT Annotation challenge!

News

  • 06.11.2014: The web page for the challenge is open.
  • 23.01.2015: The development data is released.
  • 13.03.2015: The test data is available.
  • 13.03.2015: A new version of the ontology (Lico-v011) is available here.
  • 24.03.2015: For participants interested in using Lico-v011: training data generated by this version is now available (TrainingData2015-Licov011.zip).
  • 21.05.2015: The processed submission results are released.

Schedule

  • 01.11.2014: Registration opens. (Register here)
  • 01.01.2015: Development data is released (instructions to access the data).
  • 15.03.2015: Test data is released and the submission system opens.
  • 01.05.2015: Submission system closes.
  • 21.05.2015: Processed submission results are released. (See the scores)
  • 07.06.2015: Deadline for submission of working notes papers by the participants.
  • 17.06.2015: Notification of acceptance of the working notes papers.
  • 08.–11.09.2015: CLEF 2015, Toulouse, France.

Results

Liver CT Annotation Challenge

Groups
 #  Group         Completeness  Accuracy  Total Score
 1  Nedjar imane  0.99          0.84      0.91

Runs
 #  Group         Total Score  Run Name
 1  Nedjar imane  0.904        TestCase_x
 2  Nedjar imane  0.902        M2TestCase_x
 3  Nedjar imane  0.910        M3TestCase_x

Ratio of correct answers per annotation group
 Annotation group  Nedjar imane_1  Nedjar imane_2  Nedjar imane_3
 Liver             0.925           0.925           0.925
 Vessel            1.000           1.000           1.000
 LesionArea        0.730           0.746           0.753
 LesionLesion      0.470           0.470           0.480
 LesionComponent   0.870           0.844           0.889

Task overview

Medical, and specifically radiological, databases present challenges due to the exponential growth in data volumes. Radiological images carry a rich set of associated metadata, and subtle differences between images can be critically important. Domain-specific structured reporting in radiology helps to accurately capture the interpretation of images and to distinguish such critical differences. It would improve the clinical workflow by standardizing reports, and it would boost the performance of search and retrieval from radiological databases for purposes such as comparative diagnosis and medical education.

This task focuses on computer-aided structured reporting. The aim is to improve computer-aided automatic annotation of liver CT volumes by filling in a structured radiology report. More specifically, participants are provided with 3D segmented CT images of the liver (with VOIs marking the lesion(s) of interest) and non-radiological metadata, which they are expected to use to answer a set of multiple-choice questions about radiological observations. The observations are described using an ontology* of liver cases, which specifies the concepts and relations of a liver patient case. 50 training cases with complete reports (annotations) and 10 test cases will be provided.

* The ontology of liver cases (LiCO) includes limited metadata about the cases, in addition to the liver CT annotations described by the ONLIRA ontology (N. Kokciyan et al., "Semantic Description of Liver CT Images: An Ontological Approach," IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 4, pp. 1363–1369, July 2014). ONLIRA is open source and is available here.


Registering for the task and accessing the data

Please register by following the instructions found on the main ImageCLEF 2015 webpage.

Following the approval of their registration for the task, participants will be given access rights to download the data files, together with README_liverCT.pdf, which explains the contents of the data files.

Briefly, each data file contains the following (a loading sketch follows the list):

  • CT: A 3D matrix of real-valued cropped CT data. The volume covers the liver only.
  • LiverMask: A 3D binary matrix of the same size as CT, marking the liver voxels.
  • Lesion_VOI: A 6-element vector holding the coordinates of two diagonal corners of the lesion’s bounding box, given as a comma-separated string.
  • RDF file: Data generated using the LiCO ontology.
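
For orientation, the sketch below shows how such a case file might be inspected in Python with SciPy, assuming the variables are stored in a standard MATLAB .mat file. The variable names (CT, LiverMask, Lesion_VOI) come from the list above; the file name TestCase_1.mat is a hypothetical example, and the exact storage layout is defined in README_liverCT.pdf.

    # Minimal sketch for inspecting one case file (assumes MATLAB .mat storage;
    # variable names follow the list above, file name is hypothetical).
    import numpy as np
    from scipy.io import loadmat

    case = loadmat("TestCase_1.mat")  # hypothetical file name

    ct = case["CT"]                 # 3D array of real-valued cropped CT data
    liver_mask = case["LiverMask"]  # 3D binary array, same size as CT

    print("CT volume shape:", ct.shape)
    print("Liver voxels:", int(np.count_nonzero(liver_mask)))

    # Lesion_VOI is described above as a comma-separated string of six
    # coordinates: two diagonal corners (x1, y1, z1) and (x2, y2, z2).
    voi_raw = np.atleast_1d(case["Lesion_VOI"]).flat[0]
    corners = [float(v) for v in str(voi_raw).split(",")]
    print("Lesion bounding box corners:", corners[:3], corners[3:])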

NOTE: The data was collected as part of the CaReRa project using a web application for manual annotation. Participants may log in to the CaReRa system in demo mode to browse the data-gathering forms. The project webpage is accessible at:
www.vavlab.ee.boun.edu.tr --> Research --> CaReRa


Submission instructions

The submissions will be received through the ImageCLEF 2015 system: go to "Runs", then "Submit run", and select the track "ImageCLEF Liver CT Annotation".

For each test case, participants are asked to submit a MATLAB data file named “TestCase_X_UsE.mat”, which includes ONLY the “UsE” variable for the associated test case.
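
As an illustration only, a submission file of this shape could be written from Python with SciPy, which stores a dict as a MATLAB struct. The “UsE” variable name and the file-naming pattern come from the instructions above; the content of UsE is a placeholder, since its exact structure is specified in README_liverCT.pdf.

    # Minimal sketch: write one submission file containing ONLY the "UsE"
    # variable (contents are placeholders; the real structure of UsE is
    # specified in README_liverCT.pdf).
    from scipy.io import savemat

    case_id = 1                     # hypothetical test-case number
    UsE = {"answers": ["..."]}      # placeholder; follow the README format

    # File-name pattern from the instructions: TestCase_X_UsE.mat
    savemat(f"TestCase_{case_id}_UsE.mat", {"UsE": UsE})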

Evaluation methodology

The evaluation will be based on the completeness and accuracy of the computer annotations with reference to the manual annotations of the test dataset. Completeness is the percentage of questions that could be automatically answered; accuracy is the percentage of correct annotations. When a question has multiple (manual/reference) answers, having one of them automatically selected by the participant’s system is sufficient to mark it as a correct annotation.
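
To make the two measures concrete, here is a small sketch of how they could be computed. The official scoring script is not reproduced here, so this helper is an assumption based on the description above; in particular, computing accuracy over the answered questions only, and taking Total Score as the mean of the two (which is consistent with the Groups results table), are assumptions.

    # Hedged sketch of the evaluation measures described above.
    # `predictions` maps a question id to the system's answer (None = unanswered);
    # `references` maps a question id to the set of acceptable reference answers.
    def score(predictions, references):
        answered = [q for q, a in predictions.items() if a is not None]
        completeness = len(answered) / len(references)
        # A prediction is correct if it matches ANY of the reference answers.
        correct = sum(1 for q in answered if predictions[q] in references[q])
        accuracy = correct / len(answered) if answered else 0.0
        # Assumption: Total Score is the mean of completeness and accuracy.
        return completeness, accuracy, (completeness + accuracy) / 2

    # Hypothetical question ids and answers, for illustration only.
    preds = {"q1": "hyperdense", "q2": None, "q3": "single"}
    refs = {"q1": {"hyperdense", "dense"}, "q2": {"smooth"}, "q3": {"single"}}
    print(score(preds, refs))  # -> (0.666..., 1.0, 0.833...)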


Organizers

Contact Person

  • Neda B. Marvasti, PhD student, Bogazici University, EE Dept., Istanbul, Turkey. neda.marvasti@boun.edu.tr