Medical Retrieval Task

Overview

The medical retrieval task of ImageCLEF 2010 will use a database similar to those of 2008 and 2009, but with a larger number of images. The data set contains all images from articles published in Radiology and Radiographics, including the text of the captions and a link to the HTML of the full-text articles. Over 77,000 images are currently available.
There will be three types of tasks in 2010:

  • Modality classification: This is a new sub-task introduced in 2010. Previous studies have shown that imaging modality is an important aspect of the image for medical retrieval. In user studies, clinicians have indicated that modality is one of the most important filters they would like to use to limit their searches. Many image retrieval websites (Goldminer, Yottalook) allow users to limit the search results to a particular modality. However, this modality is typically extracted from the caption and is often incorrect or missing. Studies have shown that the modality can be extracted from the image itself using visual features, and that search results can be improved significantly by using the modality classification. Thus, in 2010, the first sub-task will be modality classification. Participants will be provided with a training set of 2,000 images classified into one of 8 modalities (CT, MR, XR, etc.) and will then be given a test set of 2,000 images. Additionally, participants will be requested to provide a modality classification for all 77,500 images. The measure used for this sub-task will be classification accuracy (a minimal sketch of this computation follows the task list below). The results of the modality classification can be used to filter the search in the next sub-task.
    Note: participants who submit at least one run will be provided with the results of the classification on this data set.
  • Ad-hoc retrieval: This is the classic medical retrieval task, similar to those organized in 2005-2009. Participants will be given a set of 15 textual queries with 2-3 sample images for each query. The queries will be classified into visual, mixed and semantic types, based on the methods that are expected to yield the best results.
  • Case-based retrieval: This task was first introduced in 2009. It is a more complex task, but one that we believe is closer to the clinical workflow. In this task, a case description is provided, with patient demographics, limited symptoms, and test results including imaging studies (but not the final diagnosis). The goal is to retrieve cases including images that might best suit the provided case description. Unlike the ad-hoc task, the unit of retrieval here is a case, not an image. For the purposes of this task, a "case" is a PubMed ID corresponding to the journal article. In the result submissions, the article URL should be used instead, as several articles do not have PubMed IDs.
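
Since classification accuracy is the measure for the modality sub-task, the following is a minimal sketch of how it could be computed. The file names and the two-column, tab-separated layout are assumptions made for illustration, not an official format.

    # Minimal sketch: classification accuracy for the modality sub-task.
    # ASSUMPTIONS: the file names and the "image_id<TAB>modality" layout
    # are hypothetical, chosen only for illustration.
    def load_labels(path):
        labels = {}
        with open(path) as f:
            for line in f:
                image_id, modality = line.strip().split("\t")
                labels[image_id] = modality
        return labels

    truth = load_labels("modality_ground_truth.txt")       # hypothetical file
    predicted = load_labels("modality_predictions.txt")    # hypothetical file

    # An image counts as correct when the predicted modality matches the truth.
    correct = sum(1 for image_id, modality in truth.items()
                  if predicted.get(image_id) == modality)
    print("classification accuracy: %.4f" % (correct / len(truth)))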

Schedule

  • 16.2.2010: registration opens for all ImageCLEF tasks, and the copyright agreement can be found here
  • 6.4.2010: data release
  • 26.4.2010: topic release for the modality classification task at http://skynet.ohsu.edu/iclef10med/modality (please refer to the README for details)
  • 13.5.2010: topic release for the retrieval tasks
  • 21.6.2010: submission of runs
  • 20.7.2010: release of results
  • 15.8.2010: submission of working notes papers
  • 20.9.2010-23.9.2010: CLEF 2010 Conference, Padova, Italy

Data Download

Our database distribution includes an XML file with the image IDs, the captions of the images, the titles of the journal articles in which the images appeared, and the PubMed IDs of the journal articles. In addition, a compressed file containing the over 77,000 images will be provided.
The data is now available for download at http://skynet.ohsu.edu/iclef10med/ or https://dmice.ohsu.edu/iclef/iclef10med/.
Please login using the user id and password provided during registration.
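
As an illustration of how the metadata file might be consumed, here is a minimal parsing sketch. The file name and element names (record, imageID, caption, articleTitle, pmid) are assumptions; consult the distributed XML for the actual structure.

    # Minimal sketch: reading the metadata XML distributed with the collection.
    # ASSUMPTIONS: the file name and the element names (record, imageID,
    # caption, articleTitle, pmid) are hypothetical; check the actual XML.
    import xml.etree.ElementTree as ET

    tree = ET.parse("iclef2010_metadata.xml")      # hypothetical file name
    for record in tree.getroot().iter("record"):
        image_id = record.findtext("imageID")
        caption = record.findtext("caption")
        title = record.findtext("articleTitle")
        pmid = record.findtext("pmid")
        # A text-retrieval system would index caption and title under image_id.
        print(image_id, pmid)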

Topics

We will provide 15 ad-hoc topics, divided into visual, mixed and semantic topic types.
We will also provide 15 case-based topics, where the retrieval unit is a case, not an image.

Data Submission

Please ensure that your runs comply with the trec_eval format (described below) prior to submission.
We will reject any runs that do not meet the required format.
Also, please note that each group is allowed a maximum of 10 runs each for the image-based and case-based topics. The qrels will be distributed to the participants, so that additional runs can be evaluated by the participants themselves for the working notes papers.
Do not hesitate to ask if you have questions regarding the trec_eval format.

At the time of submission, the following information about each run will be requested. Please let us know if you would like clarifications on how to classify your runs.

1. What was used for the retrieval: Image, text or mixed (both)
2. Was other training data used?
3. Run type: Automatic, Manual, Interactive
4. Query Language

trec_eval format

The format for submitting results is based on the trec_eval program (http://trec.nist.gov/trec_eval/) as follows:

1 1 27431 1 0.567162 OHSU_text_1
1 1 27982 2 0.441542 OHSU_text_1
.............
1 1 52112 1000 0.045022 OHSU_text_1
2 1 43458 1 0.9475 OHSU_text_1
.............
25 1 28937 995 0.01492 OHSU_text_1

where:

  • The first column contains the topic number, in our case from 1-15 (or 16-30 for the case-based topics)
  • The second column is always 1
  • The third column is the image identifier, without the .jpg extension and without any image path (or the full article URL for the case-based topics)
  • The fourth column is the ranking for the topic (1-1000)
  • The fifth column is the score assigned by the system
  • The sixth column is the identifier for the run and should be the same in the entire file
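
To make the layout concrete, here is a minimal sketch that writes a run in this format. The `results` structure standing in for a system's output is hypothetical.

    # Minimal sketch: writing a run file in the trec_eval format above.
    # ASSUMPTION: `results` maps topic numbers to (image_id, score) pairs
    # already sorted by decreasing score; it stands in for real system output.
    results = {1: [("27431", 0.567162), ("27982", 0.441542)]}  # hypothetical
    run_id = "OHSU_text_1"

    with open(run_id + ".txt", "w") as out:
        for topic in sorted(results):
            for rank, (image_id, score) in enumerate(results[topic][:1000], 1):
                # columns: topic, literal 1, image id, rank, score, run id
                out.write("%d 1 %s %d %f %s\n"
                          % (topic, image_id, rank, score, run_id))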

Several key points for submitted runs are:

  • The topic numbers should be consecutive and complete from 1-15 (or 16-30 for the case-based topics)
  • Case-based and image-based topics have to be submitted in separate files
  • The scores should be in decreasing order (i.e., the image at the top of the list should have a higher score than the images at the bottom of the list)
  • Up to (but not necessarily) 1000 images can be submitted for each topic.
  • Each topic must have at least one image.
  • Each run must be submitted in a single file. Files should be pure text files and not be zipped or otherwise compressed.
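
A quick pre-submission check along these lines may help avoid rejected runs. This sketch, assuming an image-based run over topics 1-15, is illustrative rather than an official validator.

    # Minimal sketch: pre-submission check for the points above (consecutive
    # topics, 1-1000 results per topic, decreasing scores within each topic).
    # ASSUMPTION: an image-based run covering topics 1-15; adjust the range
    # to 16-30 for a case-based run.
    from collections import defaultdict

    def check_run(path, topics=range(1, 16)):
        scores = defaultdict(list)
        with open(path) as f:
            for line in f:
                topic, one, doc_id, rank, score, run_id = line.split()
                scores[int(topic)].append(float(score))
        for t in topics:
            rows = scores.get(t, [])
            assert 1 <= len(rows) <= 1000, "topic %d has %d results" % (t, len(rows))
            assert rows == sorted(rows, reverse=True), "scores increase in topic %d" % t
        print("run looks compliant")

    check_run("OHSU_text_1.txt")  # hypothetical run file name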

Contact

Please contact Jayashree Kalpathy-Cramer or Henning Müller for any questions about this task.