Medical Image Retrieval Task 2011

News

  • 29.7.2011 Results for the case-based retrieval task are released
  • 28.7.2011 Results for the image-based retrieval task are released
  • 26.7.2011 Results for the modality classification task are released
  • 15.4.2011 Topics for the medical retrieval tasks are released.
  • 10.4.2011 Training data released for the modality classification task.
  • 8.4.2011 Release of the training data for the modality classification task.
  • 17.3.2011 Data release for the Medical Image Retrieval task 2011.

The medical retrieval task of ImageCLEF 2011 uses a subset of PubMed Central containing 231,000 images.
This task is a use case of the PROMISE network of excellence and is supported by the project.

There will be three types of tasks in 2011:

  • Modality Classification:
    Previous studies have shown that imaging modality is an important aspect of the image for medical retrieval. In user studies, clinicians have indicated that modality is one of the most important filters by which they would like to be able to limit their search. Many image retrieval websites (Goldminer, Yottalook) allow users to limit the search results to a particular modality. However, the modality is typically extracted from the caption and is often incorrect or missing. Studies have shown that the modality can be extracted from the image itself using visual features. Additionally, using the modality classification, the search results can be improved significantly. In contrast to 2010, in 2011 the images will come from a larger variety of journals and the captions will not be of the same quality. The number of classes is foreseen to change, adding more non-imaging-modality classes such as photograph, graphics, flow chart, etc.
    The measure used for this sub-task will be classification accuracy (a minimal sketch of the accuracy computation follows the class list below). The results of the modality classification can be used to filter the search in the next sub-task.
    Note: participants who submit at least one run will be provided the results of the classification on this data set.

    Class codes with descriptions (Class codes need to be specified in run files):
    [Class code : description]
    3D : 3d reconstruction
    AN : angiography
    CM : compound figure (more than one type of image)
    CT : computed tomography
    DM : dermatology
    DR : drawing
    EM : electron microscopy
    EN : endoscopic imaging
    FL : fluorescence
    GL : gel
    GX : graphs
    GR : gross pathology
    HX : histopathology
    MR : magnetic resonance imaging
    PX : general photo
    RN : retinography
    US : ultrasound
    XR : x-ray
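
    For reference, the accuracy computation is straightforward: the fraction of test images whose predicted class code matches the ground truth. A minimal sketch in Python (the file names and the ground-truth format, one "<image_id> <class_code>" pair per line, are assumptions, not official tools):

      # Minimal sketch: classification accuracy from a run file and a
      # ground-truth file. File names and the ground-truth format are
      # hypothetical; run lines hold "<image_id> <class_code> <score>".
      def load_labels(path):
          """Read lines of '<image_id> <class_code> [...]' into a dict."""
          labels = {}
          with open(path) as f:
              for line in f:
                  parts = line.split()
                  if len(parts) >= 2:
                      labels[parts[0]] = parts[1]
          return labels

      truth = load_labels("modality_groundtruth.txt")  # hypothetical file
      predicted = load_labels("my_modality_run.txt")   # hypothetical file

      # Accuracy = correctly classified test images / all test images.
      correct = sum(1 for image_id, code in truth.items()
                    if predicted.get(image_id) == code)
      print(f"Accuracy: {correct / len(truth):.4f}")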

  • Ad-hoc image-based retrieval:
    This is the classic medical retrieval task, similar to those organized in 2005-2010. Participants will be given a set of 30 textual queries with 2-3 sample images for each query. The queries will be classified into visual, mixed and semantic types, based on the methods that are expected to yield the best results.
  • Case-based retrieval:
    This task was first introduced in 2009. It is a more complex task, but one that we believe is closer to the clinical workflow. In this task, a case description with patient demographics, limited symptoms and test results (including imaging studies) is provided, but not the final diagnosis. The goal is to retrieve cases, including images, that might best suit the provided case description. Unlike the ad-hoc task, the unit of retrieval here is a case, not an image. For the purposes of this task, a "case" is a PubMed ID corresponding to the journal article. In the result submissions, the article DOI should be used, as several articles have neither PubMed IDs nor article URLs.

Citations

  • When referring to ImageCLEFmed 2011 task general goals, general results, etc. please cite the following publication:
    • Jayashree Kalpathy-Cramer, Henning Müller, Steven Bedrick, Ivan Eggel, Alba García Seco de Herrera and Theodora Tsikrika, Overview of the CLEF 2011 medical image classification and retrieval tasks, in: CLEF 2011 working notes, Amsterdam, The Netherlands, 2011.
    • BibTeX:

      @InProceedings{KMB2011,
        Title     = {The {CLEF} 2011 medical image retrieval and classification tasks},
        Author    = {Kalpathy-Cramer, Jayashree and M\"uller, Henning and Bedrick, Steven and Eggel, Ivan and Garc\'ia Seco de Herrera, Alba and Tsikrika, Theodora},
        Booktitle = {Working Notes of CLEF 2011 (Cross Language Evaluation Forum)},
        Year      = {2011},
        Month     = {September},
        Location  = {Amsterdam, The Netherlands}
      }

  • When referring to ImageCLEFmed task in general, please cite the following publication:
    • Jayashree Kalpathy-Cramer, Alba García Seco de Herrera, Dina Demner-Fushman, Sameer Antani, Steven Bedrick and Henning Müller, Evaluating Performance of Biomedical Image Retrieval Systems – an Overview of the Medical Image Retrieval task at ImageCLEF 2004-2014, in: Computerized Medical Imaging and Graphics, 2014.
    • BibTeX:

      @Article{KGD2014,
        Title   = {Evaluating Performance of Biomedical Image Retrieval Systems -- an Overview of the Medical Image Retrieval task at {ImageCLEF} 2004--2014},
        Author  = {Kalpathy-Cramer, Jayashree and Garc\'ia Seco de Herrera, Alba and Demner-Fushman, Dina and Antani, Sameer and Bedrick, Steven and M\"uller, Henning},
        Journal = {Computerized Medical Imaging and Graphics},
        Year    = {2014}
      }

Schedule

    • 1.2.2011: registration opens for all ImageCLEF tasks, and the copyright agreement is ready
    • 17.3.2011: data release
    • 8.4.2011: topic release for the modality classification task
    • 15.4.2011: topic release for the retrieval tasks
    • 9.6.2011: submission of runs
    • 26.-31.7.2011: release of results
    • 14.8.2011: submission of working notes papers
    • 20.09.2011-23.09.2011: CLEF 2011 Conference, Amsterdam, The Netherlands

    Data Download

    Our database distribution includes an XML file with the image ID, the caption of the image, the title of the journal article in which the image appeared, and the PubMed ID of the journal article. In addition, a compressed file containing the 231,000 images will be provided.

    Topics

    We will provide 30 ad-hoc topics, divided into visual, mixed and semantic topic types.
    We will also provide 10 case-based topics, where the retrieval unit is a case, not an image.

    Data Submission

    Image-based and case-based retrieval

    Please ensure that your submissions are compliant with the trec_eval format prior to submission.
    We will reject any runs that do not meet the required format.
    Also, please note that each group is allowed a maximum of 10 runs each for the image-based and case-based topics. The qrels will be distributed to the participants, so that further runs can be evaluated for the working notes papers.
    Do not hesitate to ask if you have questions regarding the trec_eval format.

    At the time of submission, the following information about each run will be requested. Please let us know if you would like clarifications on how to classify your runs.

    1. What was used for the retrieval: Image, text or mixed (both)
    2. Was other training data used?
    3. Run type: Automatic, Manual, Interactive
    4. Query Language

    trec_eval format

    The format for submitting results is based on the trec_eval program (http://trec.nist.gov/trec_eval/) as follows:

    1 1 27431 1 0.567162 OHSU_text_1
    1 1 27982 2 0.441542 OHSU_text_1
    .............
    1 1 52112 1000 0.045022 OHSU_text_1
    2 1 43458 1 0.9475 OHSU_text_1
    .............
    25 1 28937 995 0.01492 OHSU_text_1

    where:

    • The first column contains the topic number, in our case from 1-30 (or 31-40 for the case-based topics)
    • The second column is always 1
    • The third column is the image identifier (IRI) without the extension jpg and without any image path (or the full article DOI for the case-based topics)
    • The fourth column is the ranking for the topic (1-1000)
    • The fifth column is the score assigned by the system
    • The sixth column is the identifier for the run and should be the same in the entire file

    Several key points for submitted runs are:

    • The topic numbers should be consecutive and complete, from 1-30 (or 31-40 for the case-based topics)
    • Case-based and image-based topics have to be submitted in separate files
    • The scores should be in decreasing order (i.e. the image at the top of the list should have a higher score than images at the bottom of the list)
    • Up to (but not necessarily) 1000 images can be submitted for each topic.
    • Each topic must have at least one image.
    • Each run must be submitted in a single file. Files should be pure text files and not be zipped or otherwise compressed.
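
    To avoid rejected runs, it can help to sanity-check a run file against these constraints before submitting. A minimal sketch in Python (the run file name is hypothetical and the checks simply mirror the bullet points above; this is not an official validator):

      # Minimal sanity check of a trec_eval-style run file against the
      # constraints above. Not an official validator; file name hypothetical.
      from collections import defaultdict

      scores_per_topic = defaultdict(list)
      run_ids = set()

      with open("OHSU_text_1.txt") as f:  # hypothetical run file
          for n, line in enumerate(f, 1):
              if not line.strip():
                  continue
              topic, q0, image_id, rank, score, run_id = line.split()
              assert q0 == "1", f"line {n}: second column must always be 1"
              scores_per_topic[int(topic)].append(float(score))
              run_ids.add(run_id)

      assert len(run_ids) == 1, "run identifier must be the same in the entire file"

      for topic, scores in scores_per_topic.items():
          assert 1 <= len(scores) <= 1000, f"topic {topic}: 1 to 1000 results required"
          assert scores == sorted(scores, reverse=True), \
              f"topic {topic}: scores must be in decreasing order"

      # Topics must be consecutive and complete (1-30, or 31-40 for case-based).
      topics = sorted(scores_per_topic)
      assert topics == list(range(topics[0], topics[-1] + 1)), "topics not consecutive"
      print(f"OK: {len(topics)} topics, run id {run_ids.pop()}")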

    Modality classification

    The format of the result submission for the modality classification subtask should be the following:

    1471-2091-8-12-2 3D 0.9
    1471-2091-8-29-7 GX 1
    1471-2105-10-276-8 3D 0.4
    1471-2105-10-379-3 GR 0.8
    1471-2105-10-S1-S60-3 HX 0.9
    ...

    where:

    • The first column contains the image ID (IRI). The ID contains neither the file extension nor a file path
    • The second column is the class code
    • The third column represents the normalized score (between 0 and 1) that your system assigned to that specific result

    You should also respect the following constraints:

    • Each specified image must be part of the collection (dataset)
    • An image must not appear more than once
    • All images of the test set must be contained in the run file; classifying the whole dataset as well is appreciated
    • Only known class codes are accepted
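
    Analogously, a short script can check a modality run before submission. A minimal sketch in Python (the file names are hypothetical; the class codes are those listed earlier on this page):

      # Minimal check of a modality classification run file against the
      # constraints above. File names are hypothetical.
      CLASS_CODES = {"3D", "AN", "CM", "CT", "DM", "DR", "EM", "EN", "FL",
                     "GL", "GX", "GR", "HX", "MR", "PX", "RN", "US", "XR"}

      with open("collection_ids.txt") as f:   # hypothetical: all dataset IRIs
          collection = {line.strip() for line in f if line.strip()}
      with open("testset_ids.txt") as f:      # hypothetical: test-set IRIs
          testset = {line.strip() for line in f if line.strip()}

      seen = set()
      with open("my_modality_run.txt") as f:  # hypothetical run file
          for n, line in enumerate(f, 1):
              if not line.strip():
                  continue
              image_id, class_code, score = line.split()
              assert image_id in collection, f"line {n}: image not in the collection"
              assert image_id not in seen, f"line {n}: image listed more than once"
              assert class_code in CLASS_CODES, f"line {n}: unknown class code"
              assert 0.0 <= float(score) <= 1.0, f"line {n}: score must be in [0, 1]"
              seen.add(image_id)

      missing = testset - seen
      assert not missing, f"{len(missing)} test-set images are missing from the run"
      print(f"OK: {len(seen)} images classified")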

    Results

    Modality classification

    runid Correctly classified (fraction)
    Mixed
    XRCE_all_MIX_semiLM.txt 0.8691
    XRCE_Testset_MIX_semiL50.txt 0.8642
    2011.06.10-02.38.40.test.prediction.trec 0.8603
    XRCE_Testset_MIX_semiL25.txt 0.8593
    2011.06.09-18.36.25.test.prediction.trec 0.8564
    2011.06.08-19.58.41.test.prediction.trec 0.8515
    2011.06.10-00.01.26.test.prediction.trec 0.7685
    image_text_test_result_multilevel.dat 0.7412
    2011.06.10-03.25.40.test.prediction.trec 0.7412
    image_text_test_result_sum_ext.dat 0.6025
    image_text_test_result_CV.dat 0.5966
    image_text_test_result_multilevel_ext.dat 0.5917
    image_text_test_result_sum.dat 0.5917
    image_text_test_result_CV_ext.dat 0.582
    image_text_test_result_original.dat 0.582
    image_text_test_result_ext.dat 0.5439
    Textual
    IPL_AUEB_ICLEF2011_MED_MODALITY_09062011_1500.txt 0.7041
    IPL_AUEB_ICLEF2011_MED_MODALITY_09062011_1600.txt 0.4765
    Visual
    XRCE_all_VIS_semiL25.txt 0.8359
    XRCE_Testset_VIS_semi20_CBIR.txt 0.8349
    XRCE_all_VIS_semi20_CBIR.txt 0.8339
    recod_imageclefmed_ModCla_357l 0.6972
    recod_imageclefmed_ModCla_Vl 0.6943
    recod_imageclefmed_ModCla_VlNoR 0.6904
    recod_imageclefmed_ModCla_VsNoR 0.6835
    recod_imageclefmed_ModCla_Vs 0.6806
    recod_imageclefmed_ModCla_343s 0.6787
    recod_imageclefmed_ModCla_370l 0.6787
    recod_imageclefmed_ModCla_370s 0.6767
    recod_imageclefmed_ModCla_343l 0.6748
    recod_imageclefmed_ModCla_357s 0.6669
    classificationResults_GIFT.txt 0.622
    image_test_result_original.dat 0.5712
    image_test_result_ext.dat 0.4853

    Ad-hoc image-based retrieval

    runid retrieval type run type MAP P10 P20 Rprec bpref num_rel_ret
    Mixed
    mixed_3_2_cedd_baseline_run Mixed Not applicable 0.2372 0.3933 0.3550 0.2881 0.2738 1597
    mixed_cedd_baseline_run Mixed Not applicable 0.2307 0.3967 0.3400 0.2706 0.2606 1595
    mixed_3_2_cedd_weighted_run Mixed Not applicable 0.2014 0.3400 0.3233 0.2587 0.2481 1455
    mixed_3_2_cedd_rerank_reindex_run Mixed Feedback 0.1983 0.4067 0.3350 0.2397 0.2428 1349
    mixed_cedd_weighted_run Mixed Not applicable 0.1972 0.3367 0.3083 0.2489 0.2383 1443
    mixed_cedd_rerank_reindex_run Mixed Not applicable 0.1853 0.3667 0.3283 0.2309 0.2230 1338
    DEMIR_MED2011 Mixed Automatic 0.1645 0.3967 0.3350 0.2340 0.2198 890
    XRCE_RUN_MIX_SFLMODSc_ax_dir_spl Mixed Feedback 0.1643 0.3800 0.3183 0.2130 0.2234 1391
    XRCE_RUN_MIX_SFLMOD_ax_dir_spl Mixed Feedback 0.1545 0.3800 0.3383 0.1860 0.2053 1284
    XRCE_RUN_MIX_SFLMODFL2_ax_dir_spl Mixed Automatic 0.1520 0.3633 0.3250 0.1909 0.2049 1288
    XRCE_RUN_MIX_SFL_MOD_ax_dir_spl_lgd Mixed Feedback 0.1512 0.3667 0.3317 0.1821 0.2031 1284
    IPL2011Mixed-DEF-T1-C6-M0_2-BM25F-0_39-0_01 Mixed Automatic 0.1494 0.3067 0.2583 0.1882 0.1849 1241
    IPL2011Mixed-DEF-T1-C6-M0_2-R0_01-BM25F-0_39-0_01 Mixed Automatic 0.1493 0.3067 0.2550 0.1883 0.1849 1241
    IPL2011Mixed-DTG-T1-C6-M0_2-R0_01-BM25F-0_39-0_01 Mixed Automatic 0.1492 0.3067 0.2550 0.1883 0.1849 1240
    IPL2011Mixed-DTG-T1-C6-M0_2-BM25F-0_39-0_01 Mixed Automatic 0.1489 0.3067 0.2583 0.1882 0.1840 1239
    XRCE_RUN_MIX_SFL_noMOD_ax_dir_spl Mixed Automatic 0.1472 0.3433 0.2867 0.1761 0.1874 1203
    XRCE_RUN_MIX_SFL_noMOD_ax_dir_spl_lgd Mixed Feedback 0.1429 0.3367 0.2867 0.1738 0.1860 1215
    iti-lucene-baseline+expanded-concepts+image Mixed Automatic 0.1356 0.2833 0.2517 0.1870 0.1970 1350
    Run6_TxtImg_OwaOr03 Mixed Automatic 0.1346 0.3467 0.2867 0.1666 0.1604 565
    Run6_TxtImg_PtPi Mixed Automatic 0.1311 0.3333 0.2817 0.1651 0.1557 565
    Run5_TxtImg_OwaOr03 Mixed Automatic 0.1299 0.3200 0.2567 0.1613 0.1641 710
    mixed_GIFT_Lucene_captions_ib Mixed Automatic 0.1230 0.3133 0.2733 0.1724 0.1733 948
    Run5_TxtImg_PtPi Mixed Feedback 0.1176 0.2800 0.2100 0.1575 0.1614 705
    ILP2011Mixed-DEF-T1-C6-M0_2-BM25F Mixed Automatic 0.0952 0.2967 0.2667 0.1332 0.1610 1209
    ILP2011Mixed-DTG-T1-C6-M0_2-BM25F Mixed Automatic 0.0945 0.2700 0.2633 0.1314 0.1613 1232
    ILP2011Mixed-DTG-T0_113-C0_335-M0_1-BM25F Mixed Automatic 0.0924 0.2733 0.2650 0.1299 0.1600 1209
    ILP2011Mixed-DEF-T0_113-C0_335-M0_1-BM25F Mixed Automatic 0.0911 0.2733 0.2617 0.1338 0.1583 1186
    Multimodal_Rerank_Filter_Merge Mixed Automatic 0.0910 0.2867 0.2717 0.1270 0.1572 1130
    Multimodal_Rerank_Merge Mixed Automatic 0.0903 0.2833 0.2700 0.1198 0.1547 1145
    Run6_TxtImg_Pi Mixed Automatic 0.0891 0.2400 0.1933 0.1206 0.1288 508
    mixed_GIFT_Lucene_full_ib Mixed Automatic 0.0857 0.2900 0.2700 0.1300 0.1308 830
    iti-essie-baseline+expanded-concepts+image Mixed Automatic 0.0843 0.2167 0.2017 0.1295 0.1331 948
    Run5_TxtImg_Pi Mixed Automatic 0.0699 0.1667 0.1517 0.1017 0.1394 755
    AUDR_MIXED_CEDD_TFIDFModel Mixed Automatic 0.0556 0.1767 0.1550 0.0905 0.1018 553
    AUDR_MIXED_CEDD_KLD_QE Mixed Automatic 0.0341 0.1633 0.1217 0.0605 0.0720 496
    AUDR_MIXED_CEDD_BM25Model Mixed Automatic 0.0328 0.1500 0.1217 0.0632 0.0719 496
    Daedalus_CombSemA#C_MC#SC_MCCI Mixed Automatic 0.0232 0.1133 0.0983 0.0516 0.0851 862
    Daedalus_CombSemE#C_MC#C_MCCI Mixed Automatic 0.0218 0.1167 0.0933 0.0514 0.0858 905
    Daedalus_CombSemA#C_MC#C_MCCI Mixed Automatic 0.0211 0.1167 0.0933 0.0514 0.0845 862
    Daedalus_CombBas#C_MC#C_MC Mixed Automatic 0.0204 0.1167 0.0917 0.0504 0.0827 829
    Textual
    laberinto_CTC Textual Automatic 0.2172 0.3467 0.3017 0.2369 0.2402 1471
    Run2_Txt Textual Automatic 0.2158 0.3533 0.3383 0.2470 0.2514 1383
    IPL2011AdHocT1-C6-M0_2-R0_01-DEFAULT Textual Automatic 0.2145 0.4033 0.3333 0.2536 0.2434 1451
    laberinto_BC Textual Automatic 0.2133 0.3400 0.3067 0.2363 0.2384 1469
    IPL2011AdHocT1-C6-M0_2-DEFAULT Textual Automatic 0.2130 0.3567 0.3167 0.2392 0.2370 1433
    Run3_Txt Textual Automatic 0.2125 0.3867 0.3317 0.2468 0.2430 1138
    IPL2011AdHocT0_113-C0_335-M0_1-DEFAULT Textual Automatic 0.2016 0.3733 0.3183 0.2307 0.2269 1385
    IVSCT5G Textual Automatic 0.2008 0.3033 0.3050 0.2328 0.2331 1544
    IVSCT5GK Textual Automatic 0.2008 0.3033 0.3050 0.2331 0.2331 1543
    IVPCT5GKin Textual Automatic 0.1975 0.2967 0.2833 0.2328 0.2257 1517
    IVPCT5G Textual Automatic 0.1974 0.2967 0.2833 0.2322 0.2256 1520
    IVPCT5GKout Textual Automatic 0.1973 0.2967 0.2833 0.2322 0.2256 1519
    Daedalus_BasTxtC Textual Automatic 0.1966 0.3900 0.3667 0.2668 0.2564 866
    IPL2011AdHocTC0_9-M0_1-DEFAULT Textual Manual 0.1945 0.3700 0.3100 0.2227 0.2255 1380
    DEMIR_MED_1 Textual Automatic 0.1942 0.3400 0.2933 0.2242 0.2215 1444
    laberinto_ETPCC Textual Automatic 0.1939 0.2933 0.2617 0.2089 0.2198 1526
    AUDR_TFIDFModel_TopicModel_IMAGE_CAPTION_AND_ARTICLE Textual Automatic 0.1917 0.3400 0.3050 0.2322 0.2237 1447
    Daedalus_SemEC Textual Automatic 0.1906 0.3867 0.3783 0.2868 0.2690 1091
    SINAI-ImgCaption Textual Automatic 0.1890 0.3300 0.2950 0.2322 0.2247 1406
    SINAI-ImgCaptionExpand1 Textual Automatic 0.1890 0.3300 0.2950 0.2322 0.2247 1406
    SINAI-ImgCaptionExpand2 Textual Automatic 0.1890 0.3300 0.2950 0.2322 0.2247 1406
    SINAI-ImgCaptionExpand3 Textual Automatic 0.1890 0.3300 0.2950 0.2322 0.2247 1406
    SINAI-ImgCaptionExpand4 Textual Automatic 0.1890 0.3300 0.2950 0.2322 0.2247 1406
    SINAI-ImgCaptionExpand5 Textual Automatic 0.1890 0.3300 0.2950 0.2322 0.2247 1406
    XRCE_RUN_TXTax_dir_spl Textual Feedback 0.1870 0.3233 0.2583 0.2152 0.2156 1203
    Daedalus_SemAC Textual Automatic 0.1818 0.3767 0.3583 0.2637 0.2496 935
    XRCE_RUN_TXT_noMOD Textual Automatic 0.1802 0.3100 0.2533 0.2192 0.2122 1215
    AUDR_TFIDF_CAPTION Textual Automatic 0.1758 0.3133 0.2700 0.2069 0.2187 1206
    HES-SO-VS_IMAGE-BASED_CAPTIONS Textual Automatic 0.1742 0.3000 0.2683 0.2096 0.2179 1261
    UESTC_adhoc_p1 Textual Automatic 0.1672 0.2667 0.2383 0.2015 0.1946 1373
    UESTC_adhoc_p2 Textual Feedback 0.1672 0.2733 0.2333 0.2120 0.1995 1348
    UESTC_adhoc_p1QE_sw Textual Automatic 0.1669 0.2833 0.2333 0.2098 0.1977 1384
    UESTC_adhoc_p2QE_sw_chd Textual Automatic 0.1666 0.2700 0.2250 0.2136 0.2049 1362
    UESTC_adhoc_p1_sw Textual Automatic 0.1635 0.2733 0.2433 0.1995 0.1908 1368
    UESTC_adhoc_p1QE Textual Automatic 0.1632 0.2533 0.2283 0.2084 0.1994 1385
    IPL2011AdHocTCM-DEFAULT-DEFAULT Textual Automatic 0.1599 0.3367 0.2850 0.1799 0.1874 1336
    ESU_Ib_bl Textual Automatic 0.1594 0.2667 0.2067 0.1937 0.1889 1268
    UESTC_adhoc_p2QE_sw Textual Automatic 0.1590 0.2567 0.2183 0.2006 0.1956 1286
    UESTC_adhoc_indri Textual Automatic 0.1588 0.2600 0.2333 0.1944 0.1873 1377
    UESTC_adhoc_p2QE Textual Automatic 0.1583 0.2500 0.2133 0.2044 0.1974 1350
    ESU_Ib_blRF Textual Automatic 0.1558 0.2433 0.2100 0.1763 0.1865 1250
    ESU_Ib_Struc Textual Automatic 0.1540 0.2800 0.2300 0.1869 0.1906 1333
    IPL2011AdHocTC0_9-M0_1-BM25F Textual Feedback 0.1510 0.3033 0.2633 0.1971 0.1909 1245
    laberinto_BIR Textual Automatic 0.1496 0.3400 0.3000 0.1908 0.1992 1292
    IPL2011AdHocT1-C6-M0_2-R0_01-BM25F Textual Automatic 0.1492 0.3067 0.2550 0.1883 0.1848 1240
    IPL2011AdHocT1-C6-M0_2-BM25F Textual Automatic 0.1485 0.3067 0.2583 0.1882 0.1839 1233
    UESTC_adhoc_p1QE_sw_chd Textual Automatic 0.1471 0.2100 0.2050 0.1799 0.1807 1346
    laberinto_CTIR Textual Automatic 0.1466 0.3433 0.2950 0.1868 0.1953 1293
    textual_rerank_reindex Textual Automatic 0.1452 0.3033 0.2633 0.1683 0.1859 1288
    laberinto_ETPCIR Textual Automatic 0.1411 0.3000 0.2850 0.1766 0.1887 1325
    ESU_Ib_StrucRF Textual Automatic 0.1346 0.2300 0.2000 0.1609 0.1874 1307
    IPL2011AdHocT0_113-C0_335-M0_1-BM25F Textual Automatic 0.1312 0.2767 0.2433 0.1665 0.1670 1210
    Run6_Txt Textual Automatic 0.1309 0.3433 0.2733 0.1652 0.1597 564
    IPL2011AdHocTCM-BM25 Textual Automatic 0.1289 0.2867 0.2500 0.1710 0.1744 1168
    Run5_Txt Textual Automatic 0.1270 0.3100 0.2517 0.1565 0.1622 651
    iti-lucene-baseline+expanded-concepts Textual Automatic 0.1255 0.2733 0.2483 0.1691 0.1828 1235
    laberinto_BFT Textual Automatic 0.1146 0.2533 0.2267 0.1621 0.1786 1355
    laberinto_CTFT Textual Automatic 0.1101 0.2500 0.2333 0.1512 0.1691 1348
    laberinto_ETFT Textual Automatic 0.1050 0.2567 0.2250 0.1302 0.1640 1292
    laberinto_ETPCFT Textual Automatic 0.1014 0.2400 0.2200 0.1253 0.1571 1310
    iti-essie-baseline+expanded-concepts Textual Automatic 0.0966 0.2133 0.2000 0.1421 0.1556 1148
    HES-SO-VS_IMAGE-BASED_FULLTEXT Textual Automatic 0.0921 0.2167 0.2150 0.1264 0.1506 1211
    AUDR_BM25Model_TopicModel_IMAGE_CAPTION_AND_ARTICLE Textual Automatic 0.0878 0.1900 0.1650 0.1162 0.1250 690
    AUDR_KLDQueryExpansion_TopicModel_IMAGE_CAPTION_AND_ARTICLE Textual Automatic 0.0811 0.1733 0.1700 0.1165 0.1191 694
    Visual
    IPL2011Visual-DECFc Visual Automatic 0.0338 0.1500 0.1317 0.0625 0.0717 717
    IPL2011Visual-DEFC Visual Automatic 0.0322 0.1467 0.1250 0.0640 0.0715 689
    IPL2011Visual-DEC Visual Automatic 0.0312 0.1433 0.1233 0.0616 0.0716 673
    ILP2011Visual-DEF Visual Feedback 0.0283 0.1367 0.1217 0.0583 0.0703 632
    gift_visual_ib Visual Automatic 0.0274 0.1467 0.1367 0.0581 0.0807 731
    ILP2011Visual-DTG Visual Automatic 0.0253 0.1333 0.1250 0.0538 0.0715 590
    _visual_ib Visual Automatic 0.0252 0.1267 0.1200 0.0554 0.0752 709
    iti-lucene-image Visual Automatic 0.0245 0.1333 0.1217 0.0492 0.0627 625
    image_fusion_category_weight_filter Visual Automatic 0.0221 0.1167 0.1017 0.0457 0.0651 617
    image_fusion_category_weight_filter_merge Visual Automatic 0.0201 0.1000 0.0933 0.0433 0.0629 671
    image_fusion_category_weight_merge Visual Automatic 0.0193 0.0933 0.0867 0.0431 0.0620 688
    cedd_norm_min_1 Visual Automatic 0.0174 0.1067 0.0833 0.0434 0.0602 569
    Daedalus_BasImgC_MC Visual Feedback 0.0147 0.0967 0.0800 0.0471 0.0582 507
    Daedalus_ImgC_MCCI Visual Automatic 0.0147 0.0967 0.0800 0.0471 0.0582 507
    bovw_visual_ib Visual Automatic 0.0126 0.0867 0.0800 0.0315 0.0437 324
    Daedalus_BasImg Visual Automatic 0.0125 0.0733 0.0683 0.0397 0.0528 484
    AUDR_VISUAL_CEDD Visual Automatic 0.0082 0.0400 0.0500 0.0269 0.0427 466
    bovw_s2_visual_ib Visual Automatic 0.0076 0.0900 0.0650 0.0182 0.0279 213
    okada_lab_cosine_tfidf Visual Automatic 0.0007 0.0167 0.0117 0.0032 0.0084 81
    okada_lab_cosine_reg Visual Automatic 0.0005 0.0067 0.0133 0.0037 0.0094 86
    okada_lab_emd_L1_tfidf Visual Automatic 0.0004 0.0100 0.0083 0.0022 0.0036 36
    okada_lab_diffusion Visual Automatic 0.0003 0.0000 0.0067 0.0012 0.0065 72
    okada_lab_emd Visual Automatic 0.0003 0.0100 0.0050 0.0022 0.0051 42
    okada_lab_emd_L1 Visual Automatic 0.0003 0.0100 0.0050 0.0022 0.0051 42
    okada_lab_emd_tfidf Visual Automatic 0.0003 0.0067 0.0067 0.0021 0.0042 39
    okada_lab_diffusion_tfidf Visual Automatic 0.0002 0.0000 0.0067 0.0012 0.0063 66

    Case-based retrieval

    The results shown in this table are the official results, calculated using 9 topics, as the judgements for one topic were initially omitted. The corrected qrels files with all ten topics are available to participants for further analysis. The ranking remains essentially the same, with two runs swapping places in the top 10 and a few more changes at lower ranks.

    runid retrieval type run type MAP P10 P20 Rprec bpref num_rel_ret
    Mixed
    mixed_GIFT_Lucene_fulltext_cb Mixed Automatic 0.0754 0.1667 0.1556 0.1227 0.0958 121
    iti-lucene-baseline+expanded-concepts+image Mixed Automatic 0.0269 0.0333 0.0222 0.0387 0.0252 99
    iti-lucene-baseline+expanded-concepts+image+cases Mixed Automatic 0.0255 0.0333 0.0278 0.0364 0.0230 99
    iti-lucene-expanded-concepts+image Mixed Automatic 0.0247 0.0333 0.0222 0.0387 0.0249 95
    Textual
    UESTC_full_indri Textual Automatic 0.1297 0.1889 0.1500 0.1588 0.1212 144
    HES-SO-VS_CASE_BASED_FULLTEXT Textual Automatic 0.1293 0.2000 0.1444 0.1509 0.1122 141
    UESTC_full_p2QE Textual Automatic 0.1199 0.1556 0.1444 0.1365 0.1082 143
    UESTC_full_p2 Textual Automatic 0.1179 0.1889 0.1444 0.1490 0.1162 145
    MRIM_KJ_A_VM_Sop_T4G Textual Automatic 0.1114 0.1444 0.1444 0.1546 0.1064 149
    IRIT_LGDc1.0_KLbfree_d_20_t_20_1 Textual Automatic 0.1030 0.1556 0.1278 0.1206 0.0930 117
    IRIT_CombSUMc1.0_KLbfree_d_20_t_20_1 Textual Automatic 0.0947 0.1333 0.1333 0.1073 0.0862 119
    iti-essie-manual Textual Manual 0.0941 0.1667 0.1556 0.1409 0.1162 80
    IRIT_LGDc1.0_KLbfree_d_20_t_20_1_ignore_low_idf Textual Automatic 0.0937 0.1111 0.0889 0.1017 0.0716 118
    MRIM_KJ_A_VM_Pos_T4G Textual Automatic 0.0911 0.1111 0.1278 0.1454 0.0938 146
    UESTC_full_okapi Textual Automatic 0.0907 0.1444 0.1333 0.1397 0.0970 148
    IRIT_CombSUMc1.0_KLbfree_d_20_t_20_2 Textual Automatic 0.0874 0.1111 0.1000 0.1149 0.0710 118
    IRIT_LGDc1.0 Textual Automatic 0.0872 0.1111 0.1222 0.0919 0.0722 111
    IRIT_CombSUMc1.0_3 Textual Automatic 0.0859 0.1444 0.1000 0.1044 0.0783 117
    UESTC_ac_okapi Textual Automatic 0.0835 0.1222 0.1000 0.0942 0.0734 128
    IRIT_In_expB2c1.0_KLbfree_d_20_t_20_0_ignore_low_idf Textual Automatic 0.0793 0.1444 0.0889 0.0893 0.0707 114
    IRIT_In_expB2c1.0_KLbfree_d_20_t_20_0 Textual Automatic 0.0772 0.1000 0.1000 0.0896 0.0675 116
    UESTC_ac_indri Textual Automatic 0.0767 0.1111 0.1000 0.0864 0.0669 127
    iti-lucene-baseline Textual Automatic 0.0762 0.1444 0.1000 0.0877 0.0737 112
    UESTC_full_okapi_fb Textual Automatic 0.0762 0.1333 0.1222 0.1281 0.0841 113
    IRIT_In_expB2c1.0_1 Textual Automatic 0.0743 0.1111 0.1000 0.1075 0.0730 117
    UESTC_ac_p2 Textual Automatic 0.0722 0.1222 0.1056 0.1011 0.0628 128
    IRIT_CombSUMc1.0_2_ignore_low_idf Textual Automatic 0.0721 0.1333 0.0778 0.0868 0.0683 113
    UESTC_ac_p2QE Textual Automatic 0.0677 0.1000 0.1167 0.1011 0.0633 127
    UESTC_ac_okapi_fb Textual Automatic 0.0500 0.0778 0.0667 0.0703 0.0484 103
    IPL2011CaseBasedT1-C6-M0_2-RO_01-BM25F-AVG Textual Automatic 0.0463 0.0889 0.0889 0.0795 0.0588 100
    IPL2011CaseBasedT1-C6-M0_2-BM25F-AVG Textual Automatic 0.0461 0.0889 0.0833 0.0795 0.0588 100
    HES-SO-VS_CASE_BASED_CAPTIONS Textual Automatic 0.0437 0.1111 0.0833 0.0816 0.0540 90
    iti-lucene-baseline+expanded-concepts Textual Automatic 0.0264 0.0333 0.0222 0.0387 0.0252 94
    iti-lucene-baseline+expanded-concepts+cases Textual Automatic 0.0249 0.0333 0.0278 0.0364 0.0230 94
    iti-lucene-expanded-concepts Textual Automatic 0.0243 0.0333 0.0222 0.0387 0.0249 94
    IPL2011CaseBasedT1-C6-M0_2-BM25F-SUM Textual Automatic 0.0201 0.0333 0.0333 0.0296 0.0176 102
    IPL2011CaseBasedT1-C6-M0_2-RO_01-BM25F-SUM Textual Automatic 0.0201 0.0333 0.0333 0.0296 0.0174 102
    iti-essie-frames Textual Automatic 0.0174 0.0667 0.0500 0.0450 0.0333 61
    iti-lucene-frames Textual Automatic 0.0141 0.0667 0.0444 0.0324 0.0239 60
    Visual
    gift_visual Visual Automatic 0.0204 0.0444 0.0333 0.0336 0.0292 45
    bovw_visual_cb Visual Automatic 0.0164 0.0556 0.0333 0.0308 0.0267 38
    _visual_ib Visual Automatic 0.0150 0.0444 0.0333 0.0306 0.0228 55
    bovw_s2_visual_cb Visual Automatic 0.0082 0.0333 0.0222 0.0119 0.0113 35

    Contact

    Please contact Jayashree Kalpathy-Cramer or Henning Müller for any questions about this task.