AMIA: Medical task

Changes in 2013!

For the first time, the ImageCLEF medical task will organize a workshop outside of Europe: the ImageCLEF meeting is planned as a workshop at the annual AMIA meeting. Posters are still welcome at the CLEF meeting in Valencia.

Citations

  • When referring to the general goals, general results, etc. of the ImageCLEFmed 2013 tasks, please cite the following publication:
    • Alba García Seco de Herrera, Jayashree Kalpathy-Cramer, Dina Demner-Fushman, Sameer Antani and Henning Müller, Overview of the ImageCLEF 2013 medical tasks, in: CLEF working notes 2013, Valencia, Spain, 2013
    • BibTeX:

      @InProceedings{GKD2013,
        Title = {Overview of the {ImageCLEF} 2013 medical tasks},
        Author = {Garc\'ia Seco de Herrera,Alba and Kalpathy--Cramer, Jayashree and Demner Fushman, Dina and Antani, Sameer and M\"uller, Henning},
        Booktitle = {Working Notes of {CLEF} 2013 (Cross Language Evaluation Forum)},
        Year = {2013},
        Month = {September},
        Location = {Valencia, Spain}

      }

  • When referring to the ImageCLEFmed task in general, please cite the following publication:
    • Jayashree Kalpathy-Cramer, Alba García Seco de Herrera, Dina Demner-Fushman, Sameer Antani, Steven Bedrick and Henning Müller, Evaluating Performance of Biomedical Image Retrieval Systems – an Overview of the Medical Image Retrieval task at ImageCLEF 2004-2014, in: Computerized Medical Imaging and Graphics, 2014
    • BibTeX:

      @Article{KGD2014,
        Title = {Evaluating Performance of Biomedical Image Retrieval Systems-- an Overview of the Medical Image Retrieval task at {ImageCLEF} 2004--2014},
        Author = {Kalpathy--Cramer, Jayashree and Garc\'ia Seco de Herrera, Alba and Demner--Fushman, Dina and Antani, Sameer and Bedrick, Steven and M\"uller, Henning},
        Journal = {Computerized Medical Imaging and Graphics},
        Year = {2014}

      }

News

  • 05.07.2013 Details of the special issue and AMIA workshop announced.
  • 26.02.2013 Test data for the compound figure separation and modality classification are available.
  • 25.02.2013 All participants in the medical task can also submit a paper to the CLEF working notes.
  • 15.02.2013 Training data for the compound figure separation are available.
  • 5.02.2013 A special issue will be organized in Computerized Medical Imaging and Graphics on ImageCLEF 2013.
  • 5.02.2013 Training data for the modality classification task has been released.
  • 1.02.2013 The database has been released, together with the topics for the image-based retrieval.
  • 15.01.2013 The venue of the medical task in 2013 has been decided: the annual AMIA meeting in the US; more details to follow.
  • 12.12.2012 Registration has opened for the medical tasks.

Schedule:

  • 12.12.2012: registration opens for all ImageCLEF tasks
  • 1.2.2013: data release
  • 15.2.2013: training data release for the modality classification and compound figure separation tasks
  • 15.2.2013: topic release for the retrieval tasks
  • 1.4.2013: submission of runs for the retrieval tasks
  • 15.4.2013: submission of runs for the modality classification and compound figure separation tasks
  • 1.5.2013: release of results
  • 16.11.2013: AMIA 2013 conference, Washington DC, USA
  • 30.11.2013: submission to the special issue on ImageCLEFmed

The medical retrieval task of ImageCLEF 2013 uses the same subset of PubMed Central, containing 305,000 images, that was used in 2012.
This task is a use case of the PROMISE network of excellence and is supported by the project.

There will be four types of tasks in 2013:

  • Modality Classification:
    Previous studies have shown that imaging modality is an important aspect of the image for medical retrieval. In user studies, clinicians have indicated that modality is one of the most important filters that they would like to be able to limit their search by. Many image retrieval websites (Goldminer, Yottalook) allow users to limit the search results to a particular modality. However, the modality is typically extracted from the caption and is often incorrect or missing. Studies have shown that the modality can be extracted from the image itself using visual features. Additionally, using modality classification, the search results can be improved significantly. In 2013, a larger number of compound figures will be present, making the task significantly harder but corresponding much more closely to the reality of the biomedical journals.
  • Compound figure separation:
    As up to 40% of the figures in PubMed Central are compound figures, a major step in making their content accessible is the detection of compound figures and their separation into subfigures that can subsequently be classified into modalities and made available for research.
    The task will make available training data with separation labels for the figures, and then a test data set for which the labels will be made available after the submission of the results.
  • Ad-hoc image-based retrieval:
    This is the classic medical retrieval task, similar to those organized in 2005-2012. Participants will be given a set of 30 textual queries with 2-3 sample images for each query. The queries will be classified into textual, mixed and semantic, based on the methods that are expected to yield the best results.
  • Case-based retrieval:
    This task was first introduced in 2009. It is a more complex task, but one that we believe is closer to the clinical workflow. In this task, a case description with patient demographics, limited symptoms and test results, including imaging studies, is provided (but not the final diagnosis). The goal is to retrieve cases including images that might best suit the provided case description. Unlike the ad-hoc task, the unit of retrieval here is a case, not an image. For the purposes of this task, a "case" is a PubMed ID corresponding to the journal article. In the result submissions the article DOI should be used, as several articles have neither PubMed IDs nor article URLs.

Modality classification

The following hierarchy will be used for the modality classification; it is different from the classes in ImageCLEF 2011 but the same as in 2012.

Class codes with descriptions (class codes need to be specified in run files):
([Class code] Description)

  • [COMP] Compound or multipane images (1 category)
  • [Dxxx] Diagnostic images:
    • [DRxx] Radiology (7 categories):
      • [DRUS] Ultrasound
      • [DRMR] Magnetic Resonance
      • [DRCT] Computerized Tomography
      • [DRXR] X-Ray, 2D Radiography
      • [DRAN] Angiography
      • [DRPE] PET
      • [DRCO] Combined modalities in one image
    • [DVxx] Visible light photography (3 categories):
      • [DVDM] Dermatology, skin
      • [DVEN] Endoscopy
      • [DVOR] Other organs
    • [DSxx] Printed signals, waves (3 categories):
      • [DSEE] Electroencephalography
      • [DSEC] Electrocardiography
      • [DSEM] Electromyography
    • [DMxx] Microscopy (4 categories):
      • [DMLI] Light microscopy
      • [DMEL] Electron microscopy
      • [DMTR] Transmission microscopy
      • [DMFL] Fluorescence microscopy
    • [D3DR] 3D reconstructions (1 category)
  • [Gxxx] Generic biomedical illustrations (12 categories):
    • [GTAB] Tables and forms
    • [GPLI] Program listing
    • [GFIG] Statistical figures, graphs, charts
    • [GSCR] Screenshots
    • [GFLO] Flowcharts
    • [GSYS] System overviews
    • [GGEN] Gene sequence
    • [GGEL] Chromatography, Gel
    • [GCHE] Chemical structure
    • [GMAT] Mathematics, formulae
    • [GNCP] Non-clinical photos
    • [GHDR] Hand-drawn sketches
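
For local checks of run files, the class codes above can be gathered into a small lookup table. Below is a minimal Python sketch; the variable and function names are ours and not part of any official tool.

# All 31 class codes of the 2013 modality hierarchy (illustrative helper,
# not an official resource).
VALID_CLASS_CODES = {
    "COMP",                                                    # compound or multipane images
    "DRUS", "DRMR", "DRCT", "DRXR", "DRAN", "DRPE", "DRCO",    # radiology
    "DVDM", "DVEN", "DVOR",                                    # visible light photography
    "DSEE", "DSEC", "DSEM",                                    # printed signals, waves
    "DMLI", "DMEL", "DMTR", "DMFL",                            # microscopy
    "D3DR",                                                    # 3D reconstructions
    "GTAB", "GPLI", "GFIG", "GSCR", "GFLO", "GSYS",            # generic biomedical
    "GGEN", "GGEL", "GCHE", "GMAT", "GNCP", "GHDR",            #   illustrations
}

def is_valid_class_code(code):
    # True if the code is one of the known modality classes
    return code in VALID_CLASS_CODES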

Data Download

Our database distribution includes an XML file and a compressed file containing the more than 300,000 images of the 75,000 articles of the biomedical open access literature.
The login/password for accessing the data is not your personal login/password for the registration system.
In the registration system under collections, details, you can find all information on accessing the data.

Topics

We will provide 30 ad-hoc topics, divided into visual, mixed and semantic topic types.
We will also provide 30 case-based topics, where the retrieval unit is a case, not an image.

Data Submission

Image-based and case-based retrieval

Please ensure that your submissions are compliant with the trec_eval format prior to submission. We will reject any runs that do not meet the required format.
Also, please note that each group is allowed a maximum of 10 runs for image-based and case-based topics each.
The qrels will be distributed among the participants, so that further runs can be evaluated by the participants for the working notes papers.
Do not hesitate to ask if you have questions regarding the trec_eval format.

At the time of submission, the following information about each run will be requested. Please let us know if you would like clarifications on how to classify your runs.

1. What was used for the retrieval: Image, text or mixed (both)
2. Was other training data used?
3. Run type: Automatic, Manual, Interactive
4. Query Language

trec_eval format

The format for submitting results is based on the trec_eval program (http://trec.nist.gov/trec_eval/) as follows:

1 1 27431 1 0.567162 OHSU_text_1
1 1 27982 2 0.441542 OHSU_text_1
.............
1 1 52112 1000 0.045022 OHSU_text_1
2 1 43458 1 0.9475 OHSU_text_1
.............
25 1 28937 995 0.01492 OHSU_text_1

where:

  • The first column contains the topic number.
  • The second column is always 1.
  • The third column is the image identifier (IRI) without the extension jpg and without any image path (or the full article DOI for the case-based topics).
  • The fourth column is the ranking for the topic (1-1000).
  • The fifth column is the score assigned by the system.
  • The sixth column is the identifier for the run and should be the same in the entire file.

Several key points for submitted runs are:

  • The topic numbers should be consecutive and complete.
  • Case-based and image-based topics have to be submitted in separate files.
  • The score should be in decreasing order (i.e. the image at the top of the list should have a higher score than images at the bottom of the list).
  • Up to (but not necessarily) 1000 images can be submitted for each topic.
  • Each topic must have at least one image.
  • Each run must be submitted in a single file. Files should be pure text files and not be zipped or otherwise compressed.
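
The following minimal Python sketch writes results in this six-column format; it assumes your own system has already produced, for each topic, a list of (identifier, score) pairs, and the function name is only illustrative, not an official tool.

# Write a run file in trec_eval format (illustrative sketch).
def write_trec_run(results, run_id, path):
    # results: dict mapping topic number -> list of (image_id_or_doi, score)
    with open(path, "w") as f:
        for topic in sorted(results):                               # consecutive topics
            ranked = sorted(results[topic], key=lambda r: r[1], reverse=True)
            for rank, (doc_id, score) in enumerate(ranked[:1000], start=1):
                # columns: topic, constant 1, identifier, rank, score, run id
                f.write(f"{topic} 1 {doc_id} {rank} {score:.6f} {run_id}\n")

# Example call:
# write_trec_run({1: [("27431", 0.567162), ("27982", 0.441542)]}, "OHSU_text_1", "run.txt")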

Modality classification

The format of the result submission for the modality classification subtask should be the following:

1471-2091-8-12-2 DRUS 0.9
1471-2091-8-29-7 GTAB 1
1471-2105-10-276-8 DMLI 0.4
1471-2105-10-379-3 D3DR 0.8
1471-2105-10-S1-S60-3 COMP 0.9
...

where:

  • The first column contains the Image-ID (IRI). This ID does not contain the file format ending and it should not represent a file path.
  • The second column is the classcode.
  • The third column represents the normalized score (between 0 and 1) that your system assigned to that specific result.

You should also respect the following constraints:

  • Each specified image must be part of the collection (dataset).
  • An image cannot be contained more than once.
  • At a minimum, all images of the test set must be contained in the run file; ideally, the whole dataset should be classified.
  • Only known classcodes are accepted.

Please note that each group is allowed a maximum of 10 runs.
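
A simple way to catch format problems before submitting is a small validation script. The sketch below checks the three columns, the score range and the uniqueness of images against a set of valid class codes and the list of test images; it is our own illustration, not an official checker.

# Validate a modality-classification run file (illustrative sketch).
def check_modality_run(path, valid_codes, test_image_ids):
    seen = set()
    with open(path) as f:
        for line_no, line in enumerate(f, start=1):
            parts = line.split()
            if not parts:
                continue                       # skip blank lines
            image_id, class_code, score = parts
            assert image_id not in seen, f"line {line_no}: image {image_id} listed twice"
            assert class_code in valid_codes, f"line {line_no}: unknown class code {class_code}"
            assert 0.0 <= float(score) <= 1.0, f"line {line_no}: score must be in [0, 1]"
            seen.add(image_id)
    missing = set(test_image_ids) - seen
    assert not missing, f"{len(missing)} test images are missing from the run file"

Passing a set such as VALID_CLASS_CODES from the sketch above as valid_codes makes the class-code check follow the hierarchy listed earlier.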

Compound figure separation

The format of the result submission for the compound figure separation subtask should be an XML file with the following structure:

<?xml version="1.0" encoding="UTF-8"?>
<annotations>
   <annotation>
      <filename>1423-0127-16-105-5.jpg</filename>
      <object>
         <point x="0" y="1" />
         <point x="299" y="1" />
         <point x="0" y="242" />
         <point x="299" y="242" />
      </object>
      <object>
         <point x="300" y="1" />
         <point x="600" y="1" />
         <point x="300" y="242" />
         <point x="600" y="242" />
      </object>
      ...
   </annotation>
   <annotation>
      <filename>1423-0127-16-22-1.jpg</filename>
      ...
   </annotation>
   ...
</annotations>

where:

  • The root element is <annotations>.
  • The root contains one <annotation> element per image. Each one of these elements must contain:
    • A <filename> element with the name of the compound image (including the file extension)
    • One or more <object> elements that define the bounding box of each subfigure in the image. Each <object> must contain:
      • 4 <point> elements that define the 4 corners of the bounding box. The <point> elements must have two attributes (x and y), which correspond to the horizontal and vertical pixel position, respectively. The preferred order of the points is:
        1. top-left
        2. top-right
        3. bottom-left
        4. bottom-right

You should also respect the following constraints:

  • Each specified image must be part of the collection (dataset).
  • An image cannot appear more than once in a single XML results file.
  • All the images of the testset must be contained in the runfile.
  • The resulting XML file MUST validate against the XSD schema that will be provided.
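
The XML above can be produced with any XML library. The following Python sketch uses xml.etree.ElementTree and assumes the subfigure bounding boxes are available as (x_min, y_min, x_max, y_max) tuples; the function and variable names are ours for illustration only.

# Write a compound-figure-separation run file (illustrative sketch).
import xml.etree.ElementTree as ET

def write_separation_run(separations, path):
    # separations: dict mapping compound-image file name -> list of
    # bounding boxes given as (x_min, y_min, x_max, y_max)
    root = ET.Element("annotations")
    for filename, boxes in separations.items():
        annotation = ET.SubElement(root, "annotation")
        ET.SubElement(annotation, "filename").text = filename
        for (x_min, y_min, x_max, y_max) in boxes:
            obj = ET.SubElement(annotation, "object")
            # preferred point order: top-left, top-right, bottom-left, bottom-right
            for x, y in [(x_min, y_min), (x_max, y_min), (x_min, y_max), (x_max, y_max)]:
                ET.SubElement(obj, "point", x=str(x), y=str(y))
    ET.ElementTree(root).write(path, encoding="UTF-8", xml_declaration=True)

Remember to validate the resulting file against the XSD schema once it is provided.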

Evaluation

Compound figure separation

This section provides an overview of the evaluation method used for the compound figure separation subtask. Consider a compound figure that is separated into 3 subfigures in the ground truth.
The basic idea of the evaluation for a figure is the following:
  • For each subfigure in the ground truth, the evaluation application will look for the best matching subfigure in the submitted run data (called the "candidate" from now on).
  • Each subfigure is worth 1 point. If all the subfigures in the ground truth are successfully mapped to one corresponding subfigure in the candidate (and no extra subfigures are present), the result will be 3 points, which, normalized by the number of subfigures, yields a score of 1.0.
In a perfect scoring candidate, there is exactly one candidate subfigure for every subfigure defined in the ground truth. This leads us to the next set of rules:
  • The separators don't always match the real size of the subfigures (there may be blank space around them). Therefore, the main metric used for evaluation is the overlap between a candidate subfigure and the ground truth. The overlap ratio is measured with respect to the candidate.
  • To be considered a valid match, the overlap between a candidate subfigure and a subfigure from the ground truth must be at least 66%.
  • Only one candidate subfigure can be assigned to each of the subfigures from the ground truth. The subfigure with the biggest overlap will be considered in case of multiple possibilities.
For example, a candidate may identify only one of the 3 subfigures correctly: a candidate subfigure that does not have an overlap of >=66% with any of the remaining ground-truth subfigures stays an "orphan" and earns no point.

A last rule covers the case where extra subfigures are detected in the candidate:
  • The maximum score is modified to correspond to the number of candidate subfigures. The normalization factor used to compute the score will be the maximum between the number of subfigures in the ground truth and the number of candidate subfigures.
  • If, for instance, all 3 subfigures from the ground truth are correctly matched but two superfluous subfigures are present, the score is 3/5.
Finally, having a single candidate subfigure spanning the whole image will often result in a score of 0.0, since the subfigure won't be contained to >=66% in any of the ground truth subfigures.
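
To make the rules concrete, here is a short Python sketch of the scoring described above; it reflects our reading of the rules (greedy matching of each ground-truth subfigure to its best candidate) and is not the organizers' evaluation code. Boxes are (x_min, y_min, x_max, y_max) tuples.

# Score one compound figure (illustrative sketch of the rules above).
def overlap_ratio(candidate, truth):
    # fraction of the candidate box covered by the ground-truth box
    ix = max(0, min(candidate[2], truth[2]) - max(candidate[0], truth[0]))
    iy = max(0, min(candidate[3], truth[3]) - max(candidate[1], truth[1]))
    cand_area = (candidate[2] - candidate[0]) * (candidate[3] - candidate[1])
    return (ix * iy) / cand_area if cand_area > 0 else 0.0

def figure_score(candidates, ground_truth, threshold=0.66):
    matched, used = 0, set()
    for truth in ground_truth:
        # pick the unused candidate with the largest overlap, if it reaches the threshold
        best, best_ratio = None, threshold
        for i, cand in enumerate(candidates):
            if i in used:
                continue
            ratio = overlap_ratio(cand, truth)
            if ratio >= best_ratio:
                best, best_ratio = i, ratio
        if best is not None:
            used.add(best)
            matched += 1
    # extra candidate subfigures lower the score through the normalization factor
    return matched / max(len(ground_truth), len(candidates))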

Results

Modality classification

Run | Group name | Run type | Correctly classified in %

Mixed

IBM_modality_run8 | IBM | Automatic | 81.68
results_mixed_finki_run3 | FINKI | Automatic | 78.04
All | Center for Informatics and Information Technologies | Automatic | 72.92
IBM_modality_run9 | IBM | Automatic | 69.82
medgift2013_mc_mixed_k8 | medGIFT | Automatic | 69.63
medgift2013_mc_mixed_sem_k8 | medGIFT | Automatic | 69.63
nlm_mixed_using_2013_visual_classification_2 | ITI | Automatic | 69.28
nlm_mixed_using_2013_visual_classification_1 | ITI | Automatic | 68.74
nlm_mixed_hierarchy | ITI | Automatic | 67.31
nlm_mixed_using_2012_visual_classification | ITI | Automatic | 67.07
DEMIR_MC_5 | DEMIR | Automatic | 64.60
DEMIR_MC_3 | DEMIR | Automatic | 64.48
DEMIR_MC_6 | DEMIR | Automatic | 64.09
DEMIR_MC_4 | DEMIR | Automatic | 63.67
medgift2013_mc_mixed_exp_sep_sem_k21 | medGIFT | Automatic | 62.27
IPL13_mod_cl_mixed_r2 | IPL | Automatic | 61.03
IBM_modality_run10 | IBM | Automatic | 60.34
IPL13_mod_cl_mixed_r3 | IPL | Automatic | 58.98
medgift2013_mc_mixed_exp_k21 | medGIFT | Automatic | 47.83
medgift2013_mc_mixed_exp_sem_k21 | medGIFT | Automatic | 47.83
All_NoComb | Center for Informatics and Information Technologies | Automatic | 44.61
IPL13_mod_cl_mixed_r1 | IPL | Automatic | 09.56

Textual

IBM_modality_run1 | IBM | Automatic | 64.17
results_text_finki_run2 | FINKI | Automatic | 63.71
DEMIR_MC_1 | DEMIR | Automatic | 62.70
DEMIR_MC_2 | DEMIR | Automatic | 62.70
words | Center for Informatics and Information Technologies | Automatic | 62.35
medgift2013_mc_text_k8 | medGIFT | Automatic | 62.04
nlm_textual_only_flat | ITI | Automatic | 51.23
IBM_modality_run2 | IBM | Automatic | 39.07
words_noComb | Center for Informatics and Information Technologies | Automatic | 32.80
IPL13_mod_cl_textual_r1 | IPL | Automatic | 09.02

Visual

IBM_modality_run4 | IBM | Automatic | 80.79
IBM_modality_run5 | IBM | Automatic | 80.01
IBM_modality_run6 | IBM | Automatic | 79.82
IBM_modality_run7 | IBM | Automatic | 78.89
results_visual_finki_run1 | FINKI | Automatic | 77.14
results_visual_compound_finki_run4 | FINKI | Automatic | 76.29
IBM_modality_run3 | IBM | Automatic | 75.94
sari_modality_baseline | MIILab | Automatic | 66.46
sari_modality_CCTBB_DRxxDict | MIILab | Automatic | 65.60
medgift2013_mc_5f | medGIFT | Automatic | 63.78
nlm_visual_only_hierarchy | ITI | Automatic | 61.50
medgift2013_mc_5f_exp_separate_k21 | medGIFT | Automatic | 61.03
medgift2013_mc_5f_separate | medGIFT | Automatic | 59.25
CEDD_FCTH | Center for Informatics and Information Technologies | Automatic | 57.62
IPL13_mod_cl_visual_r2 | IPL | Automatic | 52.05
medgift2013_mc_5f_exp_k8 | medGIFT | Automatic | 45.42
IPL13_mod_cl_visual_r3 | IPL | Automatic | 43.33
CEDD_FCTH_NoComb | Center for Informatics and Information Technologies | Automatic | 32.49
IPL13_mod_cl_visual_r1 | IPL | Automatic | 06.19

Compound figure separation

Run | Group name | Run type | Correctly classified in %
ImageCLEF2013_CompoundFigureSeparation_HESSO_CFS | medGIFT | Visual | 84.64
nlm_multipanel_separation | ITI | Mixed | 69.27
fcse-final-noempty | FINKI | | 68.59
ImageCLEF2013_CompoundFigureSeparation_HESSO_REGIONDETECTOR_SCALE50_STANDARD | medGIFT | Visual | 46.82

Ad-hoc image-based retrieval

Runid | Retrieval type | MAP | GM-MAP | bpref | P10 | P30
nlm-se-image-based-mixed | Mixed | 0.3196 | 0.1018 | 0.2983 | 0.3886 | 0.2686
Txt_Img_Wighted_Merge | Mixed | 0.3124 | 0.0971 | 0.3014 | 0.3886 | 0.279
Merge_RankToScore_weighted | Mixed | 0.312 | 0.1001 | 0.295 | 0.3771 | 0.2686
Txt_Img_Wighted_Merge | Mixed | 0.3086 | 0.0942 | 0.2938 | 0.3857 | 0.259
Merge_RankToScore_weighted | Mixed | 0.3032 | 0.0989 | 0.2872 | 0.3943 | 0.2705
medgift_mixed_rerank_close | Mixed | 0.2465 | 0.0567 | 0.2497 | 0.3229 | 0.2524
medgift_mixed_rerank_nofilter | Mixed | 0.2375 | 0.0539 | 0.2307 | 0.2886 | 0.2238
medgift_mixed_weighted_nofilter | Mixed | 0.2309 | 0.0567 | 0.2197 | 0.28 | 0.2181
medgift_mixed_rerank_prefix | Mixed | 0.2271 | 0.047 | 0.2289 | 0.2886 | 0.2362
DEMIR3 | Mixed | 0.2168 | 0.0345 | 0.2255 | 0.3143 | 0.1914
DEMIR10 | Mixed | 0.1583 | 0.0292 | 0.1775 | 0.2771 | 0.1867
DEMIR7 | Mixed | 0.0225 | 0.0003 | 0.0355 | 0.0543 | 0.0543
nlm-se-image-based-textual | Textual | 0.3196 | 0.1018 | 0.2982 | 0.3886 | 0.2686
IPL13_textual_r6 | Textual | 0.2542 | 0.0422 | 0.2479 | 0.3314 | 0.2333
BM25b1.1 | Textual | 0.2507 | 0.0443 | 0.2497 | 0.32 | 0.2238
finki | Textual | 0.2479 | 0.0515 | 0.2336 | 0.3057 | 0.2181
medgift_text_close | Textual | 0.2478 | 0.0587 | 0.2513 | 0.3114 | 0.241
finki | Textual | 0.2464 | 0.0508 | 0.2338 | 0.3114 | 0.22
BM25b1.1 | Textual | 0.2435 | 0.043 | 0.2424 | 0.3314 | 0.2248
BM25b1.1 | Textual | 0.2435 | 0.043 | 0.2424 | 0.3314 | 0.2248
IPL13_textual_r4 | Textual | 0.24 | 0.0607 | 0.2373 | 0.2857 | 0.2143
IPL13_textual_r1 | Textual | 0.2355 | 0.0583 | 0.2307 | 0.2771 | 0.2095
IPL13_textual_r8 | Textual | 0.2355 | 0.0579 | 0.2358 | 0.28 | 0.2171
IPL13_textual_r8b | Textual | 0.2355 | 0.0579 | 0.2358 | 0.28 | 0.2171
IPL13_textual_r3 | Textual | 0.2354 | 0.0604 | 0.2294 | 0.2771 | 0.2124
IPL13_textual_r2 | Textual | 0.235 | 0.0583 | 0.229 | 0.2771 | 0.2105
FCT_SOLR_BM25L_MSH | Textual | 0.2305 | 0.0482 | 0.2316 | 0.2971 | 0.2181
medgift_text_nofilter | Textual | 0.2281 | 0.053 | 0.2269 | 0.2857 | 0.2133
IPL13_textual_r5 | Textual | 0.2266 | 0.0431 | 0.2285 | 0.2743 | 0.2086
medgift_text_prefix | Textual | 0.2226 | 0.047 | 0.2235 | 0.2943 | 0.2305
FCT_SOLR_BM25L | Textual | 0.22 | 0.0476 | 0.228 | 0.2657 | 0.2114
DEMIR9 | Textual | 0.2003 | 0.0352 | 0.2158 | 0.2943 | 0.1952
DEMIR1 | Textual | 0.1951 | 0.0289 | 0.2036 | 0.2714 | 0.1895
DEMIR6 | Textual | 0.1951 | 0.0289 | 0.2036 | 0.2714 | 0.1895
SNUMedinfo11 | Textual | 0.18 | 0.0266 | 0.1866 | 0.2657 | 0.1895
DEMIR8 | Textual | 0.1578 | 0.0267 | 0.1712 | 0.2714 | 0.1733
finki | Textual | 0.1456 | 0.0244 | 0.148 | 0.2 | 0.1286
IBM_image_run_1 | Textual | 0.0848 | 0.0072 | 0.0876 | 0.1514 | 0.1038
DEMIR4 | Visual | 0.0185 | 0.0005 | 0.0361 | 0.0629 | 0.0581
medgift_visual_nofilter | Visual | 0.0133 | 0.0004 | 0.0256 | 0.0571 | 0.0448
medgift_visual_close | Visual | 0.0132 | 0.0004 | 0.0256 | 0.0543 | 0.0438
medgift_visual_prefix | Visual | 0.0129 | 0.0004 | 0.0253 | 0.06 | 0.0467
IPL13_visual_r6 | Visual | 0.0119 | 0.0003 | 0.0229 | 0.0371 | 0.0286
image_latefusion_merge | Visual | 0.011 | 0.0003 | 0.0207 | 0.0257 | 0.0314
DEMIR5 | Visual | 0.011 | 0.0004 | 0.0257 | 0.04 | 0.0448
image_latefusion_merge_filter | Visual | 0.0101 | 0.0003 | 0.0244 | 0.0343 | 0.0324
latefusuon_accuracy_merge | Visual | 0.0092 | 0.0003 | 0.0179 | 0.0314 | 0.0286
IPL13_visual_r3 | Visual | 0.0087 | 0.0003 | 0.0173 | 0.0286 | 0.0257
sari_SURFContext_HI_baseline | Visual | 0.0086 | 0.0003 | 0.0181 | 0.0429 | 0.0352
IPL13_visual_r8 | Visual | 0.0086 | 0.0003 | 0.0173 | 0.0286 | 0.0257
IPL13_visual_r5 | Visual | 0.0085 | 0.0003 | 0.0178 | 0.0314 | 0.0257
IPL13_visual_r1 | Visual | 0.0083 | 0.0002 | 0.0176 | 0.0314 | 0.0257
IPL13_visual_r4 | Visual | 0.0081 | 0.0002 | 0.0182 | 0.04 | 0.0305
IPL13_visual_r7 | Visual | 0.0079 | 0.0003 | 0.0175 | 0.0257 | 0.0267
FCT_SEGHIST_6x6_LBP | Visual | 0.0072 | 0.0001 | 0.0151 | 0.0343 | 0.0267
IPL13_visual_r2 | Visual | 0.0071 | 0.0001 | 0.0162 | 0.0257 | 0.0257
IBM_image_run_min_min | Visual | 0.0062 | 0.0002 | 0.016 | 0.0286 | 0.0267
DEMIR2 | Visual | 0.0044 | 0.0002 | 0.0152 | 0.0229 | 0.0229
SNUMedinfo13 | Visual | 0.0043 | 0.0002 | 0.0126 | 0.0229 | 0.0181
SNUMedinfo12 | Visual | 0.0033 | 0.0001 | 0.0153 | 0.0257 | 0.0219
IBM_image_run_Mnozero17 | Visual | 0.003 | 0.0001 | 0.0089 | 0.02 | 0.0105
SNUMedinfo14 | Visual | 0.0023 | 0.0002 | 0.009 | 0.0171 | 0.0124
SNUMedinfo15 | Visual | 0.0019 | 0.0002 | 0.0074 | 0.0086 | 0.0114
IBM_image_run_Mavg7 | Visual | 0.0015 | 0.0001 | 0.0082 | 0.0171 | 0.0114
IBM_image_run_Mnozero11 | Visual | 0.0008 | 0 | 0.0045 | 0.0057 | 0.0095
nlm-se-image-based-visual | Visual | 0.0002 | 0 | 0.0021 | 0.0029 | 0.001

Case-based retrieval

Runid | Retrieval type | MAP | GM-MAP | bpref | P10 | P30
FCT_CB_MM_rComb | Mixed | 0.1608 | 0.0779 | 0.1426 | 0.18 | 0.1257
medgift_mixed_nofilter_casebased | Mixed | 0.1467 | 0.0883 | 0.1318 | 0.1971 | 0.1457
nlm-se-case-based-mixed | Mixed | 0.0886 | 0.0303 | 0.0926 | 0.1457 | 0.0962
FCT_CB_MM_MNZ | Mixed | 0.0794 | 0.0035 | 0.085 | 0.1371 | 0.081
SNUMedinfo9 | Textual | 0.2429 | 0.1163 | 0.2417 | 0.2657 | 0.1981
SNUMedinfo8 | Textual | 0.2389 | 0.1279 | 0.2323 | 0.2686 | 0.1933
SNUMedinfo5 | Textual | 0.2388 | 0.1266 | 0.2259 | 0.2543 | 0.1857
SNUMedinfo6 | Textual | 0.2374 | 0.1112 | 0.2304 | 0.2486 | 0.1933
FCT_LUCENE_BM25L_MSH_PRF | Textual | 0.2233 | 0.1177 | 0.2044 | 0.26 | 0.18
SNUMedinfo4 | Textual | 0.2228 | 0.1281 | 0.2175 | 0.2343 | 0.1743
SNUMedinfo1 | Textual | 0.221 | 0.1208 | 0.1952 | 0.2343 | 0.1619
SNUMedinfo2 | Textual | 0.2197 | 0.0996 | 0.1861 | 0.2257 | 0.1486
SNUMedinfo7 | Textual | 0.2172 | 0.1266 | 0.2116 | 0.2486 | 0.1771
FCT_LUCENE_BM25L_PRF | Textual | 0.1992 | 0.0964 | 0.1874 | 0.2343 | 0.1781
SNUMedinfo10 | Textual | 0.1827 | 0.1146 | 0.1749 | 0.2143 | 0.1581
HES-SO-VS_FULLTEXT_LUCENE | Textual | 0.1791 | 0.1107 | 0.163 | 0.2143 | 0.1581
SNUMedinfo3 | Textual | 0.1751 | 0.0606 | 0.1572 | 0.2114 | 0.1286
ITEC_FULLTEXT | Textual | 0.1689 | 0.0734 | 0.1731 | 0.2229 | 0.1552
ITEC_FULLPLUS | Textual | 0.1688 | 0.074 | 0.172 | 0.2171 | 0.1552
ITEC_FULLPLUSMESH | Textual | 0.1663 | 0.0747 | 0.1634 | 0.22 | 0.1667
ITEC_MESHEXPAND | Textual | 0.1581 | 0.071 | 0.1635 | 0.2229 | 0.1686
IBM_run_1 | Textual | 0.1573 | 0.0296 | 0.1596 | 0.1571 | 0.1057
IBM_run_3 | Textual | 0.1573 | 0.0371 | 0.139 | 0.1943 | 0.1276
IBM_run_3 | Textual | 0.1482 | 0.0254 | 0.1469 | 0.2 | 0.141
IBM_run_2 | Textual | 0.1476 | 0.0308 | 0.1363 | 0.2086 | 0.1295
IBM_run_1 | Textual | 0.1403 | 0.0216 | 0.138 | 0.1829 | 0.1238
IBM_run_2 | Textual | 0.1306 | 0.0153 | 0.134 | 0.2 | 0.1276
nlm-se-case-based-textual | Textual | 0.0885 | 0.0303 | 0.0926 | 0.1457 | 0.0962
DirichletLM_mu2500.0_Bo1bfree_d_3_t_10 | Textual | 0.0632 | 0.013 | 0.0648 | 0.0857 | 0.0676
DirichletLM_mu2500.0_Bo1bfree_d_3_t_10 | Textual | 0.0632 | 0.013 | 0.0648 | 0.0857 | 0.0676
finki | Textual | 0.0448 | 0.0115 | 0.0478 | 0.0714 | 0.0629
finki | Textual | 0.0448 | 0.0115 | 0.0478 | 0.0714 | 0.0629
DirichletLM_mu2500.0 | Textual | 0.0438 | 0.0112 | 0.056 | 0.0829 | 0.0581
DirichletLM_mu2500.0 | Textual | 0.0438 | 0.0112 | 0.056 | 0.0829 | 0.0581
finki | Textual | 0.0376 | 0.0105 | 0.0504 | 0.0771 | 0.0562
finki | Textual | 0.0376 | 0.0105 | 0.0504 | 0.0771 | 0.0562
BM25b25.0 | Textual | 0.0049 | 0.0005 | 0.0076 | 0.0143 | 0.0105
BM25b25.0 | Textual | 0.0049 | 0.0005 | 0.0076 | 0.0143 | 0.0105
BM25b25.0_Bo1bfree_d_3_t_10 | Textual | 0.0048 | 0.0005 | 0.0071 | 0.0143 | 0.0105
BM25b25.0_Bo1bfree_d_3_t_10 | Textual | 0.0048 | 0.0005 | 0.0071 | 0.0143 | 0.0105
FCT_SEGHIST_6x6_LBP | Visual | 0.0281 | 0.0009 | 0.0335 | 0.0429 | 0.0238
medgift_visual_nofilter_casebased | Visual | 0.0029 | 0.0001 | 0.0035 | 0.0086 | 0.0067
medgift_visual_close_casebased | Visual | 0.0029 | 0.0001 | 0.0036 | 0.0086 | 0.0076
medgift_visual_prefix_casebased | Visual | 0.0029 | 0.0001 | 0.0036 | 0.0086 | 0.0067
nlm-se-case-based-visual | Visual | 0.0008 | 0.0001 | 0.0044 | 0.0057 | 0.0057

Organizers

  • Henning Müller, HES-SO, Switzerland
  • Jayashree Kalpathy-Cramer, Harvard University, USA
  • Dina Demner-Fushman, National Library of Medicine, USA
  • Sameer Antani, National Library of Medicine, USA
  • Alba García Seco de Herrera, HES-SO, Switzerland