Results of the ImageCLEF 2008 Medical Image Annotation Task
Run                                      Error score
runs/idiap-LOW_MULT_2MARG.run 74.92
runs/idiap-LOW_MULT.run 83.45
runs/idiap-LOW_2MARG.run 83.79
runs/idiap-MCK_MULT_2MARG.run 85.91
runs/idiap-LOW_lbp_siftnew.run 93.20
runs/idiap-SIFTnew.run 100.27
runs/TAU-BIOMED-svm_full.run 105.75
runs/TAU-BIOMED-svm_prob.run 105.86
runs/TAU-BIOMED-svm_vote.run 109.37
runs/TAU-BIOMED-svm_small.run 117.17
runs/idiap-LBP.run 128.58
runs/rwth_mi-baseline.run 182.77
runs/MIRACLE-MIRACLE-3I-0F.run 187.90
runs/MIRACLE-MIRACLE-2I-0F.run 190.38
runs/MIRACLE-MIRACLE-2I-2F.run 190.38
runs/MIRACLE-MIRACLE-3I-2F.run 194.26
runs/GE-GIFT0.9_0.5_vcad_5.run 210.93
runs/GE-GIFT0.9_0.5_vca_5.run 217.34
runs/idiap-MCK_pix_sift_2MARG.run 227.82
runs/GE-GIFT0.9_akNN_2.run 241.11
runs/GE-GIFT0.9_kNN_2.run 251.97
runs/FEIT-1.run 286.48
runs/FEIT-2.run 290.50
runs/idiap-MCK_pix_sift.run 313.01

The correct classification of the 1000 test images is available here; additional runs can be evaluated with the evaluation tool, which is available from the task website.
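For a quick sanity check before using the official tool, a run file can be compared against the reference labels with a few lines of Python. This is only a sketch under assumptions: it assumes each line of a run or reference file holds an image identifier followed by its code, separated by whitespace, and it counts plain exact mismatches rather than computing the official hierarchical error score.

```python
def load_labels(path):
    """Read 'image_id code' lines into a dict (assumed file layout)."""
    labels = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                labels[parts[0]] = parts[1]
    return labels

def exact_error_count(run_path, reference_path):
    """Count test images whose predicted code differs from the reference.

    Note: the official evaluation weighs partial, hierarchical errors;
    this simplified count treats every mismatch as a full error.
    """
    ref = load_labels(reference_path)
    run = load_labels(run_path)
    return sum(1 for img, code in ref.items() if run.get(img) != code)
```

Because missing predictions fall back to `run.get(img)` returning `None`, an image absent from the run file is also counted as an error.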

If you would like to add information about the method used, or a link to your group's website, to your runs, please feel free to contact me and I will add it to this page.

Thomas Deselaers

Attachment: test1000.list (22.57 KB)