
ImageCLEFcoral

Motivation

Description

The increasing use of structure-from-motion photogrammetry for modelling large-scale environments from action cameras attached to drones has driven the next generation of visualisation techniques that can be used in augmented and virtual reality headsets. It has also created a need for such models to be labelled, with objects such as people, buildings, vehicles and terrain all essential for machine learning techniques to identify automatically as areas of interest and to label appropriately. However, the complexity of the images makes it impossible for human annotators to assess their contents on a large scale.
Advances in automatically annotating images for complexity and benthic composition have been promising, and we are interested in automatically identifying and labelling areas of interest for monitoring coral reefs. Coral reefs are in danger of being lost within the next 30 years, and with them the ecosystems they support. This catastrophe would not only see the extinction of many marine species, but would also create a humanitarian crisis on a global scale for the billions of people who rely on reef services. By monitoring the changes in and composition of coral reefs we can help prioritise conservation efforts.

News

  • 20.05.2019: Processed results released
  • 20.03.2019: Test data released at crowdAI
  • 20.03.2019: Submission instructions updated
  • 20.03.2019: Training set annotation files updated to include the annotations of the following 5 images: 2018_0714_112605_052, 2018_0714_112540_049, 2018_0714_112535_035, 2018_0714_112532_049, 2018_0714_112502_0242
  • 04.02.2019: Development data released at crowdAI
  • 2.10.2017: Website goes live.

Preliminary Schedule

  • 05.11.2018: Registration opens for all ImageCLEF tasks (until 26.04.2019)
  • 04.02.2019 (was 31.01.2019): Development data release starts
  • 20.03.2019 (was 18.03.2019): Test data release starts
  • 19.05.2019 (was 01.05.2019): Deadline for submitting the participants' runs
  • 20.05.2019 (was 13.05.2019): Release of the processed results by the task organizers
  • 24.05.2019: Deadline for submission of working notes papers by the participants
  • 07.06.2019: Notification of acceptance of the working notes papers
  • 28.06.2019: Camera-ready working notes papers
  • 09-12.09.2019: CLEF 2019, Lugano, Switzerland

Coral reef image annotation and localisation task

This task requires the participants to label the images with types of benthic substrate together with their bounding box in the image. Each image is provided with possible class types. For each image, participants will produce a set of bounding boxes, predicting the benthic substrate for each bounding box in the image.

Coral reef image pixel-wise parsing task

This task requires the participants to segment and parse each coral reef image into different image regions associated with benthic substrate types. For each image, segmentation algorithms will produce a semantic segmentation mask, predicting the semantic category for each pixel in the image.

Data

The data for this task originates from a growing, large-scale collection of images taken from coral reefs around the world as part of a coral reef monitoring project with the Marine Technology Research Unit at the University of Essex.
Substrates of the same type can have very different morphologies, colour variation and patterns. Some of the images contain a white line (scientific measurement tape) that may occlude part of the entity. The quality of the images is variable: some are blurry, and some have poor colour balance. This is representative of the Marine Technology Research Unit dataset, and all images are useful for data analysis. The images contain annotations of the following 13 types of substrate: Hard Coral – Branching, Hard Coral – Submassive, Hard Coral – Boulder, Hard Coral – Encrusting, Hard Coral – Table, Hard Coral – Foliose, Hard Coral – Mushroom, Soft Coral, Soft Coral – Gorgonian, Sponge, Sponge – Barrel, Fire Coral – Millepora and Algae – Macro or Leaves.

The training set contains 240 images with 6670 annotated substrates. Two files are provided with ground truth annotations: one based on bounding boxes ("imageCLEFcoral2019_annotations_training_task_1") and a more detailed annotation based on bounding polygons ("imageCLEFcoral2019_annotations_training_task_2"). The test set contains 200 images.

Evaluation methodology

The evaluation will be carried out using the PASCAL-style metric of intersection over union (IoU): the area of intersection between the foreground in the output segmentation and the foreground in the ground-truth segmentation, divided by the area of their union.
The final results are presented both in terms of average performance over all images of all concepts, and in terms of per-concept performance over all images.

MAP_0.5 is the localised mean average precision (MAP) for each submitted method, counting a detection as successful when its IoU with the ground truth is >= 0.5.

R_0.5 is the localised mean recall for each submitted method, counting a detection as successful when its IoU with the ground truth is >= 0.5.

MAP_0 is the image annotation average for each method, counting a detection as successful if the concept is simply detected in the image, without any localisation.
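The IoU matching criterion behind MAP_0.5 and R_0.5 can be sketched as follows; this is a minimal illustration, not the organisers' evaluation code, and it assumes boxes given as (xmin, ymin, xmax, ymax) corner tuples with exclusive extents.

```python
def box_iou(a, b):
    """Intersection over union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

# For MAP_0.5 / R_0.5, a detection counts as a hit when box_iou(...) >= 0.5.
```

Two boxes that each cover 100 pixels and overlap in 50 of them have IoU 50 / 150 = 1/3, so they would not match at the 0.5 threshold.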


Accuracy per substrate The segmentation accuracy for a substrate will be assessed using the number of correctly labelled pixels of that substrate, divided by the number of pixels labelled with that class (in either the ground truth labelling or the inferred labelling).
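The per-substrate accuracy above can be sketched as follows, assuming flat sequences of per-pixel labels; the function name and data layout are illustrative, not the organisers' tooling.

```python
def substrate_accuracy(gt_pixels, pred_pixels, substrate):
    """Correctly labelled pixels of `substrate`, divided by the pixels
    labelled with it in either the ground truth or the prediction."""
    inter = sum(g == substrate and p == substrate for g, p in zip(gt_pixels, pred_pixels))
    union = sum(g == substrate or p == substrate for g, p in zip(gt_pixels, pred_pixels))
    return inter / union if union else 0.0
```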

Participant registration

Please refer to the general ImageCLEF registration instructions.

Submission instructions

Submissions will be received through the crowdAI system.
Participants will be permitted to submit up to 10 runs. External training data is allowed and encouraged.
Each system run will consist of a single ASCII plain text file. The results for each test set image should be given on separate lines in the text file. The format of the text file is as follows:
[image_ID/document_ID] [results]

Coral reef image annotation and localisation

The results of each test set image should be given in separate lines, each line providing only up to 500 localised substrates. The format has characters to separate the elements, semicolon ‘;’ for the substrates, colon ':' for the confidence, comma ',' to separate multiple bounding boxes, and 'x' and '+' for the size-offset bounding box format, i.e.:

[image_ID];[substrate1] [[confidence1,1]:][width1,1]x[height1,1]+[xmin1,1]+[ymin1,1],[[confidence1,2]:][width1,2]x[height1,2]+[xmin1,2]+[ymin1,2],...;[substrate2] ...

[confidence] values are floating point numbers between 0 and 1, where a higher value means a higher score.

For example, in the development set format (notice that there are 2 bounding boxes for substrate c_soft_coral):

  • 2018_0714_112604_057 0 c_hard_coral_branching 1 891 540 1757 1143
  • 2018_0714_112604_057 3 c_soft_coral 1 2724 1368 2825 1507
  • 2018_0714_112604_057 4 c_soft_coral 1 2622 1576 2777 1731

In the submission format, it would be a line as:

  • 2018_0714_112604_057;c_hard_coral_branching 0.6:867x604+891+540;c_soft_coral 0.7:102x140+2724+1368,0.3:156x156+2622+1576
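The conversion from the development-set corner format to the submission size-offset format can be sketched in Python. The function name and input layout are our own; note that, following the worked example above (e.g. 891..1757 becoming width 867), width and height are inclusive pixel counts, i.e. xmax - xmin + 1.

```python
def format_run_line(image_id, detections):
    """detections maps substrate -> list of (confidence, xmin, ymin, xmax, ymax).
    Returns one submission line in the [width]x[height]+[xmin]+[ymin] format."""
    parts = []
    for substrate, boxes in detections.items():
        # Comma-separate multiple boxes of the same substrate.
        encoded = ",".join(
            f"{conf}:{xmax - xmin + 1}x{ymax - ymin + 1}+{xmin}+{ymin}"
            for conf, xmin, ymin, xmax, ymax in boxes
        )
        parts.append(f"{substrate} {encoded}")
    # Semicolons separate the image ID and each substrate group.
    return image_id + ";" + ";".join(parts)
```

For example, the two c_soft_coral boxes from the development-set lines above would be encoded as `c_soft_coral 0.7:102x140+2724+1368,0.3:156x156+2622+1576`.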

Coral reef image pixel-wise parsing task

Similar to subtask 1, the results for each test set image should be given on separate lines, each line providing up to 500 localised substrates, with up to 500 coordinate localisations expected for the same substrate. The format has characters to separate the elements: semicolon ';' for the substrates, colon ':' for the confidence, comma ',' to separate multiple bounding polygons, and '+' to separate the coordinates of each bounding polygon, i.e.:

[image_ID];[substrate1] [[confidence1,1]:][x1,1]+[y1,1]+[x2,1]+[y2,1]+...+[xn,1]+[yn,1],[[confidence1,2]:][x1,2]+[y1,2]+[x2,2]+[y2,2]+...+[xn,2]+[yn,2];[substrate2] ...

[confidence] values are floating point numbers between 0 and 1, where a higher value means a higher score, and the [xi]+[yi] pairs represent consecutive vertices of the polygon.

For example, in the development set format (notice that there are 2 polygons for substrate c_soft_coral):

  • 2018_0714_112604_057 0 c_hard_coral_branching 1 1757 833 1645 705 1559 598 1442 540 1249 593 1121 679 1020 705 998 844 891 967 966 1122 1137 1143 1324 1122 1468 1074 1655 978
  • 2018_0714_112604_057 3 c_soft_coral 1 2804 1368 2745 1368 2724 1427 2729 1507 2809 1507 2825 1453
  • 2018_0714_112604_057 4 c_soft_coral 1 2697 1576 2638 1592 2638 1608 2622 1667 2654 1694 2713 1731 2777 1731 2777 1635

In the submission format, it would be a line as:

  • 2018_0714_112604_057;c_hard_coral_branching 0.6:1757+833+1645+705+1559+598+1442+540+1249+593+1121+679+1020+705+998+844+891+967+966+1122+1137+1143+1324+1122+1468+1074+1655+978;c_soft_coral 0.7:2804+1368+2745+1368+2724+1427+2729+1507+2809+1507+2825+1453,0.3:2697+1576+2638+1592+2638+1608+2622+1667+2654+1694+2713+1731+2777+1731+2777+1635
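A corresponding sketch for encoding polygon runs (again, the function name and input layout are illustrative, not official tooling):

```python
def format_polygon_line(image_id, regions):
    """regions maps substrate -> list of (confidence, [(x1, y1), (x2, y2), ...]).
    Returns one submission line with '+'-joined polygon vertices."""
    parts = []
    for substrate, polygons in regions.items():
        # Comma-separate multiple polygons of the same substrate.
        encoded = ",".join(
            f"{conf}:" + "+".join(f"{x}+{y}" for x, y in points)
            for conf, points in polygons
        )
        parts.append(f"{substrate} {encoded}")
    return image_id + ";" + ";".join(parts)
```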

Results


Subtask #1: Coral reef image annotation and localisation task

Run Group MAP_0.5 R_0.5 MAP_0
27417 HHUD 0.24266483 0.130912 0.48774594
27416 HHUD 0.229412 0.130726 0.50098039
27419 HHUD 0.219865 0.121601 0.44208754
27418 HHUD 0.209968 0.12160149 0.45466238
27349 VIT 0.139962 0.068156 0.43097514
27348 VIT 0.134396 0.072253 0.42396952
27115 VIT 0.084863 0.045624 0.42396952
27350 VIT 0.048321 0.028678 0.28710386
27347 VIT 0.040993 0.027374 0.27161182
27414 HHUD 0.002854 0.004283 0.22841191
27415 HHUD 0.002706 0.004469 0.29101364
27398 HHUD 0.002618 0.004283 0.27151639
27421 HHUD 0.0026 0.003724 0.20514821
27413 HHUD 0.002104 0.00298 0.20281468
27497 ISEC 0.000597 0.000559 0.000597
Accuracy per substrate
Run Group hard_coral_branching hard_coral_submassive hard_coral_boulder hard_coral_encrusting hard_coral_table hard_coral_foliose hard_coral_mushroom soft_coral soft_coral_gorgonian sponge sponge_barrel fire_coral_millepora algae_macro_or_leaves
27115 VIT 0.0436 0.0000 0.0809 0.0168 0.0000 0.0128 0.0664 0.0722 0.0000 0.0349 0.0526 0.0000 0.0000
27347 VIT 0.0456 0.0000 0.0374 0.0055 0.0000 0.0000 0.0204 0.0918 0.0000 0.0239 0.0498 0.0000 0.0000
27348 VIT 0.0548 0.0000 0.0956 0.0171 0.0000 0.0129 0.1190 0.0782 0.0000 0.0365 0.0579 0.0000 0.0000
27349 VIT 0.0637 0.0000 0.1012 0.0195 0.0000 0.0028 0.0758 0.0804 0.0000 0.0329 0.0619 0.0000 0.0004
27350 VIT 0.0597 0.0000 0.0305 0.0141 0.0000 0.0000 0.0422 0.0808 0.0000 0.0299 0.0598 0.0000 0.0000
27398 HHUD 0.0013 0.0000 0.0116 0.0000 0.0000 0.0000 0.0000 0.0702 0.0000 0.0020 0.0000 0.0000 0.0000
27413 HHUD 0.0068 0.0021 0.0063 0.0014 0.0000 0.0022 0.0000 0.0523 0.0000 0.0063 0.0000 0.0000 0.0000
27414 HHUD 0.0089 0.0000 0.0160 0.0015 0.0000 0.0000 0.0000 0.0562 0.0000 0.0104 0.0054 0.0000 0.0000
27415 HHUD 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0731 0.0000 0.0000 0.0000 0.0000 0.0000
27416 HHUD 0.0346 0.0000 0.0343 0.0064 0.0000 0.0000 0.0437 0.0550 0.0008 0.0094 0.0222 0.0000 0.0158
27417 HHUD 0.0356 0.0000 0.0330 0.0069 0.0000 0.0000 0.0406 0.0505 0.0008 0.0094 0.0213 0.0000 0.0157
27418 HHUD 0.0246 0.0000 0.0321 0.0038 0.0000 0.0000 0.0269 0.0447 0.0005 0.0086 0.0318 0.0000 0.0012
27419 HHUD 0.0261 0.0000 0.0315 0.0042 0.0000 0.0023 0.0228 0.0423 0.0005 0.0088 0.0304 0.0000 0.0012
27421 HHUD 0.0007 0.0000 0.0167 0.0048 0.0000 0.0006 0.0106 0.0571 0.0000 0.0072 0.0000 0.0000 0.0000
27497 ISEC 0.0198 0.0000 0.0007 0.0121 0.0000 0.0000 0.0000 0.0079 0.0000 0.0277 0.0000 0.0000 0.0000

Subtask #2: Coral reef image pixel-wise parsing task

Run Group MAP_0.5 R_0.5 MAP_0
27500 MTRU 0.041932398 0.048975791 0.239795918
27343 SOTON 0.000356 0.001489758 0.048377056
27324 SOTON 0 0 0.089948705
27212 SOTON 0 0 0.071183721
27505 HHUD 0 0 0
Accuracy per substrate
Run Group hard_coral_branching hard_coral_submassive hard_coral_boulder hard_coral_encrusting hard_coral_table hard_coral_foliose hard_coral_mushroom soft_coral soft_coral_gorgonian sponge sponge_barrel fire_coral_millepora algae_macro_or_leaves
27212 SOTON 0.0121 0.0047 0.0153 0.0188 0.0000 0.0042 0.0021 0.0851 0.0000 0.0000 0.0156 0.0000 0.0008
27324 SOTON 0.0262 0.0296 0.0183 0.0188 0.0138 0.0135 0.0025 0.0851 0.0254 0.0283 0.0135 0.0212 0.0167
27343 SOTON 0.0235 0.0000 0.0230 0.0124 0.0000 0.0012 0.0480 0.1145 0.0000 0.0108 0.0000 0.0038 0.0000
27500 MTRU 0.0958 0.0000 0.1659 0.0446 0.0000 0.0065 0.2190 0.1300 0.0186 0.0573 0.0889 0.0000 0.0007
27505 HHUD 0.0003 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0432 0.0000 0.0000 0.0000 0.0000 0.0000

CEUR Working Notes

  • All participating teams with at least one graded submission, regardless of the score, should submit a CEUR working notes paper.
  • The working notes paper should be submitted using this link:
    https://easychair.org/conferences/?conf=clef2019
    Click on "enter as an author", then select track "ImageCLEF - Multimedia Retrieval in CLEF".
    Add author information, paper title/abstract, keywords, select "Task 3 - ImageCLEFmedical" and upload your working notes paper as pdf.

Citations

When referring to the ImageCLEFcoral 2019 task general goals, general results, etc. please cite the following publication, which will be published by September 2019:

  • Jon Chamberlain, Antonio Campello, Jessica P. Wright, Louis G. Clift, Adrian Clark and Alba García Seco de Herrera. Overview of ImageCLEFcoral 2019 Task, CLEF working notes, CEUR, 2019.
  • BibTex:
    @Inproceedings{ImageCLEFcoraloverview2019,
    author = {Chamberlain, Jon and Campello, Antonio and Wright, Jessica P. and Clift, Louis G. and Clark, Adrian and Garc\'ia Seco de Herrera, Alba},
    title = {Overview of {ImageCLEFcoral} 2019 Task},
    booktitle = {CLEF2019 Working Notes},
    series = {{CEUR} Workshop Proceedings},
    year = {2019},
    volume = {},
publisher = {CEUR-WS.org},
    }
  • When referring to the ImageCLEF 2019 lab general goals, general results, etc. please cite the following publication which will be published by September 2019 (also referred to as ImageCLEF general overview):

  • Bogdan Ionescu, Henning Müller, Renaud Péteri, Yashin Dicente Cid, Vitali Liauchuk, Vassili Kovalev, Dzmitri Klimuk, Aleh Tarasau, Asma Ben Abacha, Sadid A. Hasan, Vivek Datla, Joey Liu, Dina Demner-Fushman, Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Minh-Triet Tran, Mathias Lux, Cathal Gurrin, Obioma Pelka, Christoph M. Friedrich, Alba García Seco de Herrera, Narciso Garcia, Ergina Kavallieratou, Carlos Roberto del Blanco, Carlos Cuevas Rodríguez, Nikos Vasillopoulos, Konstantinos Karampidis, Jon Chamberlain, Adrian Clark, Antonio Campello, ImageCLEF 2019: Multimedia Retrieval in Medicine, Lifelogging, Security and Nature In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the 10th International Conference of the CLEF Association (CLEF 2019), Lugano, Switzerland, LNCS Lecture Notes in Computer Science, Springer (September 9-12 2019)
  • BibTex:
    @inproceedings{ImageCLEF19,
      author = {Bogdan Ionescu and Henning M\"uller and Renaud P\'{e}teri and Yashin Dicente Cid and Vitali Liauchuk and Vassili Kovalev and Dzmitri Klimuk and Aleh Tarasau and Asma Ben Abacha and Sadid A. Hasan and Vivek Datla and Joey Liu and Dina Demner-Fushman and Duc-Tien Dang-Nguyen and Luca Piras and Michael Riegler and Minh-Triet Tran and Mathias Lux and Cathal Gurrin and Obioma Pelka and Christoph M. Friedrich and Alba Garc\'ia Seco de Herrera and Narciso Garcia and Ergina Kavallieratou and Carlos Roberto del Blanco and Carlos Cuevas Rodr\'{i}guez and Nikos Vasillopoulos and Konstantinos Karampidis and Jon Chamberlain and Adrian Clark and Antonio Campello},
      title = {{ImageCLEF 2019}: Multimedia Retrieval in Medicine, Lifelogging, Security and Nature},
      booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction},
      series = {Proceedings of the 10th International Conference of the CLEF Association (CLEF 2019)},
      year = {2019},
      volume = {},
      publisher = {{LNCS} Lecture Notes in Computer Science, Springer},
      pages = {},
      month = {September 9-12},
      address = {Lugano, Switzerland}
      }
Contact

    Join our mailing list: https://groups.google.com/d/forum/imageclefcoral
    Follow @imageclef