ImageCLEFcoral

Motivation

Description

The increasing use of structure-from-motion photogrammetry for modelling large-scale environments from action cameras attached to drones has driven the next generation of visualisation techniques that can be used in augmented and virtual reality headsets. It has also created a need to have such models labelled, with objects such as people, buildings, vehicles and terrain essential for machine learning techniques to identify automatically as areas of interest and to label appropriately. However, the complexity of the images makes it impossible for human annotators to assess the contents of images on a large scale.
Advances in automatically annotating images for complexity and benthic composition have been promising, and we are interested in automatically identifying areas of interest and labelling them appropriately for monitoring coral reefs. Coral reefs are in danger of being lost within the next 30 years, and with them the ecosystems they support. This catastrophe would not only see the extinction of many marine species, but also create a humanitarian crisis on a global scale for the billions of humans who rely on reef services. By monitoring the changes and composition of coral reefs we can help prioritise conservation efforts.

News

  • 21.06.2020: Results are revised and updated
  • 20.05.2019: Processed results released
  • 20.03.2019: Test data released at crowdAI
  • 20.03.2019: Submission instructions updated
  • 20.03.2019: Training set annotation files updated to include the annotation of the following 5 images: 2018_0714_112605_052, 2018_0714_112540_049, 2018_0714_112535_035, 2018_0714_112532_049, 2018_0714_112502_0242
  • 4.02.2019: Development data released at crowdAI
  • 2.10.2017: Website goes live.

Preliminary Schedule

  • 05.11.2018: Registration opens for all ImageCLEF tasks (until 26.04.2019)
  • 04.02.2019: Development data release starts (rescheduled from 31.01.2019)
  • 20.03.2019: Test data release starts (rescheduled from 18.03.2019)
  • 19.05.2019: Deadline for submitting the participant runs (extended from 01.05.2019)
  • 20.05.2019: Release of the processed results by the task organizers (rescheduled from 13.05.2019)
  • 24.05.2019: Deadline for submission of working notes papers by the participants
  • 07.06.2019: Notification of acceptance of the working notes papers
  • 28.06.2019: Camera-ready working notes papers
  • 09-12.09.2019: CLEF 2019, Lugano, Switzerland

Coral reef image annotation and localisation task

This task requires participants to label the images with types of benthic substrate, together with a bounding box for each substrate in the image. Each image is provided with possible class types. For each image, participants will produce a set of bounding boxes, predicting the benthic substrate for each bounding box in the image.

Coral reef image pixel-wise parsing task

This task requires the participants to segment and parse each coral reef image into different image regions associated with benthic substrate types. For each image, segmentation algorithms will produce a semantic segmentation mask, predicting the semantic category for each pixel in the image.

Data

The data for this task originates from a growing, large-scale collection of images taken from coral reefs around the world as part of a coral reef monitoring project with the Marine Technology Research Unit at the University of Essex.
Substrates of the same type can have very different morphologies, colour variation and patterns. Some of the images contain a white line (scientific measurement tape) that may occlude part of the entity. The quality of the images is variable: some are blurry, and some have poor colour balance. This is representative of the Marine Technology Research Unit dataset, and all images are useful for data analysis. The images contain annotations of the following 13 types of substrate: Hard Coral – Branching, Hard Coral – Submassive, Hard Coral – Boulder, Hard Coral – Encrusting, Hard Coral – Table, Hard Coral – Foliose, Hard Coral – Mushroom, Soft Coral, Soft Coral – Gorgonian, Sponge, Sponge – Barrel, Fire Coral – Millepora and Algae – Macro or Leaves.

The training set contains 240 images with 6,670 annotated substrates. Two files are provided with ground-truth annotations: one based on bounding boxes, "imageCLEFcoral2019_annotations_training_task_1", and a more detailed annotation based on bounding polygons, "imageCLEFcoral2019_annotations_training_task_2". The test set contains 200 images.

Evaluation methodology

The evaluation will be carried out using the PASCAL-style metric of intersection over union (IoU): the area of intersection between the foreground in the output segmentation and the foreground in the ground-truth segmentation, divided by the area of their union.
The final results are presented both in terms of average performance over all images and all concepts, and in terms of per-concept performance over all images.
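As a concrete illustration of the metric for the bounding-box subtask, below is a minimal sketch of an IoU computation in Python. The function name and the (xmin, ymin, xmax, ymax) box representation are assumptions made for this example, and the +1 terms assume the inclusive pixel counting implied by the submission example further down.

    def iou(box_a, box_b):
        """PASCAL-style intersection over union for two bounding boxes,
        each given as (xmin, ymin, xmax, ymax) in pixel coordinates."""
        # Corners of the intersection rectangle.
        ix_min = max(box_a[0], box_b[0])
        iy_min = max(box_a[1], box_b[1])
        ix_max = min(box_a[2], box_b[2])
        iy_max = min(box_a[3], box_b[3])
        # The +1 assumes inclusive pixel counting; size is 0 when disjoint.
        iw = max(0, ix_max - ix_min + 1)
        ih = max(0, iy_max - iy_min + 1)
        intersection = iw * ih
        area_a = (box_a[2] - box_a[0] + 1) * (box_a[3] - box_a[1] + 1)
        area_b = (box_b[2] - box_b[0] + 1) * (box_b[3] - box_b[1] + 1)
        return intersection / float(area_a + area_b - intersection)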

MAP_0.5 is the localised mean average precision (MAP) for each submitted method, using IoU >= 0.5 with the ground truth as the performance measure.

R_0.5 is the localised mean recall for each submitted method, using IoU >= 0.5 with the ground truth as the performance measure.

MAP_0 is the image annotation average for each method, counting a success if the concept is detected in the image at all, without any localisation.

Accuracy per substrate is the segmentation accuracy for a substrate, assessed using the number of correctly labelled pixels of that substrate divided by the number of pixels labelled with that class (in either the ground-truth labelling or the inferred labelling).
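The definition above is, in effect, a per-class pixel-wise intersection over union. A minimal sketch in Python, assuming the ground-truth and predicted masks are equal-sized integer label arrays (the function and variable names are illustrative, not part of the task's tooling):

    import numpy as np

    def substrate_accuracy(gt_mask, pred_mask, class_id):
        """Correctly labelled pixels of `class_id` divided by the pixels
        carrying that label in either the ground truth or the prediction."""
        gt = (gt_mask == class_id)
        pred = (pred_mask == class_id)
        union = np.logical_or(gt, pred).sum()
        if union == 0:
            return float("nan")  # substrate absent from both labellings
        return float(np.logical_and(gt, pred).sum()) / union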

Participant registration

Please refer to the general ImageCLEF registration instructions.

Submission instructions

The submissions will be received through the crowdAI system.
Participants will be permitted to submit up to 10 runs. External training data is allowed and encouraged.
Each system run will consist of a single ASCII plain text file. The results for each test set image should be given on a separate line in the text file. The format of the text file is as follows:
[image_ID/document_ID] [results]

Coral reef image annotation and localisation

The results for each test set image should be given on a separate line, each line providing up to 500 localised substrates. The format uses separator characters as follows: a semicolon ';' between substrates, a colon ':' after the confidence, a comma ',' between multiple bounding boxes, and 'x' and '+' for the size-offset bounding box format, i.e.:

[image_ID];[substrate1] [[confidence1,1]:][width1,1]x[height1,1]+[xmin1,1]+[ymin1,1],[[confidence1,2]:][width1,2]x[height1,2]+[xmin1,2]+[ymin1,2],...;[substrate2] ...

[confidence] is a floating-point value between 0 and 1, where a higher value means a higher score.

For example, in the development set format (notice that there are 2 bounding boxes for substrate c_soft_coral):

  • 2018_0714_112604_057 0 c_hard_coral_branching 1 891 540 1757 1143
  • 2018_0714_112604_057 3 c_soft_coral 1 2724 1368 2825 1507
  • 2018_0714_112604_057 4 c_soft_coral 1 2622 1576 2777 1731

In the submission format, it would be a line as:

  • 2018_0714_112604_057;c_hard_coral_branching 0.6:867x604+891+540;c_soft_coral 0.7:102x140+2724+1368,0.3:156x156+2622+1576
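To make the encoding concrete, the sketch below reconstructs the submission line above from (xmin, ymin, xmax, ymax) boxes. The function name and the detections data structure are hypothetical, and the width/height computation assumes inclusive pixel counting (e.g. 1757 - 891 + 1 = 867), which matches the example:

    def format_run_line(image_id, detections):
        """Encode one test image as a submission line.
        `detections` maps a substrate label to a list of
        (confidence, xmin, ymin, xmax, ymax) tuples (a hypothetical structure)."""
        parts = []
        for substrate, boxes in detections.items():
            encoded = ",".join(
                # [confidence]:[width]x[height]+[xmin]+[ymin]
                f"{conf}:{xmax - xmin + 1}x{ymax - ymin + 1}+{xmin}+{ymin}"
                for conf, xmin, ymin, xmax, ymax in boxes
            )
            parts.append(f"{substrate} {encoded}")
        return image_id + ";" + ";".join(parts)

    # Reproduces the example line above:
    print(format_run_line("2018_0714_112604_057", {
        "c_hard_coral_branching": [(0.6, 891, 540, 1757, 1143)],
        "c_soft_coral": [(0.7, 2724, 1368, 2825, 1507),
                         (0.3, 2622, 1576, 2777, 1731)],
    }))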

Coral reef image pixel-wise parsing task

Similar to subtask 1, the results for each test set image should be given on a separate line, each line providing up to 500 localised substrates, with up to 500 coordinate points expected per localised substrate. The format uses separator characters as follows: a semicolon ';' between substrates, a colon ':' after the confidence, a comma ',' between multiple bounding polygons, and '+' between the consecutive coordinates of each bounding polygon, i.e.:

[image_ID];[substrate1] [[confidence1,1]:][x1,1]+[y1,1]+[x2,1]+[y2,1]+...+[xn,1]+[yn,1],[[confidence1,2]:][x1,2]+[y1,2]+[x2,2]+[y2,2]+...+[xn,2]+[yn,2];[substrate2] ...

[confidence] is a floating-point value between 0 and 1, where a higher value means a higher score, and the consecutive [xi]+[yi] pairs represent the points of the polygon.

For example, in the development set format (notice that there are 2 polygons for substrate c_soft_coral):

  • 2018_0714_112604_057 0 c_hard_coral_branching 1 1757 833 1645 705 1559 598 1442 540 1249 593 1121 679 1020 705 998 844 891 967 966 1122 1137 1143 1324 1122 1468 1074 1655 978
  • 2018_0714_112604_057 3 c_soft_coral 1 2804 1368 2745 1368 2724 1427 2729 1507 2809 1507 2825 1453
  • 2018_0714_112604_057 4 c_soft_coral 1 2697 1576 2638 1592 2638 1608 2622 1667 2654 1694 2713 1731 2777 1731 2777 1635

In the submission format, it would be a line as:

  • 2018_0714_112604_057;c_hard_coral_branching 0.6:1757+833+1645+705+1559+598+1442+540+1249+593+1121+679+1020+705+998+844+891+967+966+1122+1137+1143+1324+1122+1468+1074+1655+978;c_soft_coral 0.7:2804+1368+2745+1368+2724+1427+2729+1507+2809+1507+2825+1453,0.3:2697+1576+2638+1592+2638+1608+2622+1667+2654+1694+2713+1731+2777+1731+2777+1635
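Analogously, a sketch of encoding polygons into this format; the function name and the segments data structure are again hypothetical:

    def format_polygon_line(image_id, segments):
        """Encode one test image for the pixel-wise parsing subtask.
        `segments` maps a substrate label to a list of (confidence, points)
        pairs, where `points` is [(x1, y1), (x2, y2), ...]
        (a hypothetical structure)."""
        parts = []
        for substrate, polygons in segments.items():
            encoded = ",".join(
                # [confidence]:[x1]+[y1]+[x2]+[y2]+...
                f"{conf}:" + "+".join(f"{x}+{y}" for x, y in points)
                for conf, points in polygons
            )
            parts.append(f"{substrate} {encoded}")
        return image_id + ";" + ";".join(parts)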

Results

Subtask #1: Coral reef image annotation and localisation task

Run Group MAP_0.5 MAP_0.0
27417 HHUD 0.2427 0.4877
27416 HHUD 0.2294 0.5010
27419 HHUD 0.2199 0.4421
27418 HHUD 0.2100 0.4547
27349 VIT 0.1400 0.4310
27348 VIT 0.1344 0.4240
27115 VIT 0.0849 0.4240
27350 VIT 0.0483 0.2871
27347 VIT 0.0410 0.2716
27421 HHUD 0.0026 0.2051
27414 HHUD 0.0029 0.2284
27415 HHUD 0.0027 0.2910
27398 HHUD 0.0026 0.2715
27413 HHUD 0.0021 0.2028
27497 ISEC 0.0006 0.0006

Subtask #2: Coral reef image pixel-wise parsing task

Run Group MAP_0.5 MAP_0.0
27500 MTRU 0.0419 0.2398
27343 SOTON 0.0004 0.0484
27324 SOTON 0.0 0.0899
27212 SOTON 0.0 0.0712
27505 HHUD 0.0 0.0

Under construction: per-substrate analysis coming soon.

CEUR Working Notes

  • All participating teams with at least one graded submission, regardless of the score, should submit a CEUR working notes paper.
  • The working notes paper should be submitted using this link:
    https://easychair.org/conferences/?conf=clef2019
    Click on "enter as an author", then select track "ImageCLEF - Multimedia Retrieval in CLEF".
    Add author information, paper title/abstract, keywords, select "Task 3 - ImageCLEFmedical" and upload your working notes paper as pdf.

Citations

When referring to the ImageCLEFcoral 2019 task general goals, general results, etc., please cite the following publication, which will be published by September 2019:

  • Jon Chamberlain, Antonio Campello, Jessica P. Wright, Louis G. Clift, Adrian Clark and Alba García Seco de Herrera. Overview of ImageCLEFcoral 2019 Task, CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org/Vol-2380/.
  • BibTex:
    @Inproceedings{ImageCLEFcoraloverview2019,
    author = {Chamberlain, Jon and Campello, Antonio and Wright, Jessica P. and Clift, Louis G. and Clark, Adrian and Garc\'ia Seco de Herrera, Alba},
    title = {Overview of {ImageCLEFcoral} 2019 Task},
    booktitle = {CLEF2019 Working Notes},
    series = {{CEUR} Workshop Proceedings},
    year = {2019},
    volume = {2380},
    publisher = {CEUR-WS.org $<$http://ceur-ws.org$>$},
    }
When referring to the ImageCLEF 2019 lab general goals, general results, etc., please cite the following publication, which will be published by September 2019 (also referred to as the ImageCLEF general overview):

  • Bogdan Ionescu, Henning Müller, Renaud Péteri, Yashin Dicente Cid, Vitali Liauchuk, Vassili Kovalev, Dzmitri Klimuk, Aleh Tarasau, Asma Ben Abacha, Sadid A. Hasan, Vivek Datla, Joey Liu, Dina Demner-Fushman, Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Minh-Triet Tran, Mathias Lux, Cathal Gurrin, Obioma Pelka, Christoph M. Friedrich, Alba García Seco de Herrera, Narciso Garcia, Ergina Kavallieratou, Carlos Roberto del Blanco, Carlos Cuevas Rodríguez, Nikos Vasillopoulos, Konstantinos Karampidis, Jon Chamberlain, Adrian Clark, Antonio Campello, ImageCLEF 2019: Multimedia Retrieval in Medicine, Lifelogging, Security and Nature In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the 10th International Conference of the CLEF Association (CLEF 2019), Lugano, Switzerland, LNCS Lecture Notes in Computer Science, Springer (September 9-12 2019)
  • BibTex:
    @inproceedings{ImageCLEF19,
      author = {Bogdan Ionescu and Henning M\"uller and Renaud P\'{e}teri and Yashin Dicente Cid and Vitali Liauchuk and Vassili Kovalev and Dzmitri Klimuk and Aleh Tarasau and Asma Ben Abacha and Sadid A. Hasan and Vivek Datla and Joey Liu and Dina Demner-Fushman and Duc-Tien Dang-Nguyen and Luca Piras and Michael Riegler and Minh-Triet Tran and Mathias Lux and Cathal Gurrin and Obioma Pelka and Christoph M. Friedrich and Alba Garc\'ia Seco de Herrera and Narciso Garcia and Ergina Kavallieratou and Carlos Roberto del Blanco and Carlos Cuevas Rodr\'{i}guez and Nikos Vasillopoulos and Konstantinos Karampidis and Jon Chamberlain and Adrian Clark and Antonio Campello},
      title = {{ImageCLEF 2019}: Multimedia Retrieval in Medicine, Lifelogging, Security and Nature},
      booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction},
      series = {Proceedings of the 10th International Conference of the CLEF Association (CLEF 2019)},
      year = {2019},
      volume = {},
      publisher = {{LNCS} Lecture Notes in Computer Science, Springer},
      pages = {},
      month = {September 9-12},
      address = {Lugano, Switzerland}
      }
Contact

    Join our mailing list: https://groups.google.com/d/forum/imageclefcoral
    Follow @imageclef