ImageCLEFcoral

Motivation

Description

The increasing use of structure-from-motion photogrammetry for modelling large-scale environments from action cameras attached to drones has driven the next generation of visualisation techniques that can be used in augmented and virtual reality headsets. It has also created a need to have such models labelled, with objects such as people, buildings, vehicles and terrain identified as areas of interest and labelled appropriately by machine learning techniques. However, the complexity of the images makes it impossible for human annotators to assess their contents on a large scale.
Advances in automatically annotating images for complexity and benthic composition have been promising, and we are interested in automatically identifying areas of interest and labelling them appropriately for monitoring coral reefs. Coral reefs are in danger of being lost within the next 30 years, and with them the ecosystems they support. This catastrophe would not only see the extinction of many marine species, but would also create a humanitarian crisis on a global scale for the billions of people who rely on reef services. By monitoring the changes and composition of coral reefs we can help prioritise conservation efforts.

News

  • 4.02.2019: Development data released at crowdAI
  • 2.10.2017: Website goes live.

Preliminary Schedule

  • 05.11.2018: Registration opens for all ImageCLEF tasks (until 26.04.2019)
  • 04.02.2019: Development data release starts (rescheduled from 31.01.2019)
  • 18.03.2019: Test data release starts
  • 01.05.2019: Deadline for submitting the participants' runs
  • 13.05.2019: Release of the processed results by the task organizers
  • 24.05.2019: Deadline for submission of working notes papers by the participants
  • 07.06.2019: Notification of acceptance of the working notes papers
  • 28.06.2019: Camera-ready working notes papers
  • 09-12.09.2019: CLEF 2019, Lugano, Switzerland

Coral reef image annotation and localisation task

This task requires participants to label images with the types of benthic substrate present, together with a bounding box for each instance. Each image is provided with its possible class types. For each image, participants will produce a set of bounding boxes, predicting the benthic substrate for each bounding box in the image.

Coral reef image pixel-wise parsing task

This task requires the participants to segment and parse each coral reef image into different image regions associated with benthic substrate types. For each image, segmentation algorithms will produce a semantic segmentation mask, predicting the semantic category for each pixel in the image.

Data

The data for this task originates from a growing, large-scale collection of images taken from coral reefs around the world as part of a coral reef monitoring project with the Marine Technology Research Unit at the University of Essex.
Substrates of the same type can have very different morphologies, colour variation and patterns. Some of the images contain a white line (scientific measurement tape) that may occlude part of the entity. The quality of the images is variable: some are blurry, and some have poor colour balance. This is representative of the Marine Technology Research Unit dataset, and all images are useful for data analysis.

The training set contains 240 images with 6702 annotated substrates. Two files are provided with ground-truth annotations: one based on bounding boxes, "imageCLEFcoral2019_annotations_training_task_1", and a more detailed annotation based on bounding polygons, "imageCLEFcoral2019_annotations_training_task_2".

Evaluation methodology

The evaluation will be carried out using the PASCAL-style intersection over union (IoU) metric: the area of intersection between the foreground in the output segmentation and the foreground in the ground-truth segmentation, divided by the area of their union.
The final results will be presented both in terms of average performance over all images of all concepts, and in terms of per-concept performance over all images.
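As a concrete illustration, here is a minimal Python sketch of this IoU computation on binary foreground masks (assuming NumPy boolean arrays; the function name is ours, not part of an official evaluation kit):

    import numpy as np

    def intersection_over_union(pred_mask, gt_mask):
        # PASCAL-style IoU between two binary foreground masks.
        # pred_mask, gt_mask: boolean NumPy arrays of the same shape,
        # True where a pixel belongs to the concept's foreground.
        intersection = np.logical_and(pred_mask, gt_mask).sum()
        union = np.logical_or(pred_mask, gt_mask).sum()
        if union == 0:
            return 0.0  # concept absent from both prediction and ground truth
        return float(intersection) / float(union)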

Participant registration

Please refer to the general ImageCLEF registration instructions.

Submission instructions

Submissions will be received through the crowdAI system.
Participants will be permitted to submit up to 10 runs. External training data is allowed and encouraged.
Each system run will consist of a single ASCII plain-text file. The results for each test set image should be given on separate lines, with the following format:
[subtask_id] [image_ID/document_ID] [results]
where [subtask_id] is one of:

  • 1 for Subtask 1 Coral reef image annotation and localisation
  • 2 for Subtask 2 Coral reef image pixel-wise parsing task

Subtask 1: Coral reef image annotation and localisation

The results for each test set image should be given on a separate line, each line providing up to 100 localised substrates, with up to 100 localisations per substrate. The format uses a colon ':' following the confidence value, a comma ',' to separate multiple bounding boxes, and 'x' and '+' for the size-offset bounding box format, i.e.:

[subtask_id] [image_ID] [substrate1] [[confidence1,1]:][width1,1]x[height1,1]+[xmin1,1]+[ymin1,1],[[confidence1,2]:][width1,2]x[height1,2]+[xmin1,2]+[ymin1,2],... [substrate2] ...

[confidence] values are floating point numbers between 0 and 1, where a higher value means a higher confidence score.

For example, in the development set format (notice that there are 2 bounding boxes for substrate c_soft_coral):

  • 2018_0714_112604_057 0 c_hard_coral_branching 1 891 540 1757 1143
  • 2018_0714_112604_057 3 c_soft_coral 1 2724 1368 2825 1507
  • 2018_0714_112604_057 4 c_soft_coral 1 2622 1576 2777 1731

In the submission format, this becomes the single line:

  • 1 2018_0714_112604_057 c_hard_coral_branching 0.6:867x604+891+540 c_soft_coral 0.7:102x140+2724+1368,0.3:156x156+2622+1576
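As an illustration, the following Python sketch converts the development-set boxes above (xmin ymin xmax ymax, coordinates inclusive) into a submission line; the inclusive width/height convention is inferred from the example, and the helper name is ours:

    def bbox_to_submission(xmin, ymin, xmax, ymax, confidence):
        # Encode an inclusive box as [confidence]:[width]x[height]+[xmin]+[ymin].
        width = xmax - xmin + 1
        height = ymax - ymin + 1
        return "%.1f:%dx%d+%d+%d" % (confidence, width, height, xmin, ymin)

    # Boxes for one test image, grouped per substrate:
    # (xmin, ymin, xmax, ymax, confidence).
    boxes = {
        "c_hard_coral_branching": [(891, 540, 1757, 1143, 0.6)],
        "c_soft_coral": [(2724, 1368, 2825, 1507, 0.7),
                         (2622, 1576, 2777, 1731, 0.3)],
    }
    parts = []
    for substrate, entries in boxes.items():
        encoded = ",".join(bbox_to_submission(*e) for e in entries)
        parts.append(substrate + " " + encoded)
    print("1 2018_0714_112604_057 " + " ".join(parts))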

Subtask 2: Coral reef image pixel-wise parsing task

Similar to subtask 1, the results for each test set image should be given on a separate line, each line providing up to 100 localised substrates, with up to 100 localisations per substrate. The format uses a colon ':' following the confidence value, a comma ',' to separate multiple bounding polygons, and '+' to separate the consecutive vertex coordinates of each polygon, i.e.:

[subtask_id] [image_ID] [substrate1] [[confidence1,1]:][x1,1]+[y1,1]+[x2,1]+[y2,1]+...+[xn,1]+[yn,1],[[confidence1,2]:][x1,2]+[y1,2]+[x2,2]+[y2,2]+...+[xn,2]+[yn,2] [substrate2] ...

[confidence] values are floating point numbers between 0 and 1, where a higher value means a higher confidence score, and the [xi]+[yi] pairs represent consecutive vertices of the polygon.

For example, in the development set format (notice that there are 2 polygons for substrate c_soft_coral):

  • 2018_0714_112604_057 0 c_hard_coral_branching 1 1757 833 1645 705 1559 598 1442 540 1249 593 1121 679 1020 705 998 844 891 967 966 1122 1137 1143 1324 1122 1468 1074 1655 978
  • 2018_0714_112604_057 3 c_soft_coral 1 2804 1368 2745 1368 2724 1427 2729 1507 2809 1507 2825 1453
  • 2018_0714_112604_057 4 c_soft_coral 1 2697 1576 2638 1592 2638 1608 2622 1667 2654 1694 2713 1731 2777 1731 2777 1635

In the submission format, this becomes the single line:

  • 2 2018_0714_112604_057 c_hard_coral_branching 0.6:1757+833+1645+705+1559+598+1442+540+1249+593+1121+679+1020+705+998+844+891+967+966+1122+1137+1143+1324+1122+1468+1074+1655+978 c_soft_coral 0.7:2804+1368+2745+1368+2724+1427+2729+1507+2809+1507+2825+1453,0.3:2697+1576+2638+1592+2638+1608+2622+1667+2654+1694+2713+1731+2777+1731+2777+1635
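Analogously, a short illustrative Python sketch that encodes a polygon, given as a list of (x, y) vertices, into a submission line (again, the helper name is ours):

    def polygon_to_submission(points, confidence):
        # Encode vertices as [confidence]:[x1]+[y1]+[x2]+[y2]+...
        coords = "+".join("%d+%d" % (x, y) for (x, y) in points)
        return "%.1f:%s" % (confidence, coords)

    # Second c_soft_coral polygon from the development-set example above.
    polygon = [(2804, 1368), (2745, 1368), (2724, 1427),
               (2729, 1507), (2809, 1507), (2825, 1453)]
    print("2 2018_0714_112604_057 c_soft_coral "
          + polygon_to_submission(polygon, 0.7))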

Contact

Join our mailing list: https://groups.google.com/d/forum/imageclefcoral
Follow @imageclef

Results

...will follow on 13.05.2019

Citations

...will follow