The increasing use of structure-from-motion photogrammetry for modelling large-scale environments from action cameras attached to drones has driven the next generation of visualisation techniques for augmented and virtual reality headsets. It has also created a need for such models to be labelled, with objects such as people, buildings, vehicles, and terrain identified as areas of interest and labelled appropriately by machine learning techniques. However, the complexity of the images makes it impossible for human annotators to assess their contents on a large scale.
Advances in automatically annotating images for complexity and benthic composition have been promising, and we are interested in automatically identifying and labelling areas of interest for monitoring coral reefs. Coral reefs are in danger of being lost within the next 30 years, and with them the ecosystems they support. This catastrophe will not only see the extinction of many marine species, but also create a humanitarian crisis on a global scale for the billions of humans who rely on reef services. By monitoring the changes and composition of coral reefs we can help prioritise conservation efforts.


  • 2.10.2017: Website goes live.

Preliminary Schedule

  • 05.11.2018: Registration opens for all ImageCLEF tasks (until 26.04.2019)
  • 15.01.2019: Development data release starts
  • 18.03.2019: Test data release starts
  • 01.05.2019: Deadline for submitting the participants' runs
  • 13.05.2019: Release of the processed results by the task organizers
  • 24.05.2019: Deadline for submission of working notes papers by the participants
  • 07.06.2019: Notification of acceptance of the working notes papers
  • 28.06.2019: Camera-ready working notes papers
  • 09-12.09.2019: CLEF 2019, Lugano, Switzerland

Coral reef image annotation and localisation task

This task requires participants to label the images with the types of benthic substrate present, together with a bounding box for each occurrence. Each image is provided with its possible class types. For each image, participants will produce a set of bounding boxes, predicting the benthic substrate type for each bounding box in the image.
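The evaluation methodology is still to follow (see below), but localisation tasks of this kind are commonly scored by comparing predicted and ground-truth boxes with intersection over union (IoU). A minimal sketch, assuming axis-aligned boxes given as (x_min, y_min, x_max, y_max); the function name and box format are illustrative, not the official submission format:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    # Corners of the intersection rectangle.
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix_max - ix_min) * max(0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction is typically counted as correct when its IoU with a ground-truth box of the same substrate class exceeds some threshold (0.5 is a common choice in detection benchmarks).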

Coral reef image pixel-wise parsing task

This task requires the participants to segment and parse each coral reef image into different image regions associated with benthic substrate types. For each image, segmentation algorithms will produce a semantic segmentation mask, predicting the semantic category for each pixel in the image.
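Pending the official evaluation details, pixel-wise parsing tasks are usually scored with per-class intersection over union computed over the predicted and ground-truth label masks. A minimal sketch, assuming the masks are flattened into equal-length sequences of class ids; the function name is illustrative:

```python
def per_class_iou(pred, truth, classes):
    """Per-class IoU over flattened label masks (equal-length sequences of class ids)."""
    scores = {}
    for c in classes:
        # Pixels labelled c in both masks vs. pixels labelled c in either mask.
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        scores[c] = inter / union if union else 0.0
    return scores
```

Averaging these scores over all substrate classes gives the mean IoU commonly reported for semantic segmentation.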


Evaluation methodology

...will follow

Participant registration

Please refer to the general ImageCLEF registration instructions.

Submission instructions

...will follow



...will follow on 13.05.2019


...will follow