

Welcome to the 2nd edition of the Coral Task!



The increasing use of structure-from-motion photogrammetry for modelling large-scale environments from action cameras attached to drones has driven the next generation of visualisation techniques for augmented and virtual reality headsets. It has also created a need for such models to be labelled, with objects such as people, buildings, vehicles, and terrain all essential for machine learning techniques to automatically identify as areas of interest and label appropriately. However, the complexity of the images makes it impossible for human annotators to assess their contents on a large scale.
Advances in automatically annotating images for complexity and benthic composition have been promising, and we are interested in automatically identifying areas of interest and labelling them appropriately for monitoring coral reefs. Coral reefs are in danger of being lost within the next 30 years, and with them the ecosystems they support. This catastrophe would not only see the extinction of many marine species, but also create a humanitarian crisis on a global scale for the billions of people who rely on reef services. By monitoring changes in the composition of coral reefs, we can help prioritise conservation efforts.


Preliminary Schedule

More information will be added soon!


Evaluation methodology

More information will be added soon!

Participant registration

Please refer to the general ImageCLEF registration instructions.

Submission instructions

Submissions will be received through the crowdAI platform.

Participants will be permitted to submit up to 10 runs. External training data is allowed and encouraged.
Each system run must consist of a single ASCII plain-text file, with the result for each test-set item given on a separate line.
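The exact line format of a run file is not specified above, so the sketch below is only an illustration: it assumes each line holds an image identifier followed by its predicted labels, separated by spaces. The image ids and label names are made up for demonstration.

```python
def write_run_file(predictions, path):
    """Write one line per test image: '<image_id> <label1> <label2> ...'.

    The field layout here is an assumption for illustration; consult the
    official submission instructions for the required format.
    """
    with open(path, "w", encoding="ascii") as f:
        for image_id, labels in predictions.items():
            f.write(image_id + " " + " ".join(labels) + "\n")

# Hypothetical example predictions (ids and labels are placeholders):
preds = {
    "img_0001": ["hard_coral", "soft_coral"],
    "img_0002": ["sponge"],
}
write_run_file(preds, "run_01.txt")
```

Writing the file with an explicit ASCII encoding catches any non-ASCII characters early, since the instructions require a plain ASCII text file.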

CEUR Working Notes


More information will be added soon!


Join our mailing list:
Follow @imageclef