
ImageCLEFcoral

Welcome to the 4th edition of the Coral Task!

Description

Motivation

The increasing use of structure-from-motion photogrammetry for modelling large-scale environments from action cameras attached to drones has driven the next generation of visualisation techniques that can be used in augmented and virtual reality headsets. It has also created a need for such models to be labelled, with objects such as people, buildings, vehicles and terrain all essential for machine learning techniques to identify automatically as areas of interest and to label appropriately. However, the complexity of the images makes it impossible for human annotators to assess the contents of images on a large scale.
Advances in automatically annotating images for complexity and benthic composition have been promising, and we are interested in automatically identifying areas of interest and labelling them appropriately for monitoring coral reefs. Coral reefs are in danger of being lost within the next 30 years, and with them the ecosystems they support. This catastrophe will not only see the extinction of many marine species, but also create a humanitarian crisis on a global scale for the billions of humans who rely on reef services. By monitoring the changes and composition of coral reefs we can help prioritise conservation efforts.

New for 2022:

Previous editions of ImageCLEFcoral in 2019 and 2020 have shown improvements in task performance and promising results on cross-learning between images from geographical regions. The 3rd edition in 2021 increased the complexity of the task and size of data available to participants through supplemental data, resulting in lower performance than previous years. The 4th edition plans to address these issues by targeting algorithms for geographical regions and raising the benchmark performance. As with the 3rd edition, the training and test data will form the complete set of images required to form 3D reconstructions of the marine environment. This will allow the participants to explore novel probabilistic computer vision techniques based around image overlap and transposition of data points.

3D models for ImageCLEFcoral 2022

The images for each model were collected using a 5-camera array moving over the terrain. The images typically overlap each other by 60% and are likely to contain some of the same features of the landscape taken from many different angles. The images were aligned using Agisoft Metashape and processed into a 3D textured model using "medium" processing settings. The models are available to participants on request (as .obj files).
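
As a minimal illustration (not part of the task materials), one of these .obj models could be loaded for inspection with the third-party trimesh package; the package choice and the file name below are assumptions.

    import trimesh  # third-party package, assumed here for illustration

    # "reef_model.obj" is a placeholder name for one of the distributed models.
    mesh = trimesh.load("reef_model.obj", force="mesh")
    print("vertices:", len(mesh.vertices))
    print("faces:", len(mesh.faces))
    print("bounding box extents:", mesh.bounding_box.extents)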

In addition, participants are encouraged to use the publicly available NOAA NCEI data and/or CoralNet to train their approaches. The CNETcategories_ImageCLEF_v1.xlsx file shows how to map NOAA categories to ImageCLEFcoral categories for training. NB: the NOAA data is typically sparse pixel annotation over a large set of images, i.e., only 10 pixels per image are classified.
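
As a hedged sketch, the mapping spreadsheet could be turned into a lookup table along the following lines; the column names are assumptions and should be checked against the actual headers in CNETcategories_ImageCLEF_v1.xlsx.

    import pandas as pd

    # Read the NOAA/CoralNet -> ImageCLEFcoral mapping spreadsheet.
    mapping = pd.read_excel("CNETcategories_ImageCLEF_v1.xlsx")

    # Column names below are assumed for illustration; adjust to the real headers.
    cnet_to_clef = dict(zip(mapping["CoralNet_category"],
                            mapping["ImageCLEF_category"]))

    def relabel(cnet_label):
        """Return the ImageCLEFcoral substrate for a CoralNet point label,
        or None if the category has no ImageCLEF equivalent."""
        return cnet_to_clef.get(cnet_label)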

News

  • 07.02.2022: development data released

Preliminary Schedule

  • 15.11.2021: registration opens for all ImageCLEF tasks
  • 07.02.2022: development data released (postponed from 17.01.2022)
  • 14.03.2022: test data release starts
  • 17.05.2022: deadline for submitting the participants' runs (extended from 06.05.2022)
  • 13.05.2022: release of the processed results by the task organizers
  • 27.05.2022: deadline for submission of working notes papers by the participants
  • 13.06.2022: notification of acceptance of the working notes papers
  • 01.07.2022: camera ready working notes papers
  • 05-08.09.2022: CLEF 2022, Bologna, Italy

Task Description

ImageCLEFcoral 2022 consists of two subtasks:

Coral reef image annotation and localisation subtask

This subtask requires the participants to label the images with types of benthic substrate together with their bounding box in the image. Each image is provided with possible class types. For each image, participants will produce a set of bounding boxes, predicting the benthic substrate for each bounding box in the image.

Coral reef image pixel-wise parsing subtask

This subtask requires the participants to segment and parse each coral reef image into different image regions associated with benthic substrate types. For each image, segmentation algorithms will produce a semantic segmentation mask, predicting the semantic category for each pixel in the image.

Data

The data for this task originates from a growing, large-scale collection of images taken from coral reefs around the world as part of a coral reef monitoring project with the Marine Technology Research Unit at the University of Essex. The images partially overlap with each other and can be used to create 3D photogrammetric models of the marine environment.

Substrates of the same type can have very different morphologies, colour variation and patterns. Some of the images contain a white line (scientific measurement tape) that may occlude part of the entity. The quality of the images is variable: some are blurry, and some have poor colour balance due to the cameras being used. This is representative of the Marine Technology Research Unit dataset and all images are useful for data analysis. The training set used for 2022 has undergone a significant review in order to rectify errors in classification and polygon shape. Additionally, the 13 substrate types have been refined to help participants understand the results of their analyses.

Evaluation methodology

The evaluation will be carried out using the PASCAL-style metric of intersection over union (IoU): the area of intersection between the foreground in the output segmentation and the foreground in the ground-truth segmentation, divided by the area of their union.
The final results will be presented both in terms of average performance over all images of all concepts, and also per concept performance over all images.
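
A minimal sketch of this measure, assuming the predicted and ground-truth foregrounds are boolean numpy masks of identical shape:

    import numpy as np

    def iou(pred_mask, gt_mask):
        """PASCAL-style intersection over union of two boolean foreground masks."""
        intersection = np.logical_and(pred_mask, gt_mask).sum()
        union = np.logical_or(pred_mask, gt_mask).sum()
        return float(intersection) / float(union) if union > 0 else 0.0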

MAP_0.5: the localised mean average precision (MAP) for each submitted method, using a performance threshold of IoU >= 0.5 against the ground truth.

MAP_0: the image annotation mean average precision for each method, where a prediction counts as a success if the concept is simply detected in the image, without any localisation.
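
The two criteria can be sketched as follows, reusing iou() from above and assuming the ground truth for an image is available as a dictionary mapping each concept to its regions (an illustrative structure, not the official evaluation code):

    def counts_at_iou(pred_mask, concept, gt_regions, threshold=0.5):
        """MAP_0.5 criterion: the prediction must overlap a same-concept
        ground-truth region with IoU >= 0.5."""
        return any(iou(pred_mask, gt) >= threshold
                   for gt in gt_regions.get(concept, []))

    def counts_without_localisation(concept, gt_regions):
        """MAP_0 criterion: the concept only has to be present in the image."""
        return concept in gt_regions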


Accuracy per substrate: the segmentation accuracy for a substrate is assessed as the number of correctly labelled pixels of that substrate, divided by the number of pixels labelled with that class (in either the ground-truth labelling or the inferred labelling).
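
A minimal sketch of this per-substrate measure, assuming the predicted and ground-truth segmentations are integer label maps of class ids:

    import numpy as np

    def substrate_accuracy(pred_labels, gt_labels, class_id):
        """Correctly labelled pixels of a substrate divided by all pixels
        carrying that label in either the ground truth or the prediction."""
        correct = np.logical_and(pred_labels == class_id,
                                 gt_labels == class_id).sum()
        labelled = np.logical_or(pred_labels == class_id,
                                 gt_labels == class_id).sum()
        return float(correct) / float(labelled) if labelled > 0 else 0.0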

Participant registration

Please refer to the general ImageCLEF registration instructions.

Submission instructions

The submissions will be received through the crowdAI system.
Participants will be permitted to submit up to 10 runs. External training data is allowed and encouraged.
Each system run will consist of a single ASCII plain text file. The results for each test set image should be given on separate lines in the text file. The format of the text file is as follows:
[image_ID/document_ID] [results]

Coral reef image annotation and localisation

The results of each test set image should be given in separate lines, each line providing only up to 500 localised substrates. The format has characters to separate the elements, semicolon ‘;’ for the substrates, colon ':' for the confidence, comma ',' to separate multiple bounding boxes, and 'x' and '+' for the size-offset bounding box format, i.e.:

[image_ID];[substrate1] [[confidence1,1]:][width1,1]x[height1,1]+[xmin1,1]+[ymin1,1],[[confidence1,2]:][width1,2]x[height1,2]+[xmin1,2]+[ymin1,2],...;[substrate2] ...

[confidence] values are floating-point numbers in the range 0-1, where a higher value means a higher score.

For example, in the development set format (notice that there are 2 bounding boxes for substrate c_soft_coral):

  • 2018_0714_112604_057 0 c_hard_coral_branching 1 891 540 1757 1143
  • 2018_0714_112604_057 3 c_soft_coral 1 2724 1368 2825 1507
  • 2018_0714_112604_057 4 c_soft_coral 1 2622 1576 2777 1731

In the submission format, it would be a line as:

  • 2018_0714_112604_057;c_hard_coral_branching 0.6:867x604+891+540;c_soft_coral 0.7:102x140+2724+1368,0.3:156x156+2622+1576
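
The following sketch assembles such a line from per-image detections; the detection data structure is illustrative, not prescribed by the task.

    def format_bbox_line(image_id, detections):
        """detections: {substrate: [(confidence, width, height, xmin, ymin), ...]}"""
        parts = []
        for substrate, boxes in detections.items():
            box_strs = ",".join(f"{conf}:{w}x{h}+{xmin}+{ymin}"
                                for conf, w, h, xmin, ymin in boxes)
            parts.append(f"{substrate} {box_strs}")
        return image_id + ";" + ";".join(parts)

    # Reproduces the example line above (values taken from the development set).
    print(format_bbox_line("2018_0714_112604_057", {
        "c_hard_coral_branching": [(0.6, 867, 604, 891, 540)],
        "c_soft_coral": [(0.7, 102, 140, 2724, 1368),
                         (0.3, 156, 156, 2622, 1576)],
    }))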

Coral reef image pixel-wise parsing subtask

Similar to subtask 1, the results of each test set image should be given in separate lines, each line providing only up to 500 localised substrates, with up to 500 coordinate localisations of the same substrate expected. The format has characters to separate the elements, semicolon ';' for the substrates, colon ':' for the confidence, comma ',' to separate multiple bounding polygons, and 'x' and '+' for the size-offset bounding polygon format, i.e.:

[image_ID];[substrate1] [[confidence1,1]:][x1,1]+[y1,1]+[x2,1]+[y2,1]+...+[xn,1]+[yn,1],[[confidence1,2]:][x1,2]+[y1,2]+[x2,2]+[y2,2]+...+[xn,2]+[yn,2];[substrate2] ...

[confidence] values are floating-point numbers in the range 0-1, where a higher value means a higher score, and each [xi]+[yi] pair represents a consecutive polygon vertex.

For example, in the development set format (notice that there are 2 polygons for substrate c_soft_coral):

  • 2018_0714_112604_057 0 c_hard_coral_branching 1 1757 833 1645 705 1559 598 1442 540 1249 593 1121 679 1020 705 998 844 891 967 966 1122 1137 1143 1324 1122 1468 1074 1655 978
  • 2018_0714_112604_057 3 c_soft_coral 1 2804 1368 2745 1368 2724 1427 2729 1507 2809 1507 2825 1453
  • 2018_0714_112604_057 4 c_soft_coral 1 2697 1576 2638 1592 2638 1608 2622 1667 2654 1694 2713 1731 2777 1731 2777 1635

In the submission format, it would be a line as:

  • 2018_0714_112604_057;c_hard_coral_branching 0.6:1757+833+1645+705+1559+598+1442+540+1249+593+1121+679+1020+705+998+844+891+967+966+1122+1137+1143+1324+1122+1468+1074+1655+978;c_soft_coral 0.7:2804+1368+2745+1368+2724+1427+2729+1507+2809+1507+2825+1453,0.3:2697+1576+2638+1592+2638+1608+2622+1667+2654+1694+2713+1731+2777+1731+2777+1635
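
A corresponding sketch for the pixel-wise parsing format, again with an illustrative (not prescribed) per-image data structure:

    def format_polygon_line(image_id, regions):
        """regions: {substrate: [(confidence, [(x1, y1), (x2, y2), ...]), ...]}"""
        parts = []
        for substrate, polygons in regions.items():
            poly_strs = ",".join(
                f"{conf}:" + "+".join(f"{x}+{y}" for x, y in points)
                for conf, points in polygons)
            parts.append(f"{substrate} {poly_strs}")
        return image_id + ";" + ";".join(parts)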


Results

Subtask #1: Coral reef image annotation and localisation task

Run     Group  MAP_0.5  MAP_0.0
183919  HHU    0.396    0.752
183914  HHU    0.371    0.726
183920  HHU    0.366    0.686
183911  HHU    0.365    0.721
183922  HHU    0.336    0.697
183912  HHU    0.318    0.646
183916  HHU    0.305    0.654
183913  HHU    0.297    0.661
183918  HHU    0.291    0.661
185373  UTK    0.003    0.327
184144  UTK    0.001    0.305

CEUR Working Notes

Citations

When referring to the ImageCLEF 2022 coral task general goals, general results, etc. please cite the following publication which will be published in September 2022:

BibTex:
@inproceedings{ImageCLEFcoral2022,
author = {Chamberlain, Jon and Garc\'ia Seco de Herrera, Alba and Campello, Antonio and Clark, Adrian},
title = {{ImageCLEFcoral} task: Coral reef image annotation and localisation},
booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction},
series = {Proceedings of the 13th International Conference of the CLEF Association (CLEF 2022)},
year = 2022,
volume = {},
publisher = {{LNCS} Lecture Notes in Computer Science, Springer},
pages = {},
month = {September 5-8},
address = {Bologna, Italy}
}

When referring to the ImageCLEF 2022 lab general goals, general results, etc. please cite the following publication (also referred to as ImageCLEF general overview) which will be published in September 2022:

@inproceedings{ImageCLEF2022,
author = {Bogdan Ionescu and Henning M\"uller and Renaud P\'{e}teri
and Johannes R\"uckert and Asma {Ben Abacha} and Alba Garc\'{\i}a Seco
de Herrera and Christoph M. Friedrich and Louise Bloch and Raphael
Br\"ungel and Ahmad Idrissi-Yaghir and Henning Sch\"afer and Serge
Kozlovski and Yashin Dicente Cid and Vassili Kovalev and Liviu-Daniel
\c{S}tefan and Mihai Gabriel Constantin and Mihai Dogariu and Adrian
Popescu and J\'er\^ome Deshayes-Chossart and Hugo Schindler and Jon
Chamberlain and Antonio Campello and Adrian Clark},
title = {{Overview of the ImageCLEF 2022}: Multimedia Retrieval in
Medical, Social Media and Nature Applications},
booktitle = {Experimental IR Meets Multilinguality, Multimodality, and
Interaction},
series = {Proceedings of the 13th International Conference of the CLEF
Association (CLEF 2022)},
year = 2022,
volume = {},
publisher = {{LNCS} Lecture Notes in Computer Science, Springer},
pages = {},
month = {September 5-8},
address = {Bologna, Italy}
}

Contact

Join our mailing list: https://groups.google.com/d/forum/imageclefcoral
Follow @imageclef