PlantCLEF 2020

Motivation

For several centuries, botanists have collected, catalogued and systematically stored plant specimens in herbaria. These physical specimens are used to study the variability of species, their phylogenetic relationships, their evolution, and phenological trends. One of the key steps in the workflow of botanists and taxonomists is to find the herbarium sheets that correspond to a new specimen observed in the field. This task requires a high level of expertise and can be very tedious, so developing automated tools to facilitate this work is of crucial importance. More generally, such tools will help convert these invaluable centuries-old materials into FAIR data.

Data collection

The task will rely on a collection of more than 60,000 herbarium sheets that were collected in French Guiana (from the Herbier IRD de Guyane) and digitized in the context of the e-ReColNat project. iDigBio (the US National Resource for Advancing Digitization of Biodiversity Collections) hosts millions of images of herbarium specimens; several tens of thousands of these images, illustrating the flora of French Guiana, will be used for this year's PlantCLEF task. A valuable asset of this collection is that several herbarium sheets are accompanied by a few pictures of the same specimen in the field. For the test set, we will use in-the-field pictures coming from different sources, including Pl@ntNet and the Encyclopedia of Life.

Task description

The challenge will be evaluated as a cross-domain classification task: the training set will consist of herbarium sheets, whereas the test set will be composed of field pictures. To enable learning a mapping between the herbarium-sheet domain and the field-picture domain, we will provide both herbarium sheets and field pictures for a subset of species. The metrics used for the evaluation of the task will be the classification accuracy and the Mean Reciprocal Rank (MRR; see the sketch below).
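
For reference, the MRR averages, over all test images, the reciprocal of the rank at which the true species appears in a run's ranked prediction list (contributing 0 when it does not appear). A minimal Python sketch, with illustrative names only (this is not the official evaluation code):

```python
def mean_reciprocal_rank(predictions, labels):
    """Mean Reciprocal Rank over a set of test images.

    predictions: one ranked list of candidate species ids per test image,
                 best guess first
    labels: the ground-truth species id of each test image
    """
    total = 0.0
    for ranked, truth in zip(predictions, labels):
        if truth in ranked:
            total += 1.0 / (ranked.index(truth) + 1)  # ranks are 1-based
        # images whose true species is never predicted contribute 0
    return total / len(labels)

# True species ranked 1st for the first image and 3rd for the second:
# MRR = (1/1 + 1/3) / 2 ≈ 0.667
print(mean_reciprocal_rank([[7, 2, 5], [4, 9, 7]], [7, 7]))
```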

How to participate?

See the registration instructions here. Direct link to the challenge on AIcrowd: PlantCLEF 2020

Reward

The winner of each of the four LifeCLEF 2020 challenges will be offered a cloud credit grant of 5,000 USD as part of Microsoft's AI for Earth program.

Results

The overview paper presenting the results of the challenge is available here (CEUR-WS proceedings).

A total of 7 participating groups submitted 49 runs. Thanks to all of you for your efforts!


[Graph: MRR of the submitted runs over the whole test set]


Team | Run | AIcrowd name | Filename | MRR (whole test set) | MRR (species with few field training photos)
ITCR PlantNet | Run 10 | aabab | output_ensemble_4_5_6_7 | 0.180 | 0.052
ITCR PlantNet | Run 9 | aabab | output_ensemble_5_6 | 0.170 | 0.039
ITCR PlantNet | Run 8 | aabab | output_ensemble_6_7 | 0.167 | 0.060
ITCR PlantNet | Run 6 | aabab | output_fsda_extra_genus_family_augmented | 0.161 | 0.037
ITCR PlantNet | Run 4 | aabab | output_fsda_extra_ss_augmented | 0.148 | 0.039
ITCR PlantNet | Run 7 | aabab | output_fsda_extra_augmented | 0.143 | 0.036
ITCR PlantNet | Run 5 | aabab | output_fsda_extra_genus_family_ss_augmented | 0.134 | 0.062
Neuon AI | Run 7 | holmes_chang | Run7_all_precrop_mean_emb_cosine_inverse_flip_merged_slow | 0.121 | 0.107
ITCR PlantNet | Run 2 | aabab | output_r50_extra_finetuned_augmented | 0.112 | 0.013
Neuon AI | Run 5 | holmes_chang | Run5_all_precrop_mean_emb_cosine_inverse_flip_crop_merged | 0.111 | 0.108
Neuon AI | Run 3 | holmes_chang | Run3_all_precrop_mean_emb_cosine_inverse_flip_crop | 0.103 | 0.094
Neuon AI | Run 2 | holmes_chang | Run2_all_precrop_mean_emb_cosine_inverse_flip_crop | 0.099 | 0.076
Neuon AI | Run 6 | holmes_chang | Run6_all_precrop_mean_emb_cosine_inverse_flip_slow | 0.093 | 0.066
Neuon AI | Run 4 | holmes_chang | holmes_run1 | 0.088 | 0.073
Neuon AI | Run 1 | holmes_chang | Run1_freeze_mean_emb_nocrop | 0.081 | 0.061
ITCR PlantNet | Run 3 | aabab | output_fsda_finetuned_augmented | 0.054 | 0.039
UWB | Run 2 | picekl | 9_sub_CLEF_subCLEF_with-photos-mean (1) | 0.039 | 0.007
UWB | Run 3 | picekl | 9_sub_CLEF_subCLEF_no-photos-mean | 0.039 | 0.007
LU | Run 8 | heaven | Sub_8 | 0.032 | 0.016
LU | Run 10 | heaven | Final_Submission | 0.032 | 0.016
LU | Run 9 | heaven | Sub_9 | 0.032 | 0.016
Domain | Run 2 | Domain_run | Submission_2 | 0.031 | 0.015
Domain | Run 6 | Domain_run | Submission_6 | 0.029 | 0.015
Domain | Run 4 | Domain_run | Submission_4 | 0.028 | 0.015
To Be | Run 10 | To_be | SUB_e_final | 0.028 | 0.016
To Be | Run 9 | To_be | SUB_e9 | 0.028 | 0.014
Domain | Run 1 | Domain_run | Submission_1 | 0.028 | 0.007
LU | Run 5 | heaven | Sub_5 | 0.027 | 0.008
Domain | Run 5 | Domain_run | Submission_5 | 0.026 | 0.014
LU | Run 7 | heaven | Sub_7 | 0.025 | 0.007
LU | Run 6 | heaven | Sub_6 | 0.025 | 0.008
Domain | Run 3 | Domain_run | Submission_3 | 0.024 | 0.015
UWB | Run 1 | picekl | 9_sub_CLEF_subCLEF_with-photos-mean | 0.024 | 0.011
To Be | Run 7 | To_be | SUB_e7 | 0.019 | 0.007
Domain | Run 7 | Domain_run | Submission_7 | 0.019 | 0.012
To Be | Run 2 | To_be | SUB_e | 0.016 | 0.007
To Be | Run 8 | To_be | SUB_e8 | 0.015 | 0.005
To Be | Run 6 | To_be | SUB_e6 | 0.014 | 0.009
LU | Run 2 | heaven | Sub_2 | 0.011 | 0.004
LU | Run 3 | heaven | Sub_3 | 0.011 | 0.004
To Be | Run 5 | To_be | SUB_e5 | 0.011 | 0.009
LU | Run 4 | heaven | Sub_4 | 0.009 | 0.007
LU | Run 1 | heaven | SUB | 0.009 | 0.006
SSN | Run 2 | KuroLabs | ResNetMax | 0.008 | 0.003
SSN | Run 1 | KuroLabs | ResNetAvg | 0.008 | 0.003
To Be | Run 1 | To_be | SUB | 0.006 | 0.005
To Be | Run 3 | To_be | SUB_e2 | 0.006 | 0.005
To Be | Run 4 | To_be | SUB_e4 | 0.006 | 0.005
ITCR PlantNet | Run 1 | aabab | output_r50_finetuned_augmented | 0.002 | 0.002

This second graph focuses on the most difficult subset of the test set (species with few field training photos) and reorders the submissions according to the second metric.
[Graph: submissions reordered by MRR on the difficult subset]

Credits

Pl@ntNet, iDigBio