PlantCLEF 2019

Registration and data access

  • Each participant has to register on https://www.crowdai.org with a username, email and password. A representative team name should be used
    as the username.
  • In order to be compliant with the CLEF requirements, participants also have to fill in the following additional fields on their profile:
    • First name
    • Last name
    • Affiliation
    • Address
    • City
    • Country
  • Once set up, participants will have access to the CrowdAI PlantCLEF 2019 challenge page.

    Usage scenario

    Automated identification of plants has improved considerably in the last few years. In the scope of LifeCLEF 2017 and 2018 in particular, we measured impressive identification performance over 10K species. However, these 10K species, mostly living in Europe and North America, only represent the tip of the iceberg: the vast majority of the world's species (~369K) live in data-deficient countries, and the performance of state-of-the-art machine learning algorithms on these species is unknown and presumably much lower. The main novelty of the 2019 edition of PlantCLEF is therefore to extend the challenge to the flora of such data-deficient regions, and its main focus is to evaluate automated identification on that flora.

    Challenge description

    The goal of the task is to return the most likely species for each observation of the test set (an observation being a set of images of the same individual plant, together with associated metadata such as date, GPS coordinates and author). A small part of the observations in the test set will be re-annotated by several experts so as to allow comparing the performance of the evaluated systems with that of highly skilled experts.
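    Since an observation groups several images of the same individual plant, per-image predictions have to be combined into a single ranked species list per observation. The sketch below shows one simple (hypothetical) baseline for this step, averaging per-image class scores; the names `image_scores` and `observation_of` are illustrative and not part of the task definition.

    ```python
    # Hypothetical baseline: average the per-image species scores of all
    # images belonging to the same observation, then rank species by the
    # averaged score (best first).
    from collections import defaultdict

    def rank_species_per_observation(image_scores, observation_of):
        """image_scores: dict image_id -> {species: score}
        observation_of: dict image_id -> observation_id
        Returns: dict observation_id -> list of species, best first."""
        sums = defaultdict(lambda: defaultdict(float))
        counts = defaultdict(int)
        for img, scores in image_scores.items():
            obs = observation_of[img]
            counts[obs] += 1
            for species, score in scores.items():
                sums[obs][species] += score
        ranked = {}
        for obs, species_scores in sums.items():
            avg = {sp: s / counts[obs] for sp, s in species_scores.items()}
            ranked[obs] = sorted(avg, key=avg.get, reverse=True)
        return ranked

    # Two images of the same plant: species "a" wins on average
    scores = {"img1": {"a": 0.9, "b": 0.1}, "img2": {"a": 0.2, "b": 0.8}}
    obs_map = {"img1": "obs1", "img2": "obs1"}
    print(rank_species_per_observation(scores, obs_map))  # {'obs1': ['a', 'b']}
    ```

    Other aggregation rules (maximum score, rank fusion) are equally possible; averaging is shown only because it is the simplest to state.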

    Data Collection

    We provide a new dataset of 10K species mainly focused on the Guiana Shield and the Amazon rainforest (known to host the largest diversity of plant and animal species in the world). The average number of images per species in this new dataset is much lower than in the datasets used in previous editions of PlantCLEF (about 10 vs. 100). Many species are represented by only a few images, and some by a single image.

    Metric

    The metric used is the Mean Reciprocal Rank (MRR). The MRR is a statistical measure for evaluating any process that produces a list of possible responses, ordered by probability of correctness, to a sample of queries. The reciprocal rank of a query response is the multiplicative inverse of the rank of the first correct answer. The MRR is the average of the reciprocal ranks over the whole test set:

    \mathrm{MRR} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{\mathrm{rank}_i}

    where |Q| is the total number of query occurrences in the test set.
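    The metric above can be sketched in a few lines of Python; here `rank_i` is the 1-based position of the true species in a system's ranked list for query i, and a query whose true species is absent from the list contributes 0 to the sum (a common convention, assumed here rather than stated in the task description).

    ```python
    # Minimal sketch of the Mean Reciprocal Rank (MRR) computation.
    def mean_reciprocal_rank(ranked_predictions, ground_truth):
        """ranked_predictions: one ranked species list per test observation.
        ground_truth: the true species, aligned with the predictions."""
        total = 0.0
        for preds, truth in zip(ranked_predictions, ground_truth):
            if truth in preds:
                total += 1.0 / (preds.index(truth) + 1)  # reciprocal rank
            # a miss contributes 0 to the sum
        return total / len(ground_truth)  # average over the |Q| queries

    # Truth found at rank 1, rank 2, and not found:
    preds = [["a", "b"], ["c", "a"], ["b", "c"]]
    truth = ["a", "a", "a"]
    print(mean_reciprocal_rank(preds, truth))  # (1 + 0.5 + 0) / 3 = 0.5
    ```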

    Attachment: fakerun.txt (476 bytes)