ImageCLEF 2019


ImageCLEF 2019 is an evaluation campaign organized as part of the CLEF initiative labs. The campaign offers several research tasks that welcome participation from teams around the world. The results of the campaign appear in the working notes proceedings, published by CEUR Workshop Proceedings. Selected contributions among the participants will be invited for publication in the following year in the Springer Lecture Notes in Computer Science (LNCS), together with the annual lab overviews.

For the 2019 edition, ImageCLEF organises four main tasks with the global objective of promoting the evaluation of technologies for annotation, indexing and retrieval of visual data, with the aim of providing information access to large collections of images in various usage scenarios and domains: lifelogging, medicine, nature, and security.

Target communities include (but are not limited to): information retrieval (text, vision, audio, multimedia, social media, sensor data, etc.), machine learning, deep learning, data mining, natural language processing, and image and video processing, with special attention to the challenges of multi-modality, multi-linguality, and interactive search.

Stay tuned for the latest information and updates by following the ImageCLEF social media accounts: Twitter (@imageclef) and Facebook.

ImageCLEF schedule

Each of the tasks sets its own schedule, so please check the corresponding task webpage for specific dates. A (tentative) global schedule can be found below:

  • 12.11.2018: registration opens for all ImageCLEF tasks (until 26.04.2019)
  • 12.11.2018: development data release starts (depends on the task)
  • 18.03.2019: test data release starts (depends on the task)
  • 01.05.2019: deadline for submitting the participants' runs (depends on the task)
  • 13.05.2019: release of the processed results by the task organizers (depends on the task)
  • 24.05.2019: deadline for submission of working notes papers by the participants
  • 07.06.2019: notification of acceptance of the working notes papers
  • 28.06.2019: camera-ready working notes papers
  • 09-12.09.2019: CLEF 2019, Lugano, Switzerland

The CLEF Conference


The ImageCLEF lab and all its tasks are part of the Conference and Labs of the Evaluation Forum: CLEF 2019. CLEF 2019 consists of independent peer-reviewed workshops on a broad range of challenges in the fields of multilingual and multimodal information access evaluation, and a set of benchmarking activities carried out in various labs designed to test different aspects of mono- and cross-language information retrieval systems. More details can be found on the CLEF 2019 conference website, along with further information about the CLEF Initiative.

Participant registration

CrowdAI is shutting down and moving to AIcrowd. Please temporarily ignore the information below this paragraph. During the transition phase (until all challenges are migrated), we will handle dataset distribution and End User Agreement (EUA) processing ourselves. If no information is available on the task page, please write an e-mail to the responsible people. For examples of how to fill in the EUAs, please have a look at the attached files at the bottom of this page.
  1. Each participant has to register on the platform with a username, email and password. A representative team name should be used
    as the username.
  2. In order to be compliant with the CLEF requirements, participants also have to fill in the following additional fields on their profile:
    • First name
    • Last name
    • Affiliation
    • Address
    • City
    • Country
  3. Participants now have to access the dataset tab, where they will find a download link to the task's End User Agreement (EUA). At the same place they will also be able to upload a filled-in and signed EUA.

    Participants have to fill in and submit one EUA for each ImageCLEF task they want to participate in. An ImageCLEF participant is considered registered for a task as soon as they have uploaded a valid EUA that has been approved by an ImageCLEF organizer (examples of how the EUAs for the medical tasks should be filled in can be found here: Caption EUA example, Tuberculosis EUA example and VQA EUA example).

    Registrations are handled on a per-task basis. This means that if a task has multiple challenges (subtasks), a participant automatically gains access to the data of all challenges in that task, since there is one common dataset per task. We do not separate datasets on a per-challenge basis.

PS: You do not have to remember all of the steps mentioned above, as you will be given instructions on what to do as soon as you try to access a challenge's dataset tab.

The Tasks

ImageCLEF 2019 proposes four main tasks:

  • ImageCLEFcoral: The increasing use of structure-from-motion photogrammetry for modelling large-scale environments from action cameras has driven the next generation of visualisation techniques. The task addresses the problem of automatically segmenting and labeling a collection of images of an underwater environment for the monitoring of coral reef structure and composition.
  • ImageCLEFlifelog: An increasingly wide range of personal devices that capture pictures, videos, and audio clips for every moment of our lives, such as smartphones, video cameras, and wearables, is becoming available. In this context, the task addresses the problems of lifelogging data retrieval and summarization.
  • ImageCLEFmedical: Medical images are used in a variety of scenarios. The task addresses the challenges of automatically predicting tuberculosis type from 3D chest CT scans and mapping visual information to textual descriptions. The objective is to combine medical tasks into a common task with several subtasks to foster collaboration.
  • ImageCLEFsecurity: File forgery detection is a very serious problem for digital forensics examiners. Fraud and counterfeiting are common motives for altering files. Steganography is the practice of concealing a file, message, image, or video within another file, message, image, or video. The task addresses the problems of automatically identifying forged content and retrieving hidden information.

The Organising Committee

Overall coordination

  • Bogdan Ionescu <bionescu(at)>, University Politehnica of Bucharest, Romania
  • Henning Müller <henning.mueller(at)>, University of Applied Sciences Western Switzerland, Sierre, Switzerland
  • Renaud Péteri <renaud.peteri(at)>, University of La Rochelle, France

Technical support

  • Ivan Eggel <ivan.eggel(at)>, University of Applied Sciences Western Switzerland, Sierre, Switzerland
  • Mihai Dogariu <dogariu_mihai8(at)>, University Politehnica of Bucharest, Romania