ImageCLEF 2018


ImageCLEF 2018 is an evaluation campaign organized as part of the CLEF initiative labs. The campaign offers several research tasks that welcome participation from teams around the world. The results of the campaign appear in the working notes proceedings, published by CEUR Workshop Proceedings. Selected contributions among the participants will be invited for publication in the following year in the Springer Lecture Notes in Computer Science (LNCS), together with the annual lab overviews.

For the 2018 edition, ImageCLEF organises three main tasks, with the global objective of benchmarking lifelogging summarization and retrieval, bio-medical image concept detection and caption prediction, and tuberculosis severity score prediction from CT images, as well as a pilot task on medical visual question answering.

Target communities of the tasks involve (but are not limited to): information retrieval (text, vision, audio, multimedia, social media, sensor data), machine learning (including deep learning), data mining, natural language processing, image/video processing, remote sensing, with special attention to the challenges of multi-modality, multi-linguality, and interactive search.

Download the ImageCLEF 2018 call-for-participation flyer (ImageCLEF2018flyer.pdf).

Stay tuned for the latest information and updates by following the ImageCLEF social media accounts on Twitter (@imageclef) and Facebook.

ImageCLEF schedule

Each of the tasks sets its own schedule, so please check the corresponding task webpage for specific dates. A (tentative) global schedule can be found below:

  • 08.11.2017: registration opens for all ImageCLEF tasks (until 27.04.2018)
  • 08.11.2017: development data release starts (depends on the task)
  • 20.03.2018: test data release starts (depends on the task)
  • 01.05.2018: deadline for submitting the participants' runs (depends on the task)
  • 15.05.2018: release of the processed results by the task organizers (depends on the task)
  • 31.05.2018: deadline for submission of working notes papers by the participants
  • 15.06.2018: notification of acceptance of the working notes papers
  • 29.06.2018: camera ready working notes papers
  • 10-14.09.2018: CLEF 2018, Avignon, France

The CLEF Conference


The ImageCLEF lab and all its tasks are part of the Conference and Labs of the Evaluation Forum: CLEF 2018. CLEF 2018 consists of an independent peer-reviewed workshop on a broad range of challenges in the fields of multilingual and multimodal information access evaluation, and a set of benchmarking activities carried out in various labs designed to test different aspects of mono- and cross-language information retrieval systems. More details can be found on the CLEF 2018 conference website, along with further information about the CLEF Initiative.

Programme of ImageCLEF at the CLEF 2018 Conference

Information will be posted closer to the conference.

Participant registration

  1. Each participant has to register with a username, email and password. A representative team name should be used
    as the username.
  2. In order to be compliant with the CLEF requirements, participants also have to fill in the following additional fields on their profile:
    • First name
    • Last name
    • Affiliation
    • Address
    • City
    • Country
  3. Participants then have to access the dataset tab, where they will find a download link to the task's End User Agreement (EUA). In the same place they can upload the filled-in and signed EUA.

    Participants have to fill in and submit one EUA for each ImageCLEF task they want to participate in. An ImageCLEF participant is considered registered for a task as soon as they have uploaded a valid EUA that has been approved by an ImageCLEF organizer.

    Registrations are handled on a per-task basis. This means that if a task has multiple challenges (subtasks), a participant automatically gains access to the data of all challenges in that task, because there is one common dataset per task. Datasets are not separated on a per-challenge basis.

PS: You do not have to remember all of the steps mentioned above, as you will be given instructions on what to do as soon as you try to access a challenge's dataset tab.

The Tasks

ImageCLEF 2018 proposes three main tasks and a pilot task:

  • ImageCLEFlifelog: An increasingly wide range of personal devices, such as smartphones, video cameras, and wearable devices that allow capturing pictures, videos, and audio clips of every moment of our lives, is becoming available. Considering the huge volume of data created, there is a need for systems that can automatically analyse the data in order to categorize and summarize it, and also answer queries to retrieve the information the user may need. The task addresses the problems of lifelogging data understanding, summarization and retrieval.
  • ImageCLEFcaption: Interpreting and summarizing the insights gained from medical images such as radiology output is a time-consuming task that involves highly trained experts and often represents a bottleneck in clinical diagnosis pipelines. Consequently, there is a considerable need for automatic methods that can approximate this mapping from visual information to condensed textual descriptions. The task addresses the problem of bio-medical image concept detection and caption prediction from large amounts of training data.
  • ImageCLEFtuberculosis: The main objective of the task is to provide a tuberculosis severity score based on the automatic analysis of lung CT images of patients. Being able to extract this information from the image data alone would make it possible to limit lung washing and laboratory analyses for determining the tuberculosis type and drug resistances. This can lead to quicker decisions on the best treatment strategy, reduced use of antibiotics and a lower impact on the patient.
  • ImageCLEF-VQA-Med (pilot task): Visual Question Answering is a new and exciting problem that combines natural language processing and computer vision techniques. With the ongoing drive for improved patient engagement and access to the electronic medical records via patient portals, patients can now review structured and unstructured data from labs and images to text reports associated with their healthcare utilization. Such access can help them better understand their conditions in line with the details received from their healthcare provider. Given a medical image accompanied with a set of clinically relevant questions, participating systems are tasked with answering the questions based on the visual image content.

The Organising Committee

Overall coordination

  • Bogdan Ionescu <bionescu(at)>, University Politehnica of Bucharest, Romania
  • Mauricio Villegas <mauricio(at)>, SearchInk, Germany
  • Henning Müller <henning.mueller(at)>, University of Applied Sciences Western Switzerland, Sierre, Switzerland

Technical support

  • Ivan Eggel <ivan.eggel(at)>, University of Applied Sciences Western Switzerland, Sierre, Switzerland
  • Mihai Dogariu <dogariu_mihai8(at)>, University Politehnica of Bucharest, Romania

ImageCLEFlifelog

  • Duc-Tien Dang-Nguyen <duc-tien.dang-nguyen(at)>, Dublin City University, Ireland
  • Luca Piras <luca.piras(at)>, University of Cagliari, Cagliari, Italy
  • Michael Riegler <michael(at)>, University of Oslo, Norway
  • Liting Zhou <zhou.liting2(at)>, Dublin City University, Ireland
  • Mathias Lux <mlux(at)>, Klagenfurt University, Austria
  • Cathal Gurrin <cgurrin(at)>, Dublin City University, Ireland

ImageCLEFtuberculosis

  • Vassili Kovalev <vassili.kovalev(at)>, Institute for Informatics, Minsk, Belarus
  • Henning Müller <henning.mueller(at)>, University of Applied Sciences Western Switzerland, Sierre, Switzerland
  • Vitali Liauchuk <vitali.liauchuk(at)>, Institute for Informatics, Minsk, Belarus
  • Yashin Dicente Cid <yashin.dicente(at)>, University of Applied Sciences Western Switzerland, Sierre, Switzerland

ImageCLEF-VQA-Med

  • Sadid Hasan <sadid.hasan(at)>, Philips Research Cambridge, USA
  • Yuan Ling <yuan.ling(at)>, Philips Research Cambridge, USA
  • Oladimeji Farri <dimeji.farri(at)>, Philips Research Cambridge, USA
  • Joey Liu <joey.liu(at)>, Philips Research Cambridge, USA
  • Henning Müller <henning.mueller(at)>, University of Applied Sciences Western Switzerland, Sierre, Switzerland
  • Matthew Lungren <mlungren(at)>, Stanford University Medical Center, USA
ImageCLEF2018flyer.pdf (234.84 KB)