ImageCLEF 2026

News

Registration for all ImageCLEF tasks has started!

Motivation

ImageCLEF 2026 is organized as part of the CLEF Initiative Labs.

The target audience for ImageCLEF 2026 is mainly expected to be from the multimodal data annotation and retrieval community, from fields such as computer vision, image information retrieval and digital image processing. Due to the success and the specific nature of the medical tasks, a significant part of the audience will come from the medical informatics, machine learning and pattern recognition community.

Stay tuned for the latest information and updates by joining us on the ImageCLEF social media accounts: Twitter #imageclef and Facebook @ImageClef.

ImageCLEF2026 schedule

Each of the tasks sets its own schedule, so please check the corresponding task webpage for specific dates. A (tentative) global schedule can be found below:

  • 26.01.2026: Registration opens for all ImageCLEF tasks
  • 02.02.2026: Development dataset released (depends on task)
  • 09.03.2026: Test dataset released (depends on task)
  • 23.04.2026: Registration closes for all ImageCLEF tasks
  • 07.05.2026: Deadline for submitting participant runs
  • 14.05.2026: Release of the processed results by the task organizers (depends on task)
  • 28.05.2026: Submission of participant papers [CEUR-WS]
  • 30.06.2026: Notification of acceptance
  • 21.09.2026: CLEF 2026, Jena, Germany

The CLEF Conference

The CLEF 2026 conference will be hosted by Friedrich-Schiller-Universität Jena, Germany, from 21-24 September 2026.
The ImageCLEF lab and all its tasks are part of the Conference and Labs of the Evaluation Forum: CLEF 2026. The conference consists of independent peer-reviewed workshops on a broad range of challenges in the fields of multilingual and multimodal information access evaluation, together with a set of benchmarking activities carried out in various labs designed to test different aspects of mono- and cross-language information retrieval systems. More details about the conference can be found here. There is also more information about the CLEF Initiative.

Programme of ImageCLEF at the CLEF 2026 Conference

Participant registration

ImageCLEF is an evaluation lab. Starting from this edition, it will be managed by the ImageCLEF team through the AI4Media benchmarking platform (based on Codalab). The platform's main features are an online registration system, end-user agreement submission, data distribution, a leaderboard system, and interaction with participants, all of which reduce the administrative overhead. The platform is now open for participant registration.

https://ai4media-bench.aimultimedialab.ro/competitions/public/

Instructions for registration:

  • Create an account on the AI4MediaBench benchmarking platform.
  • Complete the Terms and Conditions form, available on any task page under
    'Get Started → Terms'. This form is mandatory. Within the form, you may select
    one or more ImageCLEF tasks from the provided list. Once submitted, your application
    will apply to all selected tasks hosted on AI4MediaBench.
    Please ensure you use the same email address for both the platform account and the form.
  • Apply to each selected task via the 'Participate' tab on the corresponding task pages.
  • After the form is validated, you will receive a confirmation email and gain access to
    all tasks you have registered for.
Please do not register separately on the official CLEF 2026 registration page; registrations on the AI4Media Bench platform will automatically be transmitted to CLEF.
If a (sub)task is not present on the AI4Media Bench platform, please contact the task organizers (listed at the bottom of each task page).

The Tasks

ImageCLEF 2026 proposes 5 main tasks:

(10th edition) ImageCLEFmedical: Multimodal data can be used in different scenarios. For example, manually annotating medical images with structured knowledge is a time-consuming process prone to human error. Because this process supports better and easier diagnosis of diseases that are amenable to radiology screening, it is important to better understand and refine automatic systems that aid in the broad task of radiology-image metadata generation. Thus, we propose:

  • (10th edition) Automatic Image Captioning challenges participants to detect radiological concepts in medical images, generate coherent image-level captions, and provide human-interpretable explanations for those captions.
  • (4th edition) Synthetic Medical Images Created via GANs: investigates privacy risks in medical image generation by challenging participants to detect training data usage, analyze latent-space leakage, and generate realistic yet privacy-preserving CT images from generative models.
  • (4th edition) Visual Question Answering: The MEDVQA-GI 2026 challenge advances clinically grounded visual question answering for GI endoscopy by evaluating accurate diagnosis-oriented answers together with safe, explainable, and medically justified multimodal reasoning.
  • (1st edition) MEDIQA-CORE: Multimodal Reasoning and Clinical Reconciliation in Radiology challenges participants to integrate multimodal radiology and pathology data to correct brain tumor classifications and to detect and summarize clinically meaningful discrepancies between conflicting radiology reports and images.

(3rd edition) ImageCLEFtoPicto: This task is designed for individuals with language impairments who use pictograms as a communication aid. The main usage scenarios involve converting text into a meaningful sequence of pictograms, facilitating communication between verbal individuals and AAC users, and predicting the next pictogram in a sequence.

(2nd edition) ImageCLEF MultimodalReasoning: Vision-Language Models (VLMs) excel in tasks combining vision and language, like image captioning and basic visual question answering. However, they often falter in deep logical reasoning and handling complex dependencies or hypothetical scenarios. This task aims to evaluate modern LLMs' reasoning abilities on intricate, multilingual inputs across diverse subjects.

(1st edition) Deepfake Detection and Generation: challenges participants to both generate and detect audio and image deepfakes, jointly evaluating how realistic synthetic media can be made and how robust detection systems are against increasingly sophisticated forgeries.

(1st edition) AI4Agriculture: (1) challenges participants to estimate viticultural potential on an ordinal scale from multi-temporal, multi-spectral Sentinel-2 satellite image time series, enabling pre-planting land-suitability assessment through scalable remote-sensing methods, and (2) focuses on advancing agricultural monitoring through remote sensing by leveraging advanced satellite data.

Overview Paper

The Organising Committee

Overall coordination

  • Bogdan Ionescu <bogdan.ionescu(at)upb.ro>, National University of Science and Technology Politehnica Bucharest, Romania
  • Henning Müller <henning.mueller(at)hevs.ch>, University of Applied Sciences Western Switzerland, Sierre, Switzerland
  • Cristian Stanciu <dan.stanciu1203(at)upb.ro>, National University of Science and Technology Politehnica Bucharest, Romania

Technical support

  • Ivan Eggel <ivan.eggel(at)hevs.ch>, University of Applied Sciences Western Switzerland, Sierre, Switzerland
  • Liviu-Daniel Ștefan <liviu_daniel.stefan(at)upb.ro>, National University of Science and Technology Politehnica Bucharest, Romania