
ImageCLEFmed GANs


Welcome to the second edition of the GANs Task!

This challenge is part of the ImageCLEFmedical track.

The first task focuses on examining the hypothesis that GANs generate medical images containing certain "fingerprints" of the real images used to train the generative network. If the hypothesis is correct, artificial biomedical images may be subject to the same sharing and usage limitations as real sensitive medical data. If it is wrong, generative networks could potentially be used to create rich datasets of biomedical images that are free of ethical and privacy restrictions. Participants will test the hypothesis on two levels: identifying the source dataset used for training, and exploring the detection and, possibly, isolation of regions in generated images that inherit patterns present in the original ones.

New to this edition is the study of generative models' "fingerprints". The second task investigates the hypothesis that generative models imprint distinctive "fingerprints" onto the images they generate.

As in the previous year, the provided 2D gray-scale images depict axial slices of CT scans of tuberculosis patients taken at different stages of their treatment. A wide variety of image generation methods is available nowadays.


Task Description

Task 1. Identify training data “fingerprints”.

We continue to investigate the hypothesis that generative models produce medical images that are in some way similar to those used for GAN training. The task addresses the security and privacy concerns related to personal medical image data in the context of generating and using artificial images in various real-life scenarios.

The objective of the task is to detect "fingerprints" within synthetic biomedical image data that reveal which real images were used to train the generator. The task involves analyzing the test image datasets and assessing the probability that certain images of real patients were, or were not, used for training the image generators.

Note that identifying artificial images, or classifying image datasets into real and artificial ones, is NOT part of the task.

Task 2. Detect generative models’ “fingerprints”.

Explore the hypothesis that generative models imprint unique fingerprints onto the images they generate. The focus is on understanding whether different generative models or architectures leave discernible signatures within the synthetic images they produce.

Given a set of synthetic images produced by various generative models, the objective is to identify and detect the distinct "fingerprints" associated with each model. This involves analyzing the characteristics, patterns, or features embedded in the synthetic images.

The goal is not only to distinguish between images created by different models, but also to uncover the specific traits that define each model's output. This investigation contributes to a deeper understanding of the unique imprint left by generative models on the images they generate, enabling model attribution.

Note that this is a clustering problem, and the number of clusters in the training and development datasets may differ from the number of clusters in the test dataset.


The benchmarking image data are axial slices of 3D CT images of about 8,000 lung tuberculosis patients. This means that some slices may appear fairly "normal" whereas others may contain lung lesions, including severe ones. The images are stored as 8 bit/pixel PNG files with dimensions of 256x256 pixels.

The artificial slice images are also 256x256 pixels in size and are obtained using different generative models (Generative Adversarial Networks, diffusion models).
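As a quick sanity check on downloaded data, the stated format (256x256 pixels, 8 bit/pixel PNG) can be verified directly from the PNG IHDR header. The sketch below is stdlib-only and illustrative; it is not part of any official tooling:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_header_info(data: bytes):
    """Return (width, height, bit_depth) parsed from a PNG byte stream."""
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    # The IHDR chunk follows the signature: 4-byte length, b"IHDR",
    # then width (4 bytes), height (4 bytes), bit depth (1 byte), big-endian.
    if data[12:16] != b"IHDR":
        raise ValueError("IHDR chunk not found where expected")
    width, height = struct.unpack(">II", data[16:24])
    bit_depth = data[24]
    return width, height, bit_depth

def looks_like_task_slice(data: bytes) -> bool:
    """True if the stream matches the announced 256x256, 8-bit format."""
    return png_header_info(data) == (256, 256, 8)
```

Pillow or OpenCV would of course do the same job; the point here is only that the check needs nothing beyond the standard library.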

The development and test data are available here:
Task 1 -
Task 2 -

Evaluation methodology

To assess performance in Task 1, the following metrics will be used: F1-score, accuracy, and recall. The F1-score is the official metric of the task.
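All three metrics reduce to counts of true/false positives and negatives. The following is an illustrative, dependency-free sketch; the official evaluation presumably uses a library implementation such as scikit-learn's, which is an assumption on our part:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, recall, and F1-score for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "recall": recall, "f1": f1}
```

In this task, label 1 would mean "this real image was used to train the generator".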

For Task 2, the Adjusted Rand Index (ARI) will be used.
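The Adjusted Rand Index compares two partitions via pair counts and corrects for chance, so it is invariant to cluster relabeling and tolerates a differing number of clusters between prediction and ground truth, which matters here since the test set may have a different cluster count. A minimal stdlib-only sketch (the official scoring will likely rely on a library routine such as scikit-learn's `adjusted_rand_score`; that is an assumption):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """ARI between two flat clusterings, computed from pair counts."""
    n = len(labels_true)
    # Count items falling into each (true cluster, predicted cluster) cell.
    contingency = Counter(zip(labels_true, labels_pred))
    sum_cells = sum(comb(c, 2) for c in contingency.values())
    sum_true = sum(comb(c, 2) for c in Counter(labels_true).values())
    sum_pred = sum(comb(c, 2) for c in Counter(labels_pred).values())
    total_pairs = comb(n, 2)
    expected = sum_true * sum_pred / total_pairs   # chance-level agreement
    max_index = (sum_true + sum_pred) / 2
    if max_index == expected:                      # degenerate partitions
        return 1.0
    return (sum_cells - expected) / (max_index - expected)
```

A perfect clustering scores 1.0 regardless of how the cluster labels are numbered; random assignments score near 0.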

Participant registration

Please refer to the general ImageCLEF registration instructions.
Registration is done for each task separately on the AI4MediaBench platform:

- Ensure that you fill in the registration form on the Terms page.
- The email you use in the registration form should match the one you used for registering on the AI4MediaBench platform.


Schedule

  • 30.11.2023: registration opens for all ImageCLEF tasks
  • 22.04.2024: registration closes for all ImageCLEF tasks
  • 01.03.2024: development data release starts
  • 01.04.2024: test data release starts
  • 12.05.2024: deadline for submitting the participant runs (depends on the task)
  • 13.05.2024: release of the processed results by the task organizers (depends on the task)
  • 31.05.2024 : deadline for submission of working notes papers by the participants
  • 21.06.2024: notification of acceptance of the working notes papers
  • 08.07.2024: camera-ready working notes papers
  • 09-12.09.2024: CLEF 2024, Grenoble, France

Submission Instructions

To be added soon.


CEUR Working Notes

Instructions for the CLEF 2024 Working Notes in the CEUR-WS proceedings are available here.
A summary of the most important points:
• Every participating team that submits at least one run, regardless of the score, should submit a CEUR working notes paper.
• Teams that participated in both tasks should generally submit only one report.
• Submission of reports is done through EasyChair – please make absolutely sure that the author (names and order), title, and affiliation information you provide in EasyChair match the submitted PDF exactly!
• Strict deadline for Working Notes Papers: 31 May 2024 (23:59 CEST)
• Strict deadline for CEUR-WS Camera Ready Working Notes Papers: 08 July 2024 (23:59 CEST)
• Templates are available here
• Working Notes Papers should cite both the ImageCLEF 2024 overview paper as well as the ImageCLEFmedical task overview paper, citation information is available in the Citations section below.


Citations

When referring to ImageCLEF 2024, please cite the following publication:

Bogdan Ionescu, Henning Müller, Ana-Maria Drăgulinescu, Johannes Rückert, Asma Ben Abacha, Alba G. Seco de Herrera, Louise Bloch, Raphael Brüngel, Ahmad Idrissi-Yaghir, Henning Schäfer, Cynthia Sabrina Schmidt, Tabea M.G. Pakull, Hendrik Damm, Benjamin Bracke, Christoph M. Friedrich, Alexandra-Georgiana Andrei, Yuri Prokopchuk, Dzmitry Karpenka, Ahmedkhan Radzhabov, Vassili Kovalev, Cécile Macaire, Didier Schwab, Benjamin Lecouteux, Emmanuelle Esperança-Rodier, Wen-wai Yim, Yujuan Fu, Zhaoyi Sun, Meliha Yetisgen, Fei Xia, Steven A. Hicks, Michael A. Riegler, Vajira Thambawita, Andrea Storås, Pål Halvorsen, Maximilian Heinrich, Johannes Kiesel, Martin Potthast, Benno Stein, Overview of the ImageCLEF 2024: Multimedia Retrieval in Medical Applications, in Experimental IR Meets
Multilinguality, Multimodality, and Interaction. Proceedings of the 15th International Conference of the CLEF Association (CLEF 2024), Springer Lecture Notes in Computer Science LNCS, Grenoble, France, 9-12 September, 2024.

author = {Bogdan Ionescu and Henning M\"uller and Ana{-}Maria Dr\u{a}gulinescu and Johannes R\"uckert and Asma {Ben Abacha} and Alba {Garc\'{\i}a Seco de Herrera} and Louise Bloch and Raphael Br\"ungel and Ahmad Idrissi{-}Yaghir and Henning Sch\"afer and Cynthia Sabrina Schmidt and Tabea M.G. Pakull and Hendrik Damm and Benjamin Bracke and
Christoph M. Friedrich and Alexandra{-}Georgiana Andrei and Yuri Prokopchuk and Dzmitry Karpenka and Ahmedkhan Radzhabov and Vassili Kovalev and C\'ecile Macaire and Didier Schwab and Benjamin Lecouteux and Emmanuelle Esperan\c{c}a{-}Rodier and Wen{-}wai Yim and Yujuan Fu and Zhaoyi Sun and Meliha Yetisgen and Fei Xia and Steven A. Hicks and
Michael A. Riegler and Vajira Thambawita and Andrea Stor\r{a}s and P\r{a}l Halvorsen and Maximilian Heinrich and Johannes Kiesel and Martin Potthast and Benno Stein},
title = {{Overview of ImageCLEF 2024}: Multimedia Retrieval in Medical Applications},
booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction},
series = {Proceedings of the 15th International Conference of the CLEF Association (CLEF 2024)},
year = {2024},
publisher = {Springer Lecture Notes in Computer Science LNCS},
pages = {},
month = {September 9-12},
address = {Grenoble, France}

When referring to ImageCLEFmedical 2024 GANs, please cite the following publication:

Alexandra-Georgiana Andrei, Ahmedkhan Radzhabov, Dzmitry Karpenka, Yuri Prokopchuk, Vassili Kovalev, Bogdan Ionescu, Henning Müller. Overview of 2024 ImageCLEFmedical GANs Task – Investigating Generative Models' Impact on Biomedical Synthetic Images, in CLEF2024 Working Notes, CEUR Workshop Proceedings, Grenoble, France, September 9-12, 2024.

author = {Alexandra{-}Georgiana Andrei and Ahmedkhan Radzhabov and Dzmitry Karpenka and Yuri Prokopchuk and Vassili Kovalev and Bogdan Ionescu and Henning M\"uller},
title = {{Overview of 2024 ImageCLEFmedical GANs Task} -- {Investigating Generative Models' Impact on Biomedical Synthetic Images}},
booktitle = {CLEF2024 Working Notes},
series = {{CEUR} Workshop Proceedings},
year = {2024},
volume = {},
publisher = {},
pages = {},
month = {September 9-12},
address = {Grenoble, France}



Organizers

  • Alexandra Andrei <alexandra.andrei(at)>, Politehnica University of Bucharest, Romania
  • Ahmedkhan Radzhabov <axmegxah(at)>, Belarus State University, Minsk, Belarus
  • Vassili Kovalev <vassili.kovalev(at)>, Belarusian Academy of Sciences, Minsk, Belarus
  • Bogdan Ionescu <bogdan.ionescu(at)>, Politehnica University of Bucharest, Romania
  • Henning Müller <henning.mueller(at)>, University of Applied Sciences Western Switzerland, Sierre, Switzerland

