
ImageCLEFmed MEDIQA-MAGIC

Motivation

The rapid development of telecommunication technologies, increased demand for healthcare services, and the needs of the recent pandemic have accelerated the adoption of remote clinical diagnosis and treatment. In addition to live meetings with doctors conducted by telephone or video, asynchronous options such as e-visits, emails, and messaging chats have also proven to be cost-effective and convenient.

In this task, we focus on the problem of Multimodal And Generative TelemedICine (MAGIC) in the area of dermatology. Inputs include text giving the clinical context and query, as well as one or more images. The challenge is to generate an appropriate textual response to the query.

Consumer health question answering has been the subject of past challenges and research; however, these prior works focus only on text [1]. Previous work on visual question answering has focused mainly on radiology images and did not include additional clinical text input [2]. Likewise, while there is much work on dermatology image classification, most of it concerns lesion malignancy classification for dermatoscope images [3].

To the best of our knowledge, this is the first challenge and study of the problem of automatically generating clinical responses given textual clinical history together with user-generated images and queries.

[1] Overview of the MEDIQA 2019 shared task on textual inference, question entailment and question answering. Asma Ben Abacha, Chaitanya Shivade, Dina Demner-Fushman. https://aclanthology.org/W19-5039/

[2] VQA-Med: Overview of the medical visual question answering task at ImageCLEF 2019. Asma Ben Abacha, Sadid A. Hasan, Vivek V. Datla, Joey Liu, Dina Demner-Fushman, and Henning Müller. https://www.semanticscholar.org/paper/VQA-Med%3A-Overview-of-the-Medical...

[3] Artificial Intelligence in Dermatology Image Analysis: Current Developments and Future Trends. Zhouxiao Li, Konstantin Christoph Koban, Thilo Ludwig Schenck, Riccardo Enzo Giunta, Qingfeng Li, and Yangbai Sun. https://pubmed.ncbi.nlm.nih.gov/36431301/

Task Description

Participants will be given textual inputs, which may include clinical history and a query, along with one or more associated images. The task is to generate an appropriate textual response.

Data

Input Content

(a) a JSON list where each instance is represented by a JSON object with the following attributes:

attribute_id    description
encounter_id    unique identification string for the case
author_id       unique identification string for the author
image_ids       list of image_id strings for the associated images
query_title     a string representing the query title
query_content   a string representing the query content

(b) image files with unique IDs

(c) Reference data will additionally have the field:

attribute_id    description
responses       a list of JSON objects with the keys "response_author_id" and "response_content_en"
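
For illustration, a single reference instance could look like the following minimal Python sketch (all identifiers and text values are hypothetical; the responses field is present only in the reference data):

import json

# Hypothetical example of one encounter (all values are illustrative only).
example_instance = {
    "encounter_id": "ENC0001",              # unique case identifier
    "author_id": "AUTH0001",                # unique author identifier
    "image_ids": ["IMG0001", "IMG0002"],    # images accompanying the query
    "query_title": "Itchy rash on forearm",
    "query_content": "This rash appeared two weeks ago and keeps spreading.",
    # Present only in the reference data:
    "responses": [
        {"response_author_id": "RESP0001",
         "response_content_en": "This may be contact dermatitis; consider ..."}
    ],
}

# The full dataset is a JSON list of such objects.
print(json.dumps([example_instance], indent=2))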

Output Content

Output should be a JSON list with at least the following content:

attribute_id    description
encounter_id    unique identification string for the case
response_en     the generated response for the English test set

You are not required to participate in all language evaluations. On submission, you will be able to specify which evaluations you are participating in.
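
As a sketch, a run file matching this format could be assembled as follows (the encounter IDs and response texts are placeholders; include only the response fields for the evaluations you enter):

import json

# Hypothetical system outputs keyed by encounter_id (placeholder values).
predictions = {
    "ENC0001": "This appears consistent with contact dermatitis; ...",
    "ENC0002": "The lesion may be a fungal infection; ...",
}

# Build the JSON list expected for submission.
submission = [
    {"encounter_id": enc_id, "response_en": response}
    for enc_id, response in predictions.items()
]

with open("run1.json", "w", encoding="utf-8") as f:
    json.dump(submission, f, ensure_ascii=False, indent=2)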

Evaluation methodology

Evaluation will use the ROUGE and BLEU metrics; scoring will be based on ROUGE-1.
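
The official scoring is run by the organizers; the minimal Python sketch below only illustrates how ROUGE-1 and BLEU can be computed locally, assuming the third-party rouge-score and sacrebleu packages and placeholder texts:

from rouge_score import rouge_scorer  # pip install rouge-score
import sacrebleu                       # pip install sacrebleu

# Placeholder system outputs and reference responses (one per encounter).
predictions = ["This appears consistent with contact dermatitis."]
references = ["The rash is likely contact dermatitis caused by a new detergent."]

# Average ROUGE-1 F1 over all encounters.
scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
rouge1 = sum(scorer.score(ref, pred)["rouge1"].fmeasure
             for ref, pred in zip(references, predictions)) / len(predictions)

# Corpus-level BLEU over all encounters.
bleu = sacrebleu.corpus_bleu(predictions, [references]).score

print(f"ROUGE-1 F1: {rouge1:.4f}   BLEU: {bleu:.2f}")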

Participant registration

Please refer to the general ImageCLEF registration instructions.

Schedule

  • Train/Validation Release: March 20
  • Test Set Release: May 3
  • Submission Deadline: May 10
  • Working Notes Deadline: June 5

Submission Instructions

Submissions will be made through the AI4MediaBench platform: https://ai4media-bench.aimultimedialab.ro/competitions/20/
A maximum of 10 runs is allowed.

Results

CEUR Working Notes

Please follow the instructions below to prepare and submit your papers:

1. Each participating team can submit one paper describing the developed systems for one or more tasks.
2. The paper should be titled: TEAM at MEDIQA-MAGIC 2024: Sub-Title.
3. Submissions may have a maximum length of eight (8) pages with unlimited pages for references and appendices.
4. Submissions must be made through EasyChair: https://www.easychair.org/my/conference?conf=clef2024 - Select the following track: Multimedia Retrieval Challenge in CLEF (ImageCLEF)
Full Instructions: https://drive.google.com/file/d/1EK4PHw3OpEPwRG488RQDrKqiONlUNhWH/view?u...
5. The working notes are prepared using the CEUR-WS templates
6. Templates available at: https://drive.google.com/file/d/1KPW1f5DBqZ7kX3CnMXgEQ9gSSSAul1KG/view?u...
7. Submissions must include: clear scientific writing, a short overview of the related work, a full description of the developed models and their originality/specificity, a description of any additional data/models that were used, and a discussion of the results. It is important to describe the details well enough that the system could be replicated. You can report additional experiments and results in your papers, but please make it clear that they were obtained after the competition.
8. MEDIQA-MAGIC will follow a double-blind review process; papers must not include the authors' names and affiliations. Reporting your team name, rank, and results is allowed and will not be considered a disclosure of identity.
9. We would expect that most papers will be included in the proceedings.
10. Teams with accepted papers will be invited to present their work at the CLEF conference either as presentations or posters.
11. Paper selection for oral presentations will be based on the following criteria: approach novelty, research insights, and obtained results.
12. Short description: Please send us an email at mediqa.organizers@gmail.com by May 20 with a short description of your methods and systems and any additional information that you want to be included in the overview paper with your paper citations, such as your code or model link(s).
13. Code: Publishing your code is encouraged but not required for your paper acceptance. If you choose to share your code publicly, please do not publish the dataset with it.
Citations: Please cite the following papers when referring to the MEDIQA-MAGIC tasks, datasets, and results:

Shared Task paper:
@Inproceedings{MEDIQA-Magic2024,
author = {Wen{-}wai Yim and Asma {Ben Abacha} and Yujuan Fu and Zhaoyi Sun and Meliha Yetisgen and Fei Xia},
title = {Overview of the MEDIQA-MAGIC Task at ImageCLEF 2024: Multimodal And Generative TelemedICine in Dermatology},
booktitle = {CLEF 2024 Working Notes},
series = {{CEUR} Workshop Proceedings},
year = {2024},
publisher = {CEUR-WS.org},
month = {September 9-12},
address = {Grenoble, France}
}

Dataset paper:
@article{mediqa-m3g-dataset,
author = {Wen{-}wai Yim and Yujuan Fu and Zhaoyi Sun and Asma {Ben Abacha} and Meliha Yetisgen and Fei Xia},
title = {DermaVQA: A Multilingual Visual Question Answering Dataset for Dermatology},
journal = {CoRR},
eprinttype = {arXiv},
year = {2024}
}

Citations

When referring to ImageCLEF 2024, please cite the following publication:

Organizers & Contact

Organizers:

  • Asma Ben Abacha, Microsoft
  • Wen-wai Yim, Microsoft
  • Meliha Yetisgen, University of Washington
  • Fei Xia, University of Washington

For more information: mediqa.organizers@gmail.com
