ImageCLEFmed MEDIQA-Sum

Motivation

Clinical notes are documents routinely created by clinicians after every patient encounter. They record a patient's health conditions as well as past or planned tests and treatments. These notes not only document discussions, but also provide a snapshot of the patient's condition to the entire care team and serve as primary sources for legal and billing purposes. However, note writing is very time-consuming and costly.

With recent advances in speech-to-text, there has been unprecedented interest in automatic note creation from medical conversations. This task encompasses spoken language understanding and clinical note generation, with the distinct challenge of bridging colloquial conversation and formal medical prose.

MEDIQA-Sum 2023 tackles the automatic generation of clinical notes summarizing clinician-patient encounter conversations through three subtasks. In future editions, we plan to extend the task to multiple modalities, such as medical images in addition to the input conversation, to support healthcare providers with reliable solutions for clinical note generation.

We are also organizing the MEDIQA-Chat 2023 shared tasks on the generation and summarization of medical conversations at ACL-ClinicalNLP 2023. For more information: https://sites.google.com/view/mediqa2023

News

  • 10/21/2022: Website goes live.
  • 01/12/2023: Team Registration opens.

Task Description

The MEDIQA-Sum task focuses on the automatic summarization and classification of doctor-patient conversations through three subtasks:

  • Subtask A - Dialogue2Topic Classification. Given a conversation snippet between a doctor and patient, participants are tasked with identifying the topic (the associated section header). The topic/section header will be one of twenty normalized common section labels (e.g., Assessment, Diagnosis, Exam, Medications, Past Medical History); a minimal baseline sketch for this subtask follows the list below.
  • Subtask B - Dialogue2Note Summarization. Given a conversation snippet between a doctor and patient and a section header, participants are tasked with producing a clinical note section text summarizing the conversation.
  • Subtask C - Full-Encounter Dialogue2Note Summarization. Given a full encounter conversation between a doctor and patient, participants are tasked with producing a full clinical note summarizing the conversation.
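
As an illustration of Subtask A, here is a minimal baseline sketch, assuming a TF-IDF bag-of-words representation and a linear classifier. This is not an official starter kit: the example snippets and the exact spelling of the section labels are invented for demonstration, and the real training data will be released according to the schedule below.

    # Minimal hypothetical baseline for Subtask A (Dialogue2Topic Classification).
    # Not official task code: snippets and label strings below are invented examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy stand-ins for (dialogue snippet, section header) training pairs.
    train_dialogues = [
        "Doctor: Are you taking any medications? Patient: Just lisinopril daily.",
        "Doctor: Any surgeries in the past? Patient: I had my appendix out in 2005.",
        "Doctor: Let me listen to your lungs. Take a deep breath for me.",
    ]
    train_labels = ["MEDICATIONS", "PAST MEDICAL HISTORY", "EXAM"]

    # TF-IDF features over word unigrams/bigrams feeding a logistic regression.
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(train_dialogues, train_labels)

    # Predict the section header for a new snippet.
    test_snippet = ["Doctor: Which pills do you take in the morning?"]
    print(clf.predict(test_snippet)[0])

With only three toy examples the prediction is not meaningful; the point is the input/output shape: free-text dialogue snippets in, one of the twenty normalized section labels out.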

Data

New datasets have been created for the MEDIQA-Sum subtasks. The dataset for subtasks A and B was built from clinical notes and corresponding conversations written by domain experts. The subtask C dataset consists of full doctor-patient encounters and corresponding notes written by medical scribes.

Evaluation methodology

Subtasks B and C will be evaluated with language generation metrics, including ROUGE, BERTScore, and BLEURT, chosen for their high correlation with human judgments. Subtask A (section header prediction) will be evaluated with standard classification metrics such as F1 and accuracy.
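
For illustration, the generation and classification scores could be computed along the following lines. This is a sketch assuming the Hugging Face evaluate library and scikit-learn; it is not the organizers' official evaluation script (see the GitHub repository under Submission Instructions for that), and the example predictions and references are invented.

    # Illustrative scoring sketch (not the official evaluation script).
    # Assumed dependencies: pip install evaluate rouge_score bert_score scikit-learn
    import evaluate
    from sklearn.metrics import accuracy_score, f1_score

    # Subtasks B & C: compare generated note text against reference text.
    predictions = ["The patient reports intermittent chest pain relieved by rest."]
    references = ["Patient reports chest pain that is intermittent and relieved by rest."]

    rouge = evaluate.load("rouge")
    print(rouge.compute(predictions=predictions, references=references))

    bertscore = evaluate.load("bertscore")
    print(bertscore.compute(predictions=predictions, references=references, lang="en"))
    # BLEURT is also loadable via evaluate.load("bleurt"), but it needs an extra
    # checkpoint install; see https://github.com/google-research/bleurt

    # Subtask A: compare predicted section headers against gold labels.
    y_true = ["MEDICATIONS", "EXAM", "ASSESSMENT"]
    y_pred = ["MEDICATIONS", "EXAM", "DIAGNOSIS"]
    print("accuracy:", accuracy_score(y_true, y_pred))
    print("macro F1:", f1_score(y_true, y_pred, average="macro"))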

Participant registration

A team can be registered by a representative member by completing the ImageCLEF Registration Form (an End User Agreement, EUA, is not required for the MEDIQA tasks).

Please refer to the general ImageCLEF registration instructions.

Tentative Schedule

  • 20 March 2023: Release of the training and validation sets
  • 3 May 2023: Release of the test set for subtask A
  • 5 May 2023: Run submission deadline for subtask A
  • 8 May 2023: Release of the test sets for subtasks B & C (along with the ground truth for subtask A)
  • 10 May 2023: Run submission deadline for subtasks B & C
  • 17 May 2023: Release of the processed results by the task organizers
  • 5 June 2023: Deadline for submitting working notes papers by the participants
  • 23 June 2023: Notification of acceptance of the working notes papers
  • 7 July 2023: Camera-ready copies of participant papers
  • 18-21 September 2023: CLEF 2023, Thessaloniki, Greece

Submission Instructions

GitHub Repo: https://github.com/ImageCLEF/2023_ImageCLEFmed_Mediqa

CEUR Working Notes

Submission Link: https://easychair.org/my/conference?conf=clef2023
Select the following track: Multimedia Retrieval Challenge in CLEF (ImageCLEF)

Citations

* Task Overview Paper (when referring to the MEDIQA-Sum subtasks, datasets, and results):

@inproceedings{MEDIQA-Sum2023,
author = {Wen{-}wai Yim and Asma {Ben Abacha} and Neal Snider and Griffin Adams and Meliha Yetisgen},
title = {Overview of the MEDIQA-Sum Task at ImageCLEF 2023: Summarization and Classification of Doctor-Patient Conversations},
booktitle = {CLEF 2023 Working Notes},
series = {{CEUR} Workshop Proceedings},
year = {2023},
publisher = {CEUR-WS.org},
month = {September 18-21},
address = {Thessaloniki, Greece}
}

* Lab Overview Paper (when referring to the ImageCLEF 2023 tasks):
@inproceedings{ImageCLEF2023,
author = {Bogdan Ionescu and Henning M\"uller and Ana{-}Maria Dr\u{a}gulinescu and Wen{-}wai Yim and Asma {Ben Abacha} and Neal Snider and Griffin Adams and Meliha Yetisgen and Johannes R\"uckert and Alba {Garc\'{\i}a Seco de Herrera} and Christoph M. Friedrich and Louise Bloch and Raphael Br\"ungel and Ahmad Idrissi{-}Yaghir and Henning Sch\"afer and Steven A. Hicks and Michael A. Riegler and Vajira Thambawita and Andrea Storås and Pål Halvorsen and Nikolaos Papachrysos and Johanna Schöler and Debesh Jha and Alexandra{-}Georgiana Andrei and Ahmedkhan Radzhabov and Ioan Coman and Vassili Kovalev and Alexandru Stan and George Ioannidis and Hugo Manguinhas and Liviu{-}Daniel \c{S}tefan and Mihai Gabriel Constantin and Mihai Dogariu and J\'er\^ome Deshayes and Adrian Popescu},
title = {{Overview of ImageCLEF 2023}: Multimedia Retrieval in Medical, Social Media and Recommender Systems Applications},
booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction},
series = {Proceedings of the 14th International Conference of the CLEF Association (CLEF 2023)},
year = {2023},
publisher = {Springer Lecture Notes in Computer Science (LNCS)},
pages = {},
month = {September 18-21},
address = {Thessaloniki, Greece}
}

Contact

Organizers:

  • Wen-wai Yim <yimwenwai(at)microsoft.com>, Microsoft, USA
  • Asma Ben Abacha <abenabacha(at)microsoft.com>, Microsoft, USA
  • Neal Snider <neal.snider(at)nuance.com>, Microsoft/Nuance, USA
  • Griffin Adams <griffin.adams(at)columbia.edu>, Columbia University, USA
  • Meliha Yetisgen <melihay(at)uw.edu>, University of Washington, USA