ImageCLEFmed MEDIQA-Sum

Motivation

Clinical notes are documents routinely created by clinicians after every patient encounter. They record a patient's health conditions as well as past or planned tests and treatments. These notes not only document discussions, they also provide a snapshot of the patient's condition to the entire care team and serve as primary sources for legal and billing purposes. However, note writing is time-consuming and costly.

With recent advances in speech-to-text, there has been unprecedented interest in automatic note creation from medical conversations. This task encompasses spoken language understanding and clinical note generation, with the distinct challenge of bridging colloquial conversation and formal medical prose.

MEDIQA-Sum 2023 tackles the automatic generation of clinical notes summarizing clinician-patient encounter conversations through three subtasks. We plan to extend the task in future editions to include multiple modalities such as medical images in addition to the input conversation to support healthcare providers with reliable solutions for clinical note generation.

We are also organizing the MEDIQA-Chat 2023 shared tasks on the generation and summarization of medical conversations at ACL-ClinicalNLP 2023. For more information: https://sites.google.com/view/mediqa2023

News

  • 10/21/2022: Website goes live.
  • 01/12/2023: Team Registration opens.

Task Description

The MEDIQA-Sum task focuses on the automatic summarization and classification of doctor-patient conversations through three subtasks:

  • Subtask A - Dialogue2Topic Classification.  Given a conversation snippet between a doctor and patient, participants are tasked with identifying the topic (associated section header). Topics/Section headers will be one of twenty normalized common section labels (e.g. Assessment, Diagnosis, Exam, Medications, Past Medical History).
  • Subtask B - Dialogue2Note Summarization. Given a conversation snippet between a doctor and patient and a section header, participants are tasked with producing a clinical note section text summarizing the conversation.
  • Subtask C - Full-Encounter Dialogue2Note Summarization. Given a full encounter conversation between a doctor and patient, participants are tasked with producing a full clinical note summarizing the conversation.

Data

New datasets have been created for the MEDIQA-Sum subtasks. The dataset for subtasks A and B was built from clinical notes and corresponding conversations written by domain experts. The subtask C dataset consists of full doctor-patient encounters and corresponding notes written by medical scribes.

Evaluation methodology

Subtasks B and C will be evaluated with language generation metrics, including ROUGE, BERTScore, and BLEURT, chosen for their high correlation with human evaluation. Subtask A (section header prediction) will be evaluated with standard classification metrics such as accuracy and F1.
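The official evaluation will rely on established scorer implementations; as an illustration only, the sketch below shows what the two kinds of metrics measure, assuming simple whitespace tokenization: exact-match accuracy over predicted section headers (subtask A) and a unigram-overlap ROUGE-1 F1 between a reference note section and a generated one (subtasks B and C). All labels and strings in the example are hypothetical.

```python
from collections import Counter

def accuracy(gold, pred):
    # Fraction of snippets whose predicted section header matches the gold label (subtask A style).
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def rouge1_f1(reference, candidate):
    # Unigram-overlap ROUGE-1 F1 between a reference and a generated section text (subtask B/C style).
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical predictions for two snippets, and one summary pair:
print(accuracy(["Medications", "Exam"], ["Medications", "Diagnosis"]))        # 0.5
print(round(rouge1_f1("patient reports chest pain", "patient has chest pain"), 3))  # 0.75
```

Real ROUGE scorers add stemming and multi-reference handling, and BERTScore/BLEURT compare learned embeddings rather than surface n-grams, which is why they correlate better with human judgments of generated clinical text.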

Participant registration

A team can be registered by a representative member by completing the ImageCLEF Registration Form (EUA is not required for the MEDIQA tasks).

Please refer to the general ImageCLEF registration instructions.

Once the registration form is completed and approved, you will have access to the GitHub repositories used for data release and run submission.

Tentative Schedule

  • 1 March 2023: Release of the training and validation sets
  • 26 April 2023: Release of the test sets
  • 28 April 2023: Deadline for submitting the participant runs
  • 17 May 2023: Release of the processed results by the task organizers
  • 5 June 2023: Deadline for submitting working notes papers by the participants
  • 23 June 2023: Notification of acceptance of the working notes papers
  • 7 July 2023: Camera ready copy of participant papers
  • 18-21 September 2023: CLEF 2023, Thessaloniki, Greece

Submission Instructions

Will be added soon.

CEUR Working Notes

Will be added soon.

Citations

Will be added soon.

Contact

Organizers:

  • Wen-wai Yim <yimwenwai(at)microsoft.com>, Microsoft, USA
  • Asma Ben Abacha  <abenabacha(at)microsoft.com>, Microsoft, USA
  • Neal Snider <neal.snider(at)nuance.com>, Microsoft/Nuance, USA
  • Griffin Adams <griffin.adams(at)columbia.edu>, Columbia University, USA
  • Meliha Yetisgen <melihay(at)uw.edu>, University of Washington, USA