ICCV CVAMD 2023 Shared Task

CXR-LT: Multi-Label Long-Tailed Classification on Chest X-Rays

Location: Paris, France
Time: TBD, October 2-3, 2023

Click here to participate: https://codalab.lisn.upsaclay.fr/competitions/12599

Submission Instructions

Challenge participants who (i) made at least one submission during the test phase and (ii) submitted reproducible code are encouraged to write up their solutions to be presented at ICCV CVAMD 2023! Please fully describe your methodology and results in the format of either a 4-page extended abstract or 8-page long paper, using the template provided above. Unlike ICCV 2023, peer review will be single-blind since the identities of participants will be known from the public leaderboard; for this reason, please include full author names and affiliations (i.e., do NOT anonymize your submission).

We intend to accept 5-6 CXR-LT competition track papers to CVAMD 2023, nominating 2-3 of these papers to be presented as orals. Note that 4-page submissions may be accepted and presented as posters; however, they will NOT be published in the ICCV proceedings. Only full 8-page submissions may be published in the proceedings and presented as orals during the workshop.

When submitting, be sure to click "+ Create new submission..." and select "CXR-LT-2023" to indicate you are submitting to the competition track of the CVAMD workshop.


Chest radiography, like many diagnostic medical exams, produces a long-tailed distribution of clinical findings; while a small subset of diseases is routinely observed, the vast majority of diseases are relatively rare [1]. This poses a challenge for standard deep learning methods, which exhibit bias toward the most common classes at the expense of the important, but rare, “tail” classes [2]. Many methods have been proposed to tackle this specific type of imbalance [3], though only recently with attention to long-tailed medical image recognition problems [4-6]. Diagnosis on chest X-rays (CXRs) is also a multi-label problem, as patients often present with multiple disease findings simultaneously; however, only a select few studies incorporate knowledge of label co-occurrence into the learning process [7-9].

Since most large-scale image classification benchmarks contain single-label images with a mostly balanced distribution of labels, many standard deep learning methods fail to accommodate the class imbalance and co-occurrence problems posed by the long-tailed, multi-label nature of tasks like disease diagnosis on CXRs [2].
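To make the imbalance problem concrete, one common remedy (illustrative only, not part of the challenge protocol) is to re-weight a binary cross-entropy loss by inverse class frequency, so rare "tail" findings contribute more to the gradient. The label matrix below is hypothetical:

```python
def positive_weights(labels):
    """Per-class neg/pos count ratio (the common 'pos_weight' convention
    for weighted binary cross-entropy): rare classes get larger weights,
    counteracting the long-tailed label distribution."""
    n = len(labels)
    num_classes = len(labels[0])
    weights = []
    for c in range(num_classes):
        pos = sum(row[c] for row in labels)
        neg = n - pos
        weights.append(neg / pos if pos else 0.0)
    return weights

# Hypothetical multi-label matrix: rows are images, columns are findings.
labels = [
    [1, 0],  # common finding present, rare finding absent
    [1, 0],
    [0, 1],
    [1, 1],
]
print(positive_weights(labels))  # common class 0 gets a smaller weight
```

Such weights are typically passed to the loss function during training (e.g., as `pos_weight` in a weighted BCE); many of the long-tailed methods surveyed in [3] build on more sophisticated variants of this idea.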

To develop a benchmark for long-tailed, multi-label medical image classification, we expand upon the MIMIC-CXR-JPG [10] dataset by enlarging the set of target classes from 14 to 26 (see full details in “Data Description”), generating labels for 12 new disease findings by parsing radiology reports. This follows the procedure of Holste et al. [2], who added 5 new findings to MIMIC-CXR-JPG – Calcification of the Aorta, Subcutaneous Emphysema, Tortuous Aorta, Pneumomediastinum, and Pneumoperitoneum – to study long-tailed learning approaches for CXRs, and Moukheiber et al. [11], who added 5 new classes – Chronic obstructive pulmonary disease, Emphysema, Interstitial lung disease, Calcification, and Fibrosis – to study ensemble methods for few-shot learning on CXRs.

Shared Task


This challenge will use an expanded version of MIMIC-CXR-JPG [10], a large benchmark dataset for automated thorax disease classification. Following Holste et al. [2], 12 new rare disease findings were extracted from radiology reports and added to the label set. The resulting long-tailed (LT) dataset contains 377,110 CXRs, each labeled with at least one of 26 clinical findings (including a "No Finding" class).


Given a CXR, detect all pathologies present (or predict “No Finding” if none present). To do this, you will train multi-label thorax disease classifiers on the provided labeled training data.
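The task definition above can be sketched as a simple inference convention: a multi-label classifier produces one sigmoid probability per finding, and an image with no pathology above some decision threshold maps to "No Finding". This is a minimal illustration; the 0.5 threshold and finding names are assumptions, and since the challenge is scored by ranking (mAP), submissions actually consist of continuous per-class scores rather than hard labels.

```python
def predict_findings(probs, threshold=0.5):
    """Given per-class sigmoid probabilities from a multi-label classifier,
    return the findings predicted present; fall back to 'No Finding' when
    no pathology clears the (assumed) threshold."""
    present = [name for name, p in sorted(probs.items()) if p >= threshold]
    return present if present else ["No Finding"]

print(predict_findings({"Pneumonia": 0.81, "Edema": 0.12}))  # ['Pneumonia']
print(predict_findings({"Pneumonia": 0.20, "Edema": 0.12}))  # ['No Finding']
```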


Models will be evaluated on the provided test set using macro-averaged mean Average Precision (mAP), i.e., Average Precision computed independently for each of the 26 classes and then averaged, so that rare "tail" findings count equally toward the final score.
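A minimal sketch of the metric, using the standard definition of Average Precision (mean of precision evaluated at each true positive, with predictions ranked by decreasing score; this matches the common scikit-learn-style formulation, though the exact challenge scoring script may differ in detail):

```python
def average_precision(y_true, y_score):
    """AP for one class: average the precision at the rank of each
    true positive, with predictions sorted by decreasing score."""
    order = sorted(range(len(y_score)), key=lambda i: -y_score[i])
    hits, total, ap = 0, sum(y_true), 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i]:
            hits += 1
            ap += hits / rank
    return ap / total if total else 0.0

def macro_map(Y_true, Y_score):
    """Macro-averaged mAP: AP per class (columns), then a simple mean,
    weighting every class equally regardless of its prevalence."""
    num_classes = len(Y_true[0])
    aps = [average_precision([row[c] for row in Y_true],
                             [row[c] for row in Y_score])
           for c in range(num_classes)]
    return sum(aps) / num_classes
```

Because each class's AP contributes equally to the mean, a model cannot score well by performing strongly only on the head classes.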

Online Evaluation

The competition will be conducted through the CodaLab platform.

Tentative Schedule

5/1/2023. Training data release and competition begins

7/14/2023. Test data release and final evaluation begins

7/17/2023. Test phase ends and competition is closed

7/28/2023. Workshop paper submissions are due

8/4/2023. Paper acceptance notification

8/10/2023. Camera-ready papers due

10/6/2023. ICCV CVAMD workshop

Steering committee

Leo Anthony Celi
Zhiyong Lu
George Shih, Weill Cornell Medicine
Ronald M. Summers, NIH Clinical Center
Atlas Wang, UT at Austin
Yifan Peng, Weill Cornell Medicine
Greg Holste, UT at Austin
Alistair Johnson, Hospital for Sick Children
Ajay Jaiswal, UT at Austin
Mingquan Lin, Weill Cornell Medicine
Song Wang, UT at Austin
Yuzhe Yang

  1. Zhou SK, Greenspan H, Davatzikos C, Duncan JS, Van Ginneken B, Madabhushi A, Prince JL, Rueckert D, Summers RM. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. Proceedings of the IEEE. 2021 Feb 26;109(5):820-38.
  2. Holste G, Wang S, Jiang Z, Shen TC, Shih G, Summers RM, Peng Y, Wang Z. Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study. In Data Augmentation, Labelling, and Imperfections: Second MICCAI Workshop, DALI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 22, 2022, Proceedings 2022 Sep 16 (pp. 22-32). Cham: Springer Nature Switzerland.
  3. Zhang Y, Kang B, Hooi B, Yan S, Feng J. Deep long-tailed learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2023 Apr 19.
  4. Zhang R, Haihong E, Yuan L, He J, Zhang H, Zhang S, Wang Y, Song M, Wang L. MBNM: multi-branch network based on memory features for long-tailed medical image recognition. Computer Methods and Programs in Biomedicine. 2021 Nov 1;212:106448.
  5. Ju L, Wang X, Wang L, Liu T, Zhao X, Drummond T, Mahapatra D, Ge Z. Relational subsets knowledge distillation for long-tailed retinal diseases recognition. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part VIII 24 2021 (pp. 3-12). Springer International Publishing.
  6. Yang Z, Pan J, Yang Y, Shi X, Zhou HY, Zhang Z, Bian C. ProCo: Prototype-Aware Contrastive Learning for Long-Tailed Medical Image Classification. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2022: 25th International Conference, Singapore, September 18–22, 2022, Proceedings, Part VIII 2022 Sep 16 (pp. 173-182). Cham: Springer Nature Switzerland.
  7. Chen H, Miao S, Xu D, Hager GD, Harrison AP. Deep hierarchical multi-label classification of chest X-ray images. In International Conference on Medical Imaging with Deep Learning 2019 May 24 (pp. 109-120). PMLR.
  8. Wang G, Wang P, Cong J, Liu K, Wei B. BB-GCN: A Bi-modal Bridged Graph Convolutional Network for Multi-label Chest X-Ray Recognition. arXiv preprint arXiv:2302.11082. 2023 Feb 22.
  9. Chen B, Li J, Lu G, Yu H, Zhang D. Label co-occurrence learning with graph convolutional networks for multi-label chest x-ray image classification. IEEE Journal of Biomedical and Health Informatics. 2020 Jan 16;24(8):2292-302.
  10. Johnson AE, Pollard TJ, Greenbaum NR, Lungren MP, Deng CY, Peng Y, Lu Z, Mark RG, Berkowitz SJ, Horng S. MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs. arXiv preprint arXiv:1901.07042. 2019 Jan 21.
  11. Moukheiber D, Mahindre S, Moukheiber L, Moukheiber M, Wang S, Ma C, Shih G, Peng Y, Gao M. Few-Shot Learning Geometric Ensemble for Multi-label Classification of Chest X-Rays. In Data Augmentation, Labelling, and Imperfections: Second MICCAI Workshop, DALI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 22, 2022, Proceedings 2022 Sep 16 (pp. 112-122). Cham: Springer Nature Switzerland.

Please contact cxr.lt.competition.2023@gmail.com if you have any questions. This webpage template is courtesy of the awesome Georgia.

This competition is sponsored in part by the Artificial Intelligence Journal (AIJ).