CVPR 2021 Tutorial on When Image Analysis Meets Natural Language Processing: A Case Study in Radiology
Slides and recorded videos will be provided on this webpage. Time: TBD
Recently, deep learning techniques have been widely and successfully applied to many computer vision and text-mining tasks. However, when adopted in a specific domain such as radiology, these techniques must be combined with extensive domain knowledge to improve efficiency and accuracy. There is, therefore, a critical need to bring together medical image analysis, clinical text mining, and deep learning to better understand the radiological world, which promises to enhance clinical communication and patient-centric care.
The tutorial aims to bridge the gap between medical imaging and medical informatics research, facilitate collaborations between the two communities, and introduce new machine learning paradigms that exploit the latest innovations across domains. It will cover the basics of medical image analysis and clinical text mining with concrete examples, as well as the underlying deep learning algorithms. The audience will also have the opportunity to see cutting-edge examples of recent deep learning architectures adapted to the medical domain. Specifically, the tutorial deliberations will be on the following themes.
30 mins. Clinical context for medical imaging deep learning models: model fusion techniques to combine medical imaging with structured clinical data. Matthew Lungren.
30 mins. Deep learning for cardiovascular imaging applications. Subhi Al'Aref.
30 mins. Building high-performance chest x-ray classification models and understanding why they are all wrong. Alistair Johnson.
30 mins. Break.
30 mins. Natural language processing on radiology reports to generate large labeled datasets. Imon Banerjee.
30 mins. Interpretable deep learning models for multi-modal and cross-domain medical images. Yingying Zhu.
30 mins. Clinical NLP-powered data extraction on CXR and CT reports. Yifan Peng.
Subhi Al'Aref, M.D., Assistant Professor in the Division of Cardiology in the Department of Internal Medicine at the University of Arkansas for Medical Sciences College of Medicine. His main research interests include the investigation of the diagnostic and prognostic utility of noninvasive cardiovascular imaging modalities.
Imon Banerjee, Ph.D., Assistant Professor in the Department of Biomedical Informatics and Radiology at Emory University School of Medicine. Her core expertise is unstructured data analysis with deep learning and machine representation of image semantics. She has published several manuscripts proposing novel methods for radiology text mining in top-tier journals and conferences.
Alistair Johnson, Ph.D., Scientist at the Hospital for Sick Children in Toronto, Canada. Alistair has extensive experience and expertise in working with clinical data, having published the MIMIC-III, MIMIC-IV, and eICU-CRD datasets, and most recently MIMIC-CXR, a large publicly available dataset of chest x-rays.
Matthew Lungren, M.D., MPH, Co-Director of the Stanford Center for Artificial Intelligence in Medicine and Imaging and Associate Professor Clinician Scientist at Stanford University Medical Center. His NIH- and NSF-funded research is in the field of AI and deep learning in medical imaging, precision medicine, and predictive health outcomes.
Yifan Peng, Ph.D., Assistant Professor at the Department of Population Health Sciences at Weill Cornell Medicine. His main research interests include biomedical and clinical natural language processing and medical image analysis.
Yingying Zhu, Ph.D., Assistant Professor at the Department of Computer Science and Engineering at the University of Texas at Arlington. Her research lies in the intersection of machine learning, computer vision, medical imaging analysis, and bioinformatics. She has published many papers in top computer vision and medical imaging conferences and journals.
Please contact Yifan Peng if you have any questions. The webpage template is courtesy of the awesome Georgia.