AMIA 2023 Annual Symposium Panel on

Large Language Models in Healthcare: Opportunities and Challenges

Location: New Orleans, LA, USA
Time: 03:30 PM - 05:00 PM, November 13, 2023


Overview

The emergence of ChatGPT and other large language models (LLMs) has the potential to revolutionize research and clinical practice. For instance, ChatGPT can seemingly understand context and generate grammatically correct and semantically meaningful answers, compose essays that are often indistinguishable from those written by humans, and author captivating medical research abstracts. However, concerns have also been raised about the impact of these tools on health care, education, research, and beyond. One notable concern is the potential for LLMs to reinforce disparities in healthcare, as these models are typically trained on data that are historically biased against certain disadvantaged groups. Another concern is the potential for LLMs to be applied for malicious purposes. Although it is widely accepted that ChatGPT and LLMs should be used with integrity, transparency, and honesty, how to do so appropriately and, if needed, regulate the development and use of this technology needs further discussion.

We believe that the proposed panel is timely and urgently needed for AMIA stakeholders, including informaticists from a broad array of disciplines, technology companies, and research funders, to discuss and debate the development and use of these models so that their potential benefits are realized while potential risks and negative consequences are minimized. This panel will also likely be one of many conversations at AMIA 2023 about this issue as we learn more about LLMs, their capabilities, and their potential impact on healthcare.

In this panel, four experts will discuss the opportunities and challenges of LLMs in areas ranging from research to education to clinical care. After a brief presentation by each speaker, Dr. Peng will moderate a far-ranging panel discussion on the healthcare applications of ChatGPT.


Tentative Schedule

15 min. Panel 1: ChatGPT/LLMs for clinical text mining (Xia “Ben” Hu)

15 min. Panel 2: ChatGPT/LLMs for evidence-based medicine (Yifan Peng)

15 min. Panel 3: Ethical and bias concerns of ChatGPT/LLMs (Bradley Malin)

15 min. Panel 4: GPT-Generated Synthetic Clinical Notes: Balancing Privacy and Utility (Xiaoqian Jiang)

30 min. Q&A


About the speakers

Yifan Peng, Ph.D., Assistant Professor in the Division of Health Sciences, Department of Population Health Sciences at Weill Cornell Medicine. Dr. Peng's main research interests include biomedical natural language processing (BioNLP) and medical image analysis. To facilitate research on language representations in the biomedical domain, one of his studies presents the Biomedical Language Understanding Evaluation (BLUE) benchmark, a collection of resources for evaluating and analyzing biomedical natural language representation models4. Detailed analysis shows that BLUE can be used to evaluate the capacity of models to understand biomedical text and, moreover, to shed light on future directions for developing biomedical language representations. As the panel moderator, Dr. Peng will describe the current state of LLMs and list their unique opportunities and challenges compared to other language models.

Xiaoqian Jiang, Ph.D., FACMI, Professor in the Department of Health Data Science and Artificial Intelligence at UTHealth. Dr. Jiang serves as the Associate Vice President of Medical AI at the University of Texas Health Science Center at Houston (UTHealth). He holds the Christopher Sarofim Family Professorship and is also the Chair of the Department of Health Data Science and Artificial Intelligence. Dr. Jiang leads the Secure Artificial Intelligence For Healthcare (SAFE) center at the McWilliams School of Biomedical Informatics (MSBMI). His expertise lies in health data privacy and predictive modeling in biomedicine, rooted in his Ph.D. in Computer Science from Carnegie Mellon University.

Xia “Ben” Hu, Ph.D., Associate Professor in Computer Science at Rice University. Dr. Hu has published over 200 papers in major academic venues, including NeurIPS, ICLR, KDD, WWW, IJCAI, and AAAI. An open-source package developed by his group, AutoKeras, has become the most widely used automated deep learning system on GitHub (with over 8,000 stars and 1,000 forks). His work on deep collaborative filtering, anomaly detection, and knowledge graphs has been included in the TensorFlow package, the Apple production system, and the Bing production system, respectively. His papers have received several Best Paper (Candidate) awards from venues such as ICML, WWW, WSDM, ICDM, AMIA, and INFORMS. He is the recipient of the NSF CAREER Award and the ACM SIGKDD Rising Star Award. His work has been cited more than 20,000 times, with an h-index of 58. He is the conference General Co-Chair for WSDM 2020 and ICHI 2023.

Chunhua Weng (Moderator), Ph.D., FACMI, Professor of Biomedical Informatics at Columbia University. The Weng Lab focuses on clinical research informatics and develops novel methods to improve the efficiency and generalizability of clinical trials, to facilitate human phenotyping and genetic disease diagnosis using electronic health record data, and to automate clinical evidence computing. Dr. Weng has also published extensively on data-driven optimization of clinical research participant selection, EHR data quality assessment and analytics, Augmented Intelligence (AI) for clinical research staff, and text knowledge engineering methods using a variety of text sources (e.g., EHR narratives, PubMed abstracts, and clinical trial summaries). Dr. Weng is currently an Associate Editor for the Journal of Biomedical Informatics.

Bradley Malin, Ph.D., FACMI, Accenture Professor of Biomedical Informatics, Biostatistics, and Computer Science at Vanderbilt University. Dr. Malin's research focuses on developing technologies that enable artificial intelligence and machine learning (AI/ML) in the context of organizational, political, and health information architectures. He is the PI of the Ethics and Trustworthy AI Core of the recently established NIH Bridge2AI program. In this panel, Dr. Malin will discuss the ethical and societal challenges brought on by LLMs and review relevant case studies, such as Koko's use of ChatGPT for the provision of mental health support6. One risk in this respect is that a model may start generating nonsensical or inappropriate responses if it is not properly monitored and supervised during training.


Please contact Yifan Peng if you have any questions. The webpage template is courtesy of the awesome Georgia.