Introduction: Quick and accurate diagnoses in the medical field are imperative to patient recovery and can be the difference between a patient’s life and death. Early and accurate detection allows ample time for proper treatment, providing more opportunities to address the issue. While doctors aim to properly diagnose all patients, 11% of medical consults in the US result in a misdiagnosis, leading to unnecessary disease progression and, in some cases, death (O’Mary, 2023).
ChatGPT is a virtual assistant language model that receives language prompts to produce human-like responses (Figure 1). ChatGPT-3.5 is a basic version that is available to the public at no cost; it does not have access to the internet. ChatGPT-4 is an advanced version of the tool available for subscribers at a monthly cost; it has limited access to the internet.
While both versions have a wide range of obvious functionalities such as answering a plethora of questions, assisting with translations, and brainstorming ideas, less obvious are their capabilities to assist within the medical field. This presentation will articulate the merits of classifying ChatGPT as a biomedical technology and medical assistant for physicians, and highlight areas for growth to implement ChatGPT as a medical tool.
Materials and Methods: Through this literature review, I gathered and analyzed articles and studies related to the use of ChatGPT as a tool for medical diagnoses. Databases such as Google Scholar and PubMed were utilized. Relevant articles appearing within the first two pages of search results and published within the last five years were considered.
Results, Conclusions, and Discussions: ChatGPT as a medical aid can (1) provide faster and more accurate diagnoses and (2) suggest niche diagnoses that extend beyond the scope of medical training.
While physicians aim to provide quick and accurate diagnoses, there remains room for improvement. In a 2024 study, the team compared the accuracy of diagnoses between physicians and ChatGPT under different conditions (Ten Berg et al., 2024). When provided symptoms and laboratory data, physicians accurately diagnosed 87% of cases, while ChatGPT-3.5 accurately diagnosed 97%. Thus, physicians could avoid future misdiagnoses by incorporating ChatGPT-3.5 into the diagnostic process.
Human conditions constantly mutate and exceed the scope of medical training. In a 2023 study, the team provided a case for diagnosis by physicians, ChatGPT-3.5, and ChatGPT-4 (Mehnen et al., 2023). Each was tasked with listing the 10 most likely diagnoses for the case. For rare cases, the accurate diagnosis was listed 43% of the time by physicians, 60% by ChatGPT-3.5, and 83% by ChatGPT-4. This demonstrates the superiority of ChatGPT in diagnosing rare conditions.
While ChatGPT provides benefits in diagnosing conditions, ChatGPT should be regarded as a medical assistant, not a physician replacement. ChatGPT-3.5 does not have access to the internet; instead, it recognizes language patterns and generates responses to prompts based on information it was trained on. Further, ChatGPT does not have access to a patient’s medical record, limiting its ability to provide personalized medical advice. Thus, ChatGPT should complement medical care, not substitute for it.
Nonetheless, ChatGPT shows vast promise for assisting physicians directly in the medical field. I recommend the development of a new version of ChatGPT, “Medical-GPT,” to be embedded into electronic medical record (EMR) systems. This add-on can help medical teams confirm diagnoses before recommending care plans.
Conclusion: Due to ChatGPT’s ability to provide fast and accurate diagnoses and to diagnose rare conditions, ChatGPT can act as a medical aid and thus should be classified as a biomedical technology. However, while ChatGPT can complement medical care, it has limitations and cannot substitute for the role of a physician. I recommend prompt development of a “Medical-GPT” for EMRs. By maximizing ChatGPT in this way, we can decrease the alarming rate of misdiagnoses.
References: Mehnen, L., Gruarin, S., Vasileva, M., & Knapp, B. (2023). ChatGPT as a medical doctor? A diagnostic accuracy study on common and rare diseases. medRxiv, 2023-04. https://doi.org/10.1101/2023.04.20.23288859
Ten Berg, H., van Bakel, B., van de Wouw, L., Jie, K. E., Schipper, A., Jansen, H., ... & Kurstjens, S. (2024). ChatGPT and generating a differential diagnosis early in an emergency department presentation. Annals of Emergency Medicine, 83(1), 83-86. https://doi.org/10.1016/j.annemergmed.2023.08.003
Toole, J., Kohansieh, M., Khan, U., Romero, S., Ghali, M., Zeltser, R., & Makaryus, A. N. (2020). Does your patient understand their treatment plan? Factors affecting patient understanding of their medical care treatment plan in the inpatient setting. Journal of Patient Experience, 7(6), 1151-1157. https://doi.org/10.1177/2374373520948400
Turner, E. (2020, March 5). Most Americans rely on their own research to make big decisions, and that often means online searches. Pew Research Center. https://www.pewresearch.org/short-reads/2020/03/05/most-americans-rely-on-their-own-research-to-make-big-decisions-and-that-often-means-online-searches/
WebMD. (2023, July 19). Misdiagnosis seriously harms 795,000 people annually: Study. WebMD. https://www.webmd.com/a-to-z-guides/news/20230719/misdiagnosis-seriously-harms-people-annually-study
Yadav, A. K., Budhathoki, S. S., Paudel, M., Chaudhary, R., Shrivastav, V. K., & Malla, G. B. (2019). Patients understanding of their diagnosis and treatment plans during discharge in emergency ward in a tertiary care centre: a qualitative study. JNMA: Journal of the Nepal Medical Association, 57(219), 357. https://doi.org/10.31729/jnma.4639