Hackensack University Medical Center Evaluates ChatGPT's New Image Processing Features

AI model falls short on accuracy in Orthopedic In-Training Examinations

While artificial intelligence (AI) models such as ChatGPT hold potential for application in medicine, their efficacy, particularly in orthopedic surgery, has yet to be determined. A recent Hackensack University Medical Center research study evaluated ChatGPT's newly released image analysis capabilities.

Published in Cureus, the study “Evaluating ChatGPT's Capabilities on Orthopedic Training Examinations: An Analysis of New Image Processing Features” assessed ChatGPT's performance in answering Orthopedic In-Training Examination (OITE) questions, including those that require image analysis.

Gregg Klein, M.D., helped conduct the study, which included 940 questions from the 2014, 2015, 2021, and 2022 American Academy of Orthopaedic Surgeons (AAOS) OITEs in the final analysis. All questions without images were entered into ChatGPT 3.5 and 4.0 twice. Questions that required images were entered only into ChatGPT 4.0, as only that version can analyze images. The responses were recorded and compared with the AAOS's correct answers to measure the models' accuracy and precision.
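As a rough illustration of that scoring step (this is not the study's actual code), a minimal Python sketch might compare each recorded response against an answer key and report per-model accuracy; every question ID and answer below is hypothetical.

```python
# Illustrative sketch only: score recorded model responses against a
# hypothetical answer key and report accuracy for each model version.

answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}  # correct answers (hypothetical)

responses = {
    "ChatGPT 3.5": {"Q1": "B", "Q2": "C", "Q3": "A"},  # recorded answers (hypothetical)
    "ChatGPT 4.0": {"Q1": "B", "Q2": "D", "Q3": "A"},
}

for model, answers in responses.items():
    # Count questions where the model's recorded answer matches the key.
    correct = sum(answers.get(q) == a for q, a in answer_key.items())
    accuracy = correct / len(answer_key)
    print(f"{model}: {correct}/{len(answer_key)} correct ({accuracy:.0%})")
```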

ChatGPT 4.0 performed significantly better on questions that did not require image analysis than on those that did. While the use of AI in orthopedics holds potential, this evaluation demonstrates how, even with the addition of image processing capabilities, ChatGPT currently falls short in terms of accuracy.

The study calls for future research to harness AI's potential, with a focus on ensuring that AI models complement, rather than attempt to replace, the nuanced skills of orthopedic surgeons.

Learn more about our innovations in orthopedic care.
