About
This study aims to systematically measure the extent and patterns of automation bias among physicians using ChatGPT-4o in clinical decision-making.
Full description
Diagnostic errors represent a significant cause of preventable patient harm in healthcare systems worldwide. Recent advances in Large Language Models (LLMs) have shown promise in enhancing medical decision-making processes.
However, there remains a critical gap in our understanding of how automation bias, the tendency to over-rely on technological suggestions, influences physicians' diagnostic reasoning when these AI tools are incorporated into clinical practice.
Automation bias presents substantial risks in clinical environments, particularly as AI tools become more integrated into healthcare workflows. Although LLMs such as ChatGPT-4o offer potential advantages in reducing errors and improving efficiency, their lack of rigorous medical validation raises concerns that they may amplify cognitive biases by generating incorrect or misleading information.
Multiple contextual factors can exacerbate automation bias in medicine: time pressure in high-volume clinical settings, financial incentives that prioritize efficiency over thoroughness, cognitive fatigue during extended shifts, and diminished vigilance when confronting diagnostically challenging cases.
These factors may interact with psychological mechanisms including diffusion of responsibility, overconfidence in technological solutions, and cognitive offloading, collectively increasing the risk of uncritical acceptance of AI-generated recommendations.
This randomized controlled trial (RCT) aims to systematically measure the extent and patterns of automation bias among physicians using ChatGPT-4o in clinical decision-making. The investigators will assess how access to LLM-generated information influences diagnostic reasoning through a novel methodology that precisely quantifies automation bias. Participants will be randomly assigned to one of two groups: the treatment group will receive LLM-generated recommendations containing deliberately introduced errors in a subset of cases, while the control group will receive LLM-generated recommendations without such errors. Each participant will evaluate six clinical vignettes, presented in random order so that error-containing cases cannot be anticipated. The flawed recommendations provided to the treatment group will incorporate subtle yet clinically significant errors that a trained physician should be able to identify. Automation bias will then be quantified as the differential in diagnostic accuracy scores between the treatment and control groups.
Prior to participation, all physicians will complete a comprehensive training program covering LLM capabilities, prompt engineering techniques, and output evaluation strategies. Responses will be scored by blinded reviewers using a validated assessment rubric specifically designed to detect uncritical acceptance of erroneous information, with greater score disparities between groups indicating stronger automation bias (see the illustrative sketch below). This naturalistic approach is intended to yield insights directly applicable to real clinical workflows, where mounting cognitive demands can progressively degrade diagnostic decision quality.
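As a minimal sketch of the quantification described above, the snippet below computes the automation bias differential as the gap in mean diagnostic accuracy between the two arms. The scores, group sizes, and 0-to-1 accuracy scale are hypothetical placeholders, not the study's actual data or rubric; they only illustrate the arithmetic.

```python
# Illustrative sketch (not the study's analysis code): automation bias is
# quantified as the difference in mean diagnostic accuracy between the
# control group (unaltered LLM output) and the treatment group (LLM output
# with deliberately seeded errors).
from statistics import mean

# Hypothetical per-physician scores: fraction of the six vignettes
# diagnosed correctly, as might emerge from blinded rubric scoring.
control_scores = [0.83, 1.00, 0.83, 0.67, 0.83]    # no seeded errors
treatment_scores = [0.50, 0.67, 0.50, 0.83, 0.33]  # seeded errors present

# A larger positive differential indicates stronger automation bias:
# treatment-arm physicians accepted flawed AI recommendations more often.
bias_differential = mean(control_scores) - mean(treatment_scores)
print(f"Automation bias differential: {bias_differential:.2f}")
```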
Enrollment
50 participants in 2 patient groups
Central trial contact
Ihsan Ayyub Qazi, PhD; Ayesha Ali, PhD
Data sourced from clinicaltrials.gov