ClinicalTrials.Veeva

Physician Reasoning on Management Cases With Large Language Models

Stanford University

Status

Completed

Conditions

Clinical Decision-making

Treatments

Other: GPT-4

Study type

Interventional

Funder types

Other

Identifiers

Details and patient eligibility

About

This study will evaluate the effect of providing access to GPT-4, a large language model, compared to traditional management decision support tools on performance on case-based management reasoning tasks.

Full description

Artificial intelligence (AI) technologies, specifically advanced large language models such as OpenAI's ChatGPT, have the potential to improve medical decision-making. Although GPT-4 was not developed specifically for medical applications, it has demonstrated promise in various healthcare contexts, including medical note-writing, answering patient inquiries, and facilitating medical consultation. However, little is known about how such models augment the clinical reasoning abilities of clinicians.

Clinical reasoning is a complex process involving pattern recognition, knowledge application, and probabilistic reasoning. Integrating AI tools such as GPT-4 into physician workflows could help reduce clinician workload and decrease the likelihood of mismanagement. However, GPT-4 was not developed for clinical reasoning, nor has it been validated for this purpose. Further, it may generate misinformation, including convincing confabulations that can mislead clinicians. If clinicians misuse the tool, it may fail to improve reasoning and could even cause harm. It is therefore important to study how clinicians use large language models to augment clinical reasoning before these models are routinely incorporated into patient care.

In this study, participants will be randomized to answer clinical management cases with or without access to GPT-4. Each case has multiple components, and participants will be asked to explain their reasoning for each component. Answers will be graded by independent reviewers blinded to treatment assignment. A grading rubric was developed for each case by a panel of 4-7 expert discussants: each discussant independently drafted a rubric, and discrepancies were resolved through multiple rounds of discussion.

Enrollment

92 participants

Sex

All

Volunteers

Accepts Healthy Volunteers

Inclusion criteria

  • Participants must be licensed physicians and have completed at least post-graduate year 2 (PGY2) of medical training.
  • Training in internal medicine, family medicine, or emergency medicine.

Exclusion criteria

  • Not currently practicing clinically.

Trial design

Primary purpose

Treatment

Allocation

Randomized

Interventional model

Parallel Assignment

Masking

Single Blind

92 participants in 2 groups

GPT-4
Active Comparator group
Description:
Group will be given access to GPT-4
Treatment:
Other: GPT-4
Usual Resources
No Intervention group
Description:
Group will not be given access to GPT-4 but will be encouraged to use any resources they wish besides large language models (e.g., UpToDate, DynaMed, Google).

Trial contacts and locations


Central trial contact

Robert J Gallo, MD; Jonathan H Chen, MD, PhD

Data sourced from clinicaltrials.gov
