Effect of Perception-based Interventions on Public Acceptance of Using Large Language Models in Medicine

Sponsor

Peking University

Status

Active, not recruiting

Conditions

Perception, Self
Acceptability of Health Care
Large Language Models

Treatments

Other: Perception-based interventions

Study type

Interventional

Funder types

Other

Identifiers

NCT07304908
NNSF72474005

Details and patient eligibility

About

Large language models (LLMs) show promise in medicine, but concerns about their accuracy, coherence, transparency, and ethics remain. Public perceptions of using LLMs in medicine, and whether those perceptions shape the acceptability of health care applications of LLMs, are not yet well understood. This study aims to investigate public perceptions of using LLMs in medicine and whether perception-based interventions affect the acceptability of health care applications of LLMs.

Full description

Owing to rapid advances in artificial intelligence, large language models (LLMs) are increasingly being used in a variety of clinical settings, such as triage, disease diagnosis, treatment planning, and self-monitoring. Despite their potential, the use of LLMs in health care remains restricted because of concerns about their accuracy, coherence, transparency, and ethics. Public perceptions, such as perceived usefulness and perceived risks, play a crucial role in shaping attitudes towards artificial intelligence and can either facilitate or hinder its adoption. Yet, to our knowledge, perception-based interventions have received little attention in health care, and no previous studies have examined whether public perceptions play a role in the acceptability of medical applications of LLMs. Hence, this study aims to investigate public perceptions of using LLMs in medicine and whether perception-based interventions affect the acceptability of health care applications of LLMs.

Enrollment

3,000 participants (estimated)

Sex

All

Ages

18+ years old

Volunteers

Accepts Healthy Volunteers

Inclusion criteria

  • ≥18 years
  • Capable of completing an online survey
  • Agree to sign an informed consent form

Exclusion criteria

  • Unable to answer questions or communicate
  • Not willing to participate in this study

Trial design

Primary purpose

Other

Allocation

Randomized

Interventional model

Parallel Assignment

Masking

Single Blind

3,000 participants in 4 groups

Perceived benefits of large language models in medicine
Experimental group
Description: Participants are asked to read: "In April 2023, Massachusetts General Hospital launched a pilot program utilizing medical LLMs to assist with emergency department triage and initial diagnosis and observed a reduction in patient wait times and an improvement in clinical efficiency."
Treatment: Other: Perception-based interventions

Perceived racial bias in large language models in medicine
Experimental group
Description: Participants are asked to read: "In November 2022, a research team from the University of California, San Francisco found that cutting-edge medical LLMs exhibited racial bias when recommending treatment plans."
Treatment: Other: Perception-based interventions

Perceived ethical conflicts in large language models in medicine
Experimental group
Description: Participants are asked to read: "In February 2023, a major European hospital network inadvertently leaked partially anonymized but still sensitive patient data during the testing of medical LLMs due to a system configuration error. Although no direct patient harm occurred, this increased public concerns regarding data privacy and security and compelled relevant institutions to conduct urgent reviews of their data protection measures."
Treatment: Other: Perception-based interventions

Control
No Intervention group
Description: No intervention
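
As a rough illustration only, a 1:1:1:1 randomization of the 3,000 participants into these four parallel arms could be sketched in Python as follows; the registry entry does not specify the randomization mechanism, so the arm labels, seed, and balanced-shuffle approach are assumptions for illustration:

    import random

    # Hypothetical sketch of 1:1:1:1 allocation across the four arms
    # described above; the registry does not state the actual
    # randomization mechanism (simple, block, or stratified).
    ARMS = [
        "Perceived benefits",
        "Perceived racial bias",
        "Perceived ethical conflicts",
        "Control",
    ]

    def allocate(n_participants, seed=0):
        """Assign participants to arms in equal numbers, in random order."""
        rng = random.Random(seed)
        base = ARMS * (n_participants // len(ARMS))
        base += ARMS[: n_participants % len(ARMS)]
        # Shuffling hides the condition from participants, consistent
        # with the single-blind design noted above.
        rng.shuffle(base)
        return base

    assignments = allocate(3000)
    print({arm: assignments.count(arm) for arm in ARMS})  # 750 per arm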


Data sourced from clinicaltrials.gov
