About
Large language models (LLMs) show promise in medicine, but concerns about their accuracy, coherence, transparency, and ethics remain. To date, public perceptions of using LLMs in medicine, and whether those perceptions shape the acceptability of health care applications of LLMs, are not fully understood. This study aims to investigate public perceptions of using LLMs in medicine and whether perception-targeted interventions affect the acceptability of health care applications of LLMs.
Full description
Owing to rapid advances in artificial intelligence, large language models (LLMs) are increasingly used in a variety of clinical settings, including triage, disease diagnosis, treatment planning, and self-monitoring. Despite their potential, the use of LLMs in health care remains restricted because of concerns about their accuracy, coherence, and transparency, as well as ethical concerns. Public perceptions, such as perceived usefulness and perceived risk, play a crucial role in shaping attitudes toward artificial intelligence and can either facilitate or hinder its adoption. Yet, to our knowledge, there is little awareness of perception-driven interventions in health care, and no previous study has examined whether public perceptions play a role in the acceptability of medical applications of LLMs. Hence, this study aims to investigate public perceptions of using LLMs in medicine and whether perception-targeted interventions affect the acceptability of health care applications of LLMs.
Enrollment
3,000 participants in 4 patient groups
Data sourced from clinicaltrials.gov