Impact and Safety of AI in Decision Making in the ICU: a Simulation Experiment

Imperial College London

Status

Completed

Conditions

Sepsis

Treatments

Other: Hypothetical AI

Study type

Observational

Funder types

Other

Identifiers

NCT05495438
22CX7592

Details and patient eligibility

About

The impact of deploying artificial intelligence (AI) in healthcare settings is unclear, in particular with regard to how it will influence human decision makers. Previous research demonstrated that AI alerts were frequently ignored (Kamal et al., 2020) or could lead to unexpected behaviour that worsened patient outcomes (Wilson et al., 2021). On the other hand, excessive confidence and trust placed in the AI could have several adverse consequences, including a reduced ability to detect harmful AI decisions (leading to patient harm) as well as human deskilling. Some of these aspects relate to automation bias.

In this simulation study, the investigators intend to measure whether medical decisions in areas of high clinical uncertainty are modified by the use of an AI-based clinical decision support tool. Specifically, they will measure how the doses of intravenous fluids (IVF) and vasopressors that doctors administer to adult patients with sepsis (severe infection with organ failure) in the ICU change as a result of disclosing the doses suggested by a hypothetical AI. Sepsis resuscitation is poorly codified, and the resulting high uncertainty leads to high variability in practice. This study will not specifically mention the AI Clinician (Komorowski et al., 2018). Instead, the investigators will describe a hypothetical AI for which there is some evidence of effectiveness on retrospective data in another clinical setting (e.g. a model that was retrospectively validated using data from a different country than the source data used for model training) but no prospective evidence of effectiveness or safety. As such, it is possible for this hypothetical AI to provide unsafe suggestions. The investigators will intentionally introduce unsafe AI suggestions (in random order) to measure the sensitivity of participants at detecting them.

Full description

The investigators will examine which participant characteristics are linked with an increased likelihood of being influenced by the AI, and will conduct a number of pre-specified subgroup analyses, e.g. junior versus senior ICU doctors, and participants with a positive versus a negative attitude towards AI.
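
The detection-sensitivity outcome described above amounts to a true-positive rate over the deliberately unsafe AI suggestions. A minimal illustrative sketch of how it could be computed follows; this is not drawn from the registered protocol, and the field names and toy data are hypothetical:

```python
# Illustrative only: each record is one AI suggestion shown to a participant,
# labelled unsafe (True) or safe (False), with the participant's response.
trials = [
    {"ai_suggestion_unsafe": True,  "flagged_by_doctor": True},
    {"ai_suggestion_unsafe": True,  "flagged_by_doctor": False},
    {"ai_suggestion_unsafe": False, "flagged_by_doctor": False},
    {"ai_suggestion_unsafe": False, "flagged_by_doctor": True},
]

def sensitivity(trials):
    """True-positive rate: proportion of unsafe AI suggestions the doctor flagged."""
    unsafe = [t for t in trials if t["ai_suggestion_unsafe"]]
    if not unsafe:
        return None
    return sum(t["flagged_by_doctor"] for t in unsafe) / len(unsafe)

def false_alarm_rate(trials):
    """Proportion of safe AI suggestions the doctor incorrectly flagged."""
    safe = [t for t in trials if not t["ai_suggestion_unsafe"]]
    if not safe:
        return None
    return sum(t["flagged_by_doctor"] for t in safe) / len(safe)

print(sensitivity(trials), false_alarm_rate(trials))  # 0.5 0.5
```

Reporting the false-alarm rate alongside sensitivity would distinguish genuine detection of unsafe suggestions from blanket distrust of all AI suggestions.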

Enrollment

38 participants

Sex

All

Ages

18+ years old

Volunteers

Accepts Healthy Volunteers

Inclusion criteria

  • Junior (senior house officer) or senior (registrar/fellow/consultant) ICU doctor

Exclusion criteria

  • Participants not meeting the inclusion criteria.

Trial design

38 participants in 1 group

ICU Clinicians
Treatment:
Other: Hypothetical AI


Data sourced from clinicaltrials.gov
