
Exploring the Use of AI-Assisted Video Monitoring to Predict Accidental Events in ICU Patients (AIME-ICU)

Fudan University

Status

Not yet enrolling

Conditions

Prediction of Accidental Events Using AI-Assisted Video Analysis
ICU Patient Safety and Early Warning System
Behavioral Monitoring in ICU

Treatments

Other: AI-Assisted Video Monitoring

Study type

Observational

Funder types

Other

Identifiers

NCT07307521
B2024-512

Details and patient eligibility

About

This study aims to improve the safety and care of patients in the Intensive Care Unit (ICU) by using artificial intelligence (AI) to analyze video monitoring. ICU patients often face serious risks such as delirium, accidental removal of breathing tubes or lines, and sleep problems. These events can lead to medical emergencies, longer ICU stays, higher costs, and worse outcomes.

To address these challenges, we will place a small video camera above each ICU bed. The camera will record patient movements, body activity, and sleep patterns. At the same time, routine medical monitors will record heart rate, blood oxygen levels, and other vital signs. Noise levels in the room will also be measured. All these data help us understand the patient's behavior and condition more accurately.

The video recording does not involve extra treatment or additional procedures. All data are collected passively and safely. Patient privacy is strictly protected: the system will blur faces or replace them with digital avatars, and any information that could identify the patient or the environment will be masked. All videos are stored securely inside the hospital and are processed only after these privacy protections have been applied.

Using these recordings, an AI model will be trained to recognize early warning signs of dangerous situations. For example, the system may detect early movements suggesting that the patient is becoming agitated, confused, or trying to remove medical tubes. It may also identify severe sleep disturbance that can precede delirium. If the AI can recognize these early changes, medical staff can intervene sooner and prevent harm.

About 300 patients from Fudan University Zhongshan Hospital will participate. Participation is voluntary. Patients or families will sign an informed consent form before being enrolled. The study has three stages:

Screening - understanding the study and signing consent.

Data collection - video and medical monitor data are collected during the ICU stay.

Follow-up - telephone or in-person follow-up at 1 month and 6 months after discharge to evaluate recovery, sleep, mental status, and overall safety.

There are no direct medical risks from participating in this study because it only collects behavioral and monitoring data. The cameras do not interfere with treatment. Privacy and data security are the main considerations, and all measures strictly follow national laws and hospital regulations.

Participants may benefit from earlier identification of dangerous situations, which may help prevent accidental tube removal, severe agitation, or other emergencies. Even if no direct benefit occurs, the information collected may help improve future ICU care by enabling safer and more accurate monitoring systems.

Taking part in the study will not affect the patient's medical care. Patients may withdraw at any time without any consequences or loss of benefits.

This study hopes to build a reliable AI tool that can assist nurses and doctors in recognizing early signs of trouble, improving safety, and enhancing the quality of care for ICU patients.

Full description

This study investigates whether artificial intelligence (AI)-assisted video monitoring can identify early behavioral changes that precede accidental or harmful events in Intensive Care Unit (ICU) patients. ICU patients are vulnerable to sudden and potentially dangerous events, such as agitation, delirium, accidental device removal, and significant sleep disruption, many of which develop gradually and are difficult to detect from routine physiological monitoring alone. This project aims to determine whether AI analysis of continuous bedside video recordings, combined with noise-level information and vital-sign data already collected during standard ICU care, can provide clinicians with timely warnings before these events occur.

Rationale

Traditional ICU monitoring systems focus on physiological parameters such as heart rate, blood pressure, and oxygen saturation. While essential, these measurements do not fully represent patient behavior. Many high-risk events are preceded by subtle motor patterns or behavioral cues: for example, repeated reaching toward tubes, rising restlessness, or disturbed sleep cycles. Such cues are often intermittent, brief, or masked by sedation or other treatments, making them difficult for staff to detect in busy clinical environments.

Computer vision and AI technologies offer an opportunity to objectively observe and interpret patient movements and behavioral trends continuously, without adding clinical workload. By integrating video information with physiologic data and environmental noise levels, the AI system may identify patterns that indicate emerging delirium, increased agitation, or imminent attempts to remove medical devices. Early identification may support timely preventive interventions and reduce the rates of adverse events.

Study Overview

The study will prospectively enroll ICU patients who consent to video monitoring and data use. A small camera will be installed above each bed to continuously capture patient movement and posture. The camera view is restricted to the patient zone, excluding unnecessary areas such as the nursing station. All recordings follow strict privacy-protection procedures, including automated face masking, background blurring, and removal of identifying information from objects in the frame.

Environmental noise is recorded through a decibel meter, and routine vital-sign data are synchronized with the video timeline. These combined multimodal data will serve as input for AI model development.
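To illustrate what this synchronization might look like in practice, the sketch below aligns per-second vital-sign and noise readings onto a video frame timeline with pandas. The column names, sampling rates, and values are illustrative assumptions; the protocol does not specify data formats.

```python
import pandas as pd

# Hypothetical timelines: the registration does not specify formats or
# sampling rates, so these columns and frequencies are assumptions.
frames = pd.DataFrame({
    "timestamp": pd.date_range("2025-01-01 00:00:00", periods=10, freq="40ms")
})  # ~25 fps video frame timestamps

vitals = pd.DataFrame({
    "timestamp": pd.date_range("2025-01-01 00:00:00", periods=3, freq="1s"),
    "heart_rate": [82, 84, 90],
    "spo2": [97, 96, 96],
})

noise = pd.DataFrame({
    "timestamp": pd.date_range("2025-01-01 00:00:00", periods=3, freq="1s"),
    "decibels": [52.1, 55.4, 61.0],
})

# merge_asof attaches the most recent vital-sign and noise reading to each
# video frame, yielding one multimodal time-series table for model input.
aligned = pd.merge_asof(frames, vitals, on="timestamp")
aligned = pd.merge_asof(aligned, noise, on="timestamp")
print(aligned.head())
```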

The study is divided into three components:

Data collection phase - real-world continuous recording of behavioral and physiological data.

Data processing and annotation - cleaning, de-identification, and labeling of key behavioral events by trained researchers.

Model development and evaluation - training AI models to identify behavioral patterns associated with clinically meaningful events, and evaluating their predictive performance.

Data Integration and Processing

All raw videos remain stored securely inside the hospital's protected data environment and are not transferred outside. A standardized de-identification pipeline is applied before any analytical use. This includes:

  • Masking or replacing patient faces.
  • Removing identifying elements such as bed numbers and equipment labels.
  • Blurring all background areas outside the patient zone.
  • Excluding frames containing staff faces or unrelated activities.

After de-identification, videos are aligned with vital-sign and noise-level timelines to create multimodal time-series datasets. Human annotators, trained with a unified labeling guideline, identify episodes of agitation, possible delirium-related behavior, attempts at device removal, and sleep-wake transitions. These labels serve as ground truth for AI training.
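As a concrete illustration of the face-masking step, the sketch below blurs detected faces in a single frame using OpenCV's stock Haar-cascade detector. The registration does not name the actual de-identification software, so the detector choice and blur strength here are assumptions.

```python
import cv2

# Stock frontal-face Haar cascade shipped with OpenCV (an illustrative
# choice; the study's real de-identification pipeline is not specified).
FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def mask_faces(frame):
    """Blur every detected face region in one BGR video frame, in place."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in FACE_DETECTOR.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5
    ):
        roi = frame[y:y + h, x:x + w]
        # Replace the face region with a heavily blurred version.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```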

AI Model Development

Multiple AI architectures will be explored, particularly those suited for temporal video analysis. Potential approaches include convolutional neural networks (CNNs), 3D CNNs, long short-term memory networks (LSTMs), or transformer-based models capable of learning long-range dependencies in behavior sequences. Additional feature extraction methods will be evaluated to integrate physiologic and environmental signals.
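As one example of the candidate temporal architectures, the PyTorch sketch below defines a small 3D CNN that maps a short video clip to an event-risk score. The layer sizes, clip length, and output head are illustrative assumptions, not the study's chosen model.

```python
import torch
import torch.nn as nn

class TinyVideoRiskModel(nn.Module):
    """Illustrative 3D CNN: maps a clip of shape (batch, channels, frames,
    height, width) to the probability that it precedes an accidental event.
    All layer sizes are assumptions; the study's architecture is unspecified."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse time and space to one vector
        )
        self.head = nn.Linear(32, 1)

    def forward(self, clip):
        x = self.features(clip).flatten(1)
        return torch.sigmoid(self.head(x))  # risk score in [0, 1]

# Example: one 16-frame, 112x112 RGB clip.
model = TinyVideoRiskModel()
risk = model(torch.randn(1, 3, 16, 112, 112))
print(risk.item())
```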

To avoid model overfitting and ensure generalizability, the dataset will be split into training, validation, and independent test sets. Cross-validation will be used during parameter tuning. Model output will include risk scores or prediction probabilities indicating the likelihood of an impending accidental event.
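One common way to realize such a split for ICU video data is to partition by patient rather than by clip, so no individual contributes data to both training and test sets. The scikit-learn sketch below assumes that patient-level grouping; the feature shapes and labels are fabricated for illustration.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical data: one feature row per video clip, grouped by patient ID
# so clips from one patient never straddle the train/test boundary.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))               # clip-level features
y = rng.integers(0, 2, size=1000)            # 1 = clip preceded an event
patients = rng.integers(0, 300, size=1000)   # ~300 patients, as planned

# Hold out ~20% of patients as the independent test set.
outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(outer.split(X, y, groups=patients))

# Within the training patients, carve out a validation set for tuning.
# (tr_idx and val_idx index into the training subset.)
inner = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
tr_idx, val_idx = next(inner.split(X[train_idx], y[train_idx],
                                   groups=patients[train_idx]))
print(len(tr_idx), len(val_idx), len(test_idx))
```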

Performance will be evaluated using accuracy, sensitivity, specificity, F1 score, and lead time (the time interval between system alert and actual event). The lead-time metric is particularly important because practical utility in clinical care depends on whether alerts occur early enough for staff to intervene.
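The sketch below computes these metrics on toy predictions, with lead time taken as the interval between system alert and actual event, following the definition above. All labels and timestamps are fabricated for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

# Toy labels: 1 = an accidental event occurred in the prediction window.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
f1 = f1_score(y_true, y_pred)

# Lead time: minutes between the system alert and the actual event, for
# correctly predicted events (these times are fabricated examples).
alert_minutes = np.array([10.0, 42.0, 95.0])
event_minutes = np.array([24.0, 55.0, 120.0])
lead_times = event_minutes - alert_minutes
print(accuracy, sensitivity, specificity, f1, lead_times.mean())
```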

Outcome Interpretation

This study does not impose any medical intervention on participants. All adverse events are part of routine clinical care; the study merely investigates whether AI can anticipate them. Through continuous monitoring and analytical modeling, the research aims to quantify how much predictive information is contained in patient behavior, movement patterns, and environmental context captured by video.

The findings will help determine the feasibility and clinical value of AI-assisted behavioral monitoring in real-world ICUs. If successful, such systems may provide early warnings of delirium, accidental device removal, or other behavior-linked risks. This may reduce emergency interventions, shorten ICU stays, and improve overall patient safety.

Follow-Up

To understand the longer-term relevance of the AI predictions, patients will undergo follow-up assessments at 1 month and 6 months after discharge. Follow-up evaluates general health recovery, sleep status, cognition, and whether any delayed complications occurred. Patient and family feedback regarding video monitoring, including comfort level, perceived benefit, and privacy concerns, will also be collected to guide system refinement.

Ethical and Privacy Considerations

The study emphasizes privacy protection and informed consent. Cameras are positioned to minimize exposure of unnecessary areas. De-identification is applied before analysis, and all data are managed within controlled hospital systems. Participants may withdraw at any point without affecting their care. The study involves no experimental treatment or additional medical procedures beyond standard ICU monitoring.

Scientific and Clinical Significance

This research addresses a critical gap in ICU safety: behavior-based early warning. By combining AI, video analysis, physiology, and environmental data, the study explores an approach that could complement routine monitoring. Beyond predicting specific events, the project may contribute to a broader understanding of ICU patient behavioral trajectories and the role of environmental factors such as noise.

The long-term vision is to create a clinically deployable system that supports early intervention, reduces preventable harm, and enhances the efficiency of ICU care.

Enrollment

300 estimated patients

Sex

All

Volunteers

No Healthy Volunteers

Inclusion criteria

  • Adult or pediatric patients admitted to the Intensive Care Unit (ICU).
  • Patient or legally authorized representative is capable of understanding the study information and providing informed consent.
  • Patient is expected to remain in the ICU long enough to allow video and physiologic data collection.
  • Agreement to participate and allow video monitoring during the ICU stay.

Exclusion criteria

  • Refusal to participate from the patient or legally authorized representative.
  • Patients for whom continuous video monitoring is medically inappropriate or not feasible (e.g., isolation conditions preventing camera installation).
  • Patients whose condition or legal status requires special restrictions on video recording (e.g., certain forensic or custodial cases).
  • Any situation judged by the clinical team to place the patient at increased privacy or safety risk by participation.
  • Withdrawal of consent at any point during the study.

Trial design

300 participants in 1 patient group

Single Cohort
Description:
This cohort includes ICU patients who undergo continuous bedside video monitoring combined with routine vital-sign collection. Video, physiologic, and noise-level data are used for AI-based analysis to identify patterns associated with delirium, agitation, and accidental device removal. No clinical treatment or care procedures are altered. The study is observational and involves data collection only.
Treatment:
Other: AI-Assisted Video Monitoring

Trial contacts and locations


Central trial contact

Zhunyong Gu

Data sourced from clinicaltrials.gov
