About
The assessment of AI-based prediction models for the early detection of AKI in critically ill patients. Specifically, the aim is to evaluate the model's ability to predict the onset of AKI before it clinically manifests, allowing for early interventions.
Full description
Acute kidney injury (AKI) is among the most common, severe, and life-threatening complications in hospitalized patients and is associated with high morbidity and mortality rates. AKI has been shown to affect approximately 30-60% of critically ill patients, especially those in the intensive care unit (ICU). Despite recent advances in clinical care and dialysis technology, AKI in ICU patients carries a mortality rate of up to 50%, 1.5 to 2 times that of ICU patients without AKI. However, if detected and managed promptly, interventions guided by established recommendations, such as those provided by KDIGO, may mitigate the risk of further deterioration in AKI patients. Therefore, identifying individuals at high risk of AKI is vital for managing critically ill patients.
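The KDIGO recommendations referenced above include serum-creatinine (Scr) criteria for staging AKI. A minimal sketch of the Scr arm of that staging is shown below; this is an illustrative simplification (the full KDIGO definition also covers the urine-output criterion and initiation of renal replacement therapy for stage 3), and the function name and signature are our own, not from the study.

```python
def kdigo_stage_scr(baseline_scr, current_scr, abs_rise_48h=None):
    """Stage AKI by the serum-creatinine arm of the KDIGO criteria.

    baseline_scr, current_scr : serum creatinine in mg/dL
    abs_rise_48h : absolute Scr rise within 48 h (mg/dL), if known
    Returns 0 (no AKI by Scr criteria) or stage 1-3.
    Assumes an acute change is being evaluated; the urine-output
    criterion and RRT initiation (stage 3) are omitted for brevity.
    """
    ratio = current_scr / baseline_scr
    # Stage 3: >= 3.0x baseline, or Scr rising to >= 4.0 mg/dL
    if ratio >= 3.0 or current_scr >= 4.0:
        return 3
    # Stage 2: 2.0-2.9x baseline
    if ratio >= 2.0:
        return 2
    # Stage 1: 1.5-1.9x baseline, or >= 0.3 mg/dL rise within 48 h
    if ratio >= 1.5 or (abs_rise_48h is not None and abs_rise_48h >= 0.3):
        return 1
    return 0
```

A prediction model aims to flag patients before these thresholds are crossed, since by the time Scr rises, injury has already occurred.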
Artificial intelligence (AI) and machine learning (ML) are emerging technologies that can use large amounts of health-related data to help physicians make better clinical decisions and improve individual health outcomes. While serum creatinine (Scr) and urine output serve as the diagnostic criteria for AKI, their detection may be delayed. Early identification of patients at risk of developing AKI is therefore crucial to create a window for preventive interventions and mitigate the risk of further deterioration. Given the potential benefits of early detection, several previous studies have developed ML-based models to predict AKI in critically ill patients. Demystifying ML is also critical, as it makes it easier for physicians to understand the reasoning behind a model's predictions. To explain why ML models make the decisions they do, a new field called Explainable AI (XAI) has emerged. Two of the most popular explanation methods are Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). These interpretable approaches have been effectively applied to explain ML models for preventing hypoxemia during surgery, predicting mortality in sepsis and AKI, predicting the occurrence of AKI following cardiac surgery, and predicting antibiotic resistance.
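The core idea behind SHAP is the Shapley value: a feature's contribution to one prediction, averaged over all orders in which features could be "revealed" to the model. A minimal, exact sketch is given below using only the standard library; "missing" features are filled from a baseline vector (a common approximation, as in interventional SHAP). This is an illustration of the concept, not the study's implementation, and it is exponential in the number of features, so it is only viable for toy models.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for a single prediction.

    predict  : callable taking a feature vector, returning a score
    x        : the instance being explained
    baseline : reference values substituted for 'absent' features
    Returns one attribution per feature; attributions sum to
    predict(x) - predict(baseline) (the efficiency property).
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n + 1):
            for S in combinations(others, k):
                # Weight of coalition S in the Shapley average
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi
```

For a linear model the attribution of feature i reduces to its coefficient times (x_i - baseline_i), which is a useful sanity check. Practical SHAP implementations (e.g. TreeExplainer) compute these values efficiently for real models rather than by brute-force enumeration.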
To the best of our knowledge, the reliability and robustness of explanatory techniques for detecting AKI in critically ill patients have rarely been studied. The present study was therefore conducted to construct an ML approach for the early prediction of AKI in ICU patients and to apply XAI methods to make the resulting models more transparent and interpretable.
Inclusion criteria
Exclusion criteria
• patients under 18 years old
Central trial contact
Radwa Awad Abd El Hafez, Lecturer; Kareem Sherif Mosabah, Assistant Lecturer
Data sourced from clinicaltrials.gov