About
In this prospective study we extracted acoustic parameters with PRAAT from patients' attempts to phonate during clinical evaluation, recorded using a digital smart device. From these parameters we attempted (1) to define which of the PRAAT acoustic features best discriminate patients with dysphagia, and (2) to develop algorithms using machine learning (ML) techniques that best classify those (i) with dysphagia and (ii) at high risk of respiratory complications due to poor cough force.
Full description
This was a prospective study; patients with dysphagic symptoms who visited the department of rehabilitation medicine of a single university-affiliated tertiary hospital from September 2019 to March 2021 were included. Voice recording was performed at enrollment with blinded assessment, when the participants first visited the rehabilitation department with chief complaints of dysphagia. Cough sounds were recorded with an iPad (Apple, Cupertino, CA, USA) through its built-in microphone.
From the acoustic files we extracted fourteen voice parameters, including the mean and standard deviation of the fundamental frequency (f0), the harmonics-to-noise ratio (HNR), jitter (frequency instability), and shimmer (amplitude instability of the sound signal).
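As a rough illustration of how such parameters can be obtained, the sketch below uses praat-parselmouth, a Python interface to Praat; this is an assumption about tooling, not the study's reported pipeline, and the file name, pitch range, and jitter/shimmer settings are Praat defaults chosen for illustration only.

```python
# Sketch of Praat-style acoustic feature extraction, assuming the
# praat-parselmouth package and a hypothetical recording "cough_sample.wav".
# Pitch floor/ceiling and jitter/shimmer settings are Praat defaults,
# not values reported by the study.
import parselmouth
from parselmouth.praat import call

def extract_voice_features(wav_path, f0_min=75, f0_max=500):
    snd = parselmouth.Sound(wav_path)

    # Fundamental frequency (f0): mean and standard deviation
    pitch = snd.to_pitch(pitch_floor=f0_min, pitch_ceiling=f0_max)
    f0_mean = call(pitch, "Get mean", 0, 0, "Hertz")
    f0_sd = call(pitch, "Get standard deviation", 0, 0, "Hertz")

    # Harmonics-to-noise ratio (HNR), mean over the whole recording
    harmonicity = snd.to_harmonicity_cc()
    hnr = call(harmonicity, "Get mean", 0, 0)

    # Jitter (frequency instability) and shimmer (amplitude instability)
    point_process = call(snd, "To PointProcess (periodic, cc)", f0_min, f0_max)
    jitter_local = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer_local = call([snd, point_process], "Get shimmer (local)",
                         0, 0, 0.0001, 0.02, 1.3, 1.6)

    return {
        "f0_mean": f0_mean,
        "f0_sd": f0_sd,
        "hnr": hnr,
        "jitter_local": jitter_local,
        "shimmer_local": shimmer_local,
    }

if __name__ == "__main__":
    print(extract_voice_features("cough_sample.wav"))
```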
Machine learning algorithms and deep neural network analyses will then be applied to these parameters.
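A minimal sketch of what such a classification step could look like is given below, assuming scikit-learn, a feature matrix X (one row of the fourteen acoustic parameters per participant), and binary labels y (e.g., dysphagia vs. no dysphagia); the random forest, scaling, and 5-fold cross-validation are illustrative choices, not the study's reported models, and the data here are random placeholders.

```python
# Illustrative classification sketch with scikit-learn; the model choice,
# cross-validation scheme, and placeholder data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 14))      # placeholder: 14 acoustic features per participant
y = rng.integers(0, 2, size=100)    # placeholder: dysphagia labels

model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.3f} ± {scores.std():.3f}")
```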
Enrollment
449 participants in 2 patient groups
Data sourced from clinicaltrials.gov