Dyspnea severity assessment based on vocalization behavior with deep learning on the telephone
Article
Access note
Open access
Publication date
2023
Author
Alvarado Gutiérrez, Eduardo Alexis
Abstract
In this paper, a system to assess dyspnea on the mMRC scale, over the phone, via deep learning, is proposed. The method is based on modeling the spontaneous behavior of subjects while they pronounce controlled phonetizations. These vocalizations were designed, or chosen, to cope with the stationary noise suppression of cellular handsets, to provoke different rates of exhaled air, and to stimulate different levels of fluency. Time-independent and time-dependent engineered features were proposed and selected, and a k-fold scheme with double validation was adopted to select the models with the greatest potential for generalization. Moreover, score fusion methods were investigated to exploit the complementarity of the controlled phonetizations and of the features that were engineered and selected. The results reported here were obtained from 104 participants, of whom 34 were healthy individuals and 70 were patients with respiratory conditions. The subjects' vocalizations were recorded over a telephone call (i.e., via an IVR server). The system provided an accuracy of 59% (i.e., estimating the correct mMRC grade), a root mean square error equal to 0.98, a false positive rate of 6%, a false negative rate of 11%, and an area under the ROC curve equal to 0.97. Finally, a prototype was developed and implemented, with an ASR-based automatic segmentation scheme, to estimate dyspnea online.
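The score fusion step mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' actual method: the function name, the five-class mMRC score vectors, and the weighted-averaging rule are all assumptions chosen for illustration; the paper investigates fusion methods without specifying them here.

```python
# Hypothetical late-fusion sketch: combine per-vocalization mMRC score
# vectors by weighted averaging, then predict the grade with the highest
# fused score. Names and the fusion rule are illustrative assumptions.

def fuse_scores(task_scores, weights=None):
    """Fuse per-task mMRC class scores by weighted averaging.

    task_scores: list of score vectors, one per controlled vocalization,
                 each a list of 5 values (mMRC grades 0-4).
    weights: optional per-task weights (e.g., tuned on validation folds);
             defaults to a uniform average.
    Returns (fused score vector, predicted mMRC grade).
    """
    n_tasks = len(task_scores)
    n_classes = len(task_scores[0])
    if weights is None:
        weights = [1.0 / n_tasks] * n_tasks
    fused = [0.0] * n_classes
    for w, scores in zip(weights, task_scores):
        for c in range(n_classes):
            fused[c] += w * scores[c]
    predicted = max(range(n_classes), key=lambda c: fused[c])
    return fused, predicted
```

For example, fusing two task score vectors that both favor grade 1 yields a fused vector whose argmax is grade 1; per-task weights would let better-performing vocalizations contribute more to the decision.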
Sponsor
ANID/COVID 0365
ANID/FONDECYT 1211946
Indexation
Article indexed in WoS
Quote Item
Sensors 2023, 23, 2441