A robust front-end for speech recognition based on computational auditory scene analysis and speaker model
From MaRDI portal
Publication: Q3572227
zbMATH Open: 1212.68197 · MaRDI QID: Q3572227
Authors: Yong Guan, Peng Li, Wenju Liu, Bo Xu
Publication date: 8 July 2010
Recommendations
- Binaural classification-based speech segregation and robust speaker recognition system
- Combining speech enhancement and auditory feature extraction for robust speech recognition
- Noise-robust speech recognition through auditory feature detection and spike sequence decoding
- Principles and typical computational limitations of sparse speaker separation based on deterministic speech features
- A computational auditory scene analysis-enhanced beamforming approach for sound source separation
Cited In (4)
- Image processing techniques for segments grouping in monaural speech separation
- Binaural classification-based speech segregation and robust speaker recognition system
- Front-End, Back-End, and Hybrid Techniques for Noise-Robust Speech Recognition
- Robust speech recognition method based on discriminative environment feature extraction