A computational auditory scene analysis-enhanced beamforming approach for sound source separation
DOI: 10.1155/2009/403681 · zbMATH Open: 1192.94041 · DBLP: journals/ejasp/DrakeRZK09 · OpenAlex: W2088740872 · Wikidata: Q59249115 · Scholia: Q59249115 · MaRDI QID: Q983758 · FDO: Q983758
Authors: L. A. Drake, J. C. Rutledge, J. Zhang, Aggelos K. Katsaggelos
Publication date: 26 July 2010
Published in: EURASIP Journal on Advances in Signal Processing
Full work available at URL: https://doi.org/10.1155/2009/403681
Recommendations
- Adaptive separation of acoustic sources for anechoic conditions: A constrained frequency domain approach
- Acoustic source localization and deconvolution-based separation
- Signal Separation by Integrating Adaptive Beamforming with Blind Deconvolution
- The auditory organization of speech and other sources in listeners and computational models
- Blind Audio Source Separation Using Sparsity Based Criterion for Convolutive Mixture Case
MSC classifications:
- Pattern recognition, speech recognition (68T10)
- Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
Cited In (4)
- A novel signal-processing strategy for hearing-aid design: Neurocompensation
- Microphone arrays for hearing aids: An overview
- A robust front-end for speech recognition based on computational auditory scene analysis and speaker model
- A waveform generation model-based approach for segregation of monaural mixed sound