A SPEECH SYNTHESIZER USING FACIAL EMG SIGNALS
From MaRDI portal
Publication: 3631246
DOI: 10.1142/S1469026808002119
zbMATH Open: 1178.68511
MaRDI QID: Q3631246
Makoto Ohga, Toshio Tsuji, Jun Arita, Nan Bu
Publication date: 5 June 2009
Published in: International Journal of Computational Intelligence and Applications
Recommendations
- Pattern classification of time-series EMG signals using neural networks
- Speaker independent phoneme classification in continuous speech
- Joint application of feature extraction based on EMD-AR strategy and multi-class classifier based on LS-SVM in EMG motion classification
- Feature set extraction algorithm based on soft computing techniques and its application to EMG pattern classification
- EMG pattern classification using SOFMs for hand signal recognition
Classifications:
- Learning and adaptive systems in artificial intelligence (68T05)
- Pattern recognition, speech recognition (68T10)
- Natural language processing (68T50)
Cites Work