Promising Accurate Prefix Boosting for sequence-to-sequence ASR
November 07, 2018 · Declared Dead · IEEE International Conference on Acoustics, Speech, and Signal Processing
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Murali Karthick Baskar, Lukáš Burget, Shinji Watanabe, Martin Karafiát, Takaaki Hori, Jan Honza Černocký
arXiv ID
1811.02770
Category
eess.AS: Audio & Speech
Cross-listed
cs.CL, cs.LG, cs.SD
Citations
16
Venue
IEEE International Conference on Acoustics, Speech, and Signal Processing
Last Checked
2 months ago
Abstract
In this paper, we present promising accurate prefix boosting (PAPB), a discriminative training technique for attention-based sequence-to-sequence (seq2seq) ASR. PAPB is devised to unify the training and testing schemes in an effective manner. The training procedure involves maximizing the score of each partial correct sequence obtained during beam search relative to the other hypotheses. The training objective also includes minimization of token (character) error rate. PAPB shows its efficacy by achieving 10.8% and 3.8% WER with and without RNNLM respectively on the Wall Street Journal dataset.
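The abstract's core idea, boosting the score of beam hypotheses that are correct prefixes of the reference, can be illustrated with a toy sketch. This is a minimal illustration, not the paper's implementation: the function name, the softmax-over-beam normalization, and the plain-float beam representation are all assumptions made here for clarity.

```python
import math

def prefix_boost_loss(beam, reference):
    """Toy sketch of the PAPB idea: among the hypotheses kept by beam
    search, push up the normalized score of any hypothesis that is a
    correct prefix of the reference transcription.

    `beam` is a list of (token_sequence, log_score) pairs; the exact
    loss form here (softmax cross-entropy over the beam) is an
    illustrative assumption, not taken from the paper.
    """
    # Log-partition over the beam scores (softmax normalizer).
    log_z = math.log(sum(math.exp(s) for _, s in beam))
    loss = 0.0
    found = False
    for tokens, score in beam:
        # A hypothesis is "promising and accurate" if it matches the
        # start of the reference token sequence.
        if list(tokens) == list(reference[: len(tokens)]):
            loss += -(score - log_z)  # maximize its normalized score
            found = True
    # If no correct prefix survived pruning, there is nothing to boost.
    return loss if found else 0.0

# Tiny usage example with character tokens and made-up log scores.
beam = [(["c", "a", "t"], -0.5), (["c", "a", "r"], -1.2), (["b", "a", "t"], -2.0)]
print(round(prefix_boost_loss(beam, ["c", "a", "t", "s"]), 3))  # prints 0.542
```

In this sketch only the correct-prefix hypothesis contributes to the loss; the paper additionally folds in a token (character) error-rate term, which is omitted here for brevity.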
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
👻 Similar Papers
In the same crypt — Audio & Speech (all 👻 Ghosted):
SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition
DiffWave: A Versatile Diffusion Model for Audio Synthesis
FastSpeech 2: Fast and High-Quality End-to-End Text to Speech
MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis
Generalized End-to-End Loss for Speaker Verification
Died the same way — 👻 Ghosted:
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System