International Conference on Spoken Language Processing; 2004-10-04/08; Jeju (KR)

Pinched Lattice Minimum Bayes Risk Discriminative Training for Large Vocabulary Continuous Speech Recognition



Abstract

Iterative estimation procedures that minimize empirical risk under general loss functions, such as the Levenshtein distance, have been derived as extensions of the Extended Baum-Welch algorithm. While reducing expected loss on training data is a desirable training criterion, these algorithms can be difficult to apply: unlike MMI estimation, they require an explicit list of the hypotheses to be considered, and in complex problems such lists tend to be prohibitively large. To overcome this difficulty, modeling techniques originally developed to improve search efficiency in Minimum Bayes Risk decoding can be used to transform these estimation algorithms so that exact-update risk minimization procedures can be applied to complex recognition problems. Experimental results on two large vocabulary speech recognition tasks show improvements over conventionally trained MMIE models.
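The expected-loss quantity at the heart of this criterion can be sketched for the simple explicit-list case the abstract contrasts with: a posterior-weighted average of Levenshtein distances over an enumerated hypothesis list. This is an illustrative sketch only, not the paper's pinched-lattice implementation; the function names and the use of raw log scores as posteriors (via softmax normalization) are assumptions for the example.

```python
import math

def levenshtein(ref, hyp):
    """Edit distance between two word sequences (the loss function)."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[m][n]

def expected_loss(reference, hypotheses, log_scores):
    """Posterior-weighted Levenshtein loss over an explicit hypothesis list.

    Posteriors are obtained by softmax-normalizing the combined log scores;
    this list-based form is the one that becomes intractable when the
    hypothesis space is large, motivating lattice-based methods.
    """
    mx = max(log_scores)
    weights = [math.exp(s - mx) for s in log_scores]  # stable exponentiation
    z = sum(weights)
    return sum((w / z) * levenshtein(reference, h)
               for w, h in zip(weights, hypotheses))

ref = "the cat sat".split()
hyps = ["the cat sat".split(), "the cat sit".split()]
print(expected_loss(ref, hyps, [0.0, 0.0]))  # two equally likely hypotheses
```

Training then adjusts model parameters so that the scores, and hence the posteriors, shift mass away from high-loss hypotheses, reducing this expectation on the training data.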
