Computer Speech and Language

Joint training of non-negative Tucker decomposition and discrete density hidden Markov models

Abstract

Non-negative Tucker decomposition (NTD) is applied to the unsupervised training of discrete density HMMs for discovering sequential patterns in data, for segmenting sequential data into those patterns, and for recognizing the discovered patterns in unseen data. Structure constraints are imposed on the NTD so that it shares its parameters with the HMM. Two training schemes are proposed: one uses the NTD as a regularizer for Baum-Welch (BW) training of the HMM; the other alternates between the two, initializing the NTD with the BW output and vice versa. On the task of unsupervised spoken pattern discovery from the TIDIGITS database, both training schemes are observed to improve over BW training in terms of pattern purity, segmentation-boundary accuracy, and speech recognition accuracy. Furthermore, we observe experimentally that the alternating training of NTD and BW outperforms NTD-regularized BW, plain BW training, and BW training with simulated annealing.
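The second, alternating scheme can be sketched in code. The following is a minimal illustrative sketch, not the paper's implementation: it assumes, purely for illustration, that the shared parameters are the emission matrices of several pattern HMMs stacked into a three-way (pattern × state × symbol) tensor, and it pairs hmmlearn's CategoricalHMM for the BW step with tensorly's non_negative_tucker for the NTD step. The paper's actual structure constraints tie the NTD to the HMM parameters more tightly than this.

```python
# Illustrative sketch of the alternating NTD <-> Baum-Welch scheme.
# Assumptions (not from the paper): the shared parameters are the emission
# matrices of n_patterns discrete HMMs, stacked into a 3-way tensor.
# Requires hmmlearn >= 0.3 (CategoricalHMM) and tensorly.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker
from hmmlearn.hmm import CategoricalHMM

rng = np.random.default_rng(0)
n_patterns, n_states, n_symbols = 3, 4, 10

def toy_sequences(n_seq=20, length=50):
    # Random discrete sequences standing in for vector-quantized speech frames.
    X = rng.integers(n_symbols, size=(n_seq * length, 1))
    return X, [length] * n_seq

def bw_step(emissions, X, lengths):
    # Baum-Welch warm-started from the NTD-smoothed emission matrix.
    hmm = CategoricalHMM(n_components=n_states, n_features=n_symbols,
                         n_iter=10, init_params="st", params="ste",
                         random_state=0)
    hmm.emissionprob_ = emissions
    hmm.fit(X, lengths)
    return hmm.emissionprob_

def ntd_step(emission_list, rank=(2, 3, 5)):
    # Non-negative Tucker factorization of the stacked emission tensor,
    # then reconstruction and row renormalization back to probabilities.
    tensor = tl.tensor(np.stack(emission_list))  # (pattern, state, symbol)
    core, factors = non_negative_tucker(tensor, rank=list(rank), n_iter_max=200)
    approx = np.clip(tl.to_numpy(tl.tucker_to_tensor((core, factors))), 1e-8, None)
    return [e / e.sum(axis=1, keepdims=True) for e in approx]

data = [toy_sequences() for _ in range(n_patterns)]
emission_list = [rng.dirichlet(np.ones(n_symbols), size=n_states)
                 for _ in range(n_patterns)]

for _ in range(5):  # alternate: BW output initializes NTD and vice versa
    emission_list = [bw_step(e, X, ln) for e, (X, ln) in zip(emission_list, data)]
    emission_list = ntd_step(emission_list)
```

Renormalizing the rows of the reconstructed tensor keeps the NTD output a valid emission distribution, so it can directly re-initialize the next BW round; the first scheme (NTD as a regularizer) would instead add an NTD-fit penalty inside the BW re-estimation step.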
