Computer Speech and Language

Robust discriminative training against data insufficiency in PLDA-based speaker verification



Abstract

Probabilistic linear discriminant analysis (PLDA) with i-vectors as features has become one of the state-of-the-art methods in speaker verification. Discriminative training (DT) has proven effective for improving PLDA's performance but suffers more from data insufficiency than generative training (GT). In this paper, we achieve robustness against data insufficiency in DT in two ways. First, we compensate for statistical dependencies in the training data by adjusting the weights of the training trials so that the training loss is an accurate estimate of the expected loss. Second, we propose three constrained DT schemes, among which the best was a discriminatively trained transformation of the PLDA score function having four parameters. Experiments on the male telephone part of the NIST SRE 2010 confirmed the effectiveness of our proposed techniques. For various numbers of training speakers, the combination of weight adjustment and the constrained DT scheme gave between 7% and 19% relative improvement in C_llr over GT followed by score calibration. Compared to another baseline, DT of all the parameters of the PLDA score function, the improvements were larger.
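
As a rough illustration of the ideas described in the abstract, the sketch below discriminatively trains a small-parameter transform of PLDA scores with a trial-weighted cross-entropy loss and reports C_llr. The caveats: the specific four-parameter transform, the weighting scheme, and the synthetic scores are illustrative assumptions rather than the paper's actual parameterization or data; only the C_llr definition and the general cross-entropy form of discriminative score calibration follow standard speaker-verification practice.

```python
import numpy as np
from scipy.optimize import minimize

LOG2 = np.log(2.0)

def cllr(tar_llrs, non_llrs):
    """Log-likelihood-ratio cost C_llr (in bits), the calibration-sensitive
    metric quoted in the abstract."""
    c_tar = np.mean(np.logaddexp(0.0, -tar_llrs)) / LOG2
    c_non = np.mean(np.logaddexp(0.0, non_llrs)) / LOG2
    return 0.5 * (c_tar + c_non)

def transform(scores, params):
    """Hypothetical four-parameter monotone transform of the raw PLDA score.
    The abstract does not give the paper's parameterization; this affine-plus-
    softplus form, t(s) = a*s + b + c*log(1 + exp(d*s)), is only a stand-in."""
    a, b, c, d = params
    return a * scores + b + c * np.logaddexp(0.0, d * scores)

def weighted_dt_loss(params, tar, non, w_tar, w_non):
    """Trial-weighted logistic (cross-entropy) DT objective on transformed
    scores. The weights stand in for the paper's adjustment that compensates
    for statistical dependencies among trials sharing speakers; uniform
    weights reduce this to ordinary cross-entropy (C_llr-style) training."""
    t_tar, t_non = transform(tar, params), transform(non, params)
    loss_tar = np.sum(w_tar * np.logaddexp(0.0, -t_tar)) / np.sum(w_tar)
    loss_non = np.sum(w_non * np.logaddexp(0.0, t_non)) / np.sum(w_non)
    return 0.5 * (loss_tar + loss_non)

# Toy usage with synthetic PLDA scores and uniform trial weights.
rng = np.random.default_rng(0)
tar = rng.normal(4.0, 2.0, 500)      # synthetic target-trial scores
non = rng.normal(-2.0, 2.0, 5000)    # synthetic non-target-trial scores
w_tar, w_non = np.ones_like(tar), np.ones_like(non)

res = minimize(weighted_dt_loss, x0=[1.0, 0.0, 0.0, 1.0],
               args=(tar, non, w_tar, w_non), method="Nelder-Mead")
print("C_llr before DT:", cllr(tar, non))
print("C_llr after  DT:", cllr(transform(tar, res.x), transform(non, res.x)))
```

In this toy setup the transform acts much like the score calibration mentioned in the abstract; the paper's constrained DT additionally restricts which parameters of the PLDA score function are trained, which is what makes it robust when few training speakers are available.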


