IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

A multichannel MMSE-based framework for joint blind source separation and noise reduction



Abstract

In this paper, we propose a new framework to separate multiple speech signals and reduce the additive acoustic noise using multiple microphones. In this framework, we start by formulating the minimum-mean-square error (MMSE) criterion to retrieve each of the desired speech signals from the observed mixtures of sounds and outline the importance of multi-speaker activity detection. The latter is modeled by introducing a latent variable whose posterior probability is computed via expectation maximization (EM) combining both the spatial and spectral cues of the multichannel speech observations. We experimentally demonstrate that the resulting joint blind source separation (BSS) and noise reduction solution performs remarkably well in reverberant and noisy environments.
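The abstract includes no code, but the general recipe it describes (a per-bin Gaussian observation model, a binary speech-activity latent variable whose posterior is computed with EM from spatial and spectral statistics, and an MMSE estimate weighted by that posterior) can be illustrated with a minimal NumPy sketch. The single-source, single-frequency-bin toy model below, the known steering vector `a`, the moment-style M-step updates, and all parameter values are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: M microphones, T frames of a single STFT frequency bin,
# one target source with on/off activity plus additive noise.
M, T = 4, 500
a = rng.standard_normal(M) + 1j * rng.standard_normal(M)      # steering vector (assumed known)
a /= np.linalg.norm(a)
phi_s_true, noise_var = 2.0, 0.3
active = rng.random(T) < 0.5                                   # ground-truth activity
s = np.sqrt(phi_s_true / 2) * (rng.standard_normal(T) + 1j * rng.standard_normal(T)) * active
n = np.sqrt(noise_var / 2) * (rng.standard_normal((T, M)) + 1j * rng.standard_normal((T, M)))
Y = s[:, None] * a[None, :] + n                                # observed multichannel frames

def log_cgauss(Y, Sigma):
    """Log-density of a zero-mean circular complex Gaussian, one value per row of Y."""
    L = np.linalg.cholesky(Sigma)
    z = np.linalg.solve(L, Y.T).T                              # whitened observations
    logdet = 2.0 * np.sum(np.log(np.abs(np.diag(L))))
    return -M * np.log(np.pi) - logdet - np.sum(np.abs(z) ** 2, axis=1)

# EM over the binary activity variable z_t: the E-step computes the posterior
# gamma_t = p(z_t = 1 | y_t); the M-step re-estimates source power, noise
# covariance and the activity prior (simple moment-style updates, assumed here).
phi_s, Phi_n, prior = 1.0, np.eye(M, dtype=complex), 0.5
for _ in range(30):
    Sigma1 = phi_s * np.outer(a, a.conj()) + Phi_n             # observation covariance when active
    log_p1 = np.log(prior) + log_cgauss(Y, Sigma1)
    log_p0 = np.log(1.0 - prior) + log_cgauss(Y, Phi_n)
    gamma = 1.0 / (1.0 + np.exp(np.clip(log_p0 - log_p1, -50, 50)))

    w = phi_s * np.linalg.solve(Sigma1, a)                     # Wiener filter given activity
    s_hat = Y @ w.conj()                                       # E[s | y, z=1]
    s_var = phi_s - phi_s**2 * np.real(a.conj() @ np.linalg.solve(Sigma1, a))
    phi_s = float(np.sum(gamma * (np.abs(s_hat) ** 2 + s_var)) / max(np.sum(gamma), 1e-8))
    resid = Y - gamma[:, None] * s_hat[:, None] * a[None, :]   # crude noise residual
    Phi_n = resid.conj().T @ resid / T + 1e-6 * np.eye(M)
    prior = float(np.clip(gamma.mean(), 1e-3, 1 - 1e-3))

s_mmse = gamma * s_hat                                         # MMSE output weighted by activity posterior
print(f"estimated source power: {phi_s:.2f}, mean activity posterior: {gamma.mean():.2f}")
```

In this sketch the posterior gamma plays the role of the multi-speaker activity detector highlighted in the abstract: frames with low posterior contribute little to the source-power update and are attenuated in the final MMSE output, which is how activity detection and noise reduction are coupled in a single estimator.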
