IEEE Transactions on Neural Networks and Learning Systems > Quantized Kernel Least Mean Square Algorithm

Quantized Kernel Least Mean Square Algorithm


Abstract

In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize, and hence compress, the input (or feature) space. Unlike sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, based on a simple online vector quantization method. An analytical study of the mean square convergence is carried out. The energy conservation relation for QKLMS is established, and on this basis we derive a sufficient condition for mean square convergence, as well as lower and upper bounds on the theoretical steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance of the proposed algorithm.
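The abstract's core idea can be sketched in a few lines: maintain a codebook of centers and one coefficient per center; when a new input lies within a quantization radius of an existing center, update that center's coefficient instead of growing the network. Below is a minimal illustrative sketch, assuming a Gaussian kernel and the parameter names (`step_size`, `quant_size`, `kernel_width`) as placeholders; it is not the authors' reference implementation.

```python
import numpy as np

def gaussian_kernel(a, b, width=1.0):
    """Gaussian (RBF) kernel between rows of a and the vector b."""
    diff = np.atleast_2d(a) - np.atleast_2d(b)
    return np.exp(-np.sum(diff ** 2, axis=1) / (2 * width ** 2))

class QKLMS:
    """Minimal sketch of quantized kernel LMS (parameter names are illustrative).

    centers : codebook built by online vector quantization
    alphas  : one expansion coefficient per center
    """
    def __init__(self, step_size=0.5, quant_size=0.1, kernel_width=1.0):
        self.eta = step_size     # LMS step size
        self.eps = quant_size    # quantization threshold (epsilon_U in the paper)
        self.width = kernel_width
        self.centers = []
        self.alphas = []

    def predict(self, u):
        if not self.centers:
            return 0.0
        k = gaussian_kernel(np.array(self.centers), u, self.width)
        return float(np.dot(self.alphas, k))

    def update(self, u, d):
        """One online step: predict, compute the error, quantize the input."""
        e = d - self.predict(u)
        u = np.asarray(u, dtype=float)
        if not self.centers:
            self.centers.append(u)
            self.alphas.append(self.eta * e)
            return e
        dists = np.linalg.norm(np.array(self.centers) - u, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= self.eps:
            # "Redundant" input: reuse it to update the closest center's coefficient.
            self.alphas[j] += self.eta * e
        else:
            # Novel input: grow the radial basis function network by one center.
            self.centers.append(u)
            self.alphas.append(self.eta * e)
        return e
```

Because inputs within the quantization radius never add a center, the codebook size is bounded by a covering of the input region rather than by the number of samples, which is exactly the growth-curbing effect the abstract describes.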
