Annual Conference on Neural Information Processing Systems

Comparison of objective functions for estimating linear-nonlinear models



Abstract

This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this model, the neural firing rate is a nonlinear function of a small number of relevant stimulus components. The relevant stimulus dimensions can be found by maximizing one of a family of objective functions, Renyi divergences of different orders [1, 2]. We show that maximizing one of them, the Renyi divergence of order 2, is equivalent to least-squares fitting of the linear-nonlinear model to neural data. Next, we derive the reconstruction errors in the relevant dimensions found by maximizing Renyi divergences of arbitrary order, in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with the Renyi divergence of order 1, also known as the Kullback-Leibler divergence. This corresponds to finding relevant dimensions by maximizing mutual information [2]. We numerically test how these optimization schemes perform in the regime of low signal-to-noise ratio (a small number of spikes and increasing neural noise) for model visual neurons. We find that optimization schemes based on either least-squares fitting or information maximization perform well even when the number of spikes is small. Information maximization provides slightly, but significantly, better reconstructions than least-squares fitting. This makes the problem of finding relevant dimensions, together with the problem of lossy compression [3], one example where information-theoretic measures are no more data-limited than those derived from least squares.
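The order-1 (Kullback-Leibler) objective described above can be illustrated with a minimal numerical sketch. The code below simulates a model linear-nonlinear neuron driven by Gaussian white-noise stimuli, then evaluates the empirical mutual information between spiking and the stimulus projection onto a candidate dimension; the true relevant dimension yields a larger objective value than a random direction. All parameter values (stimulus dimensionality, nonlinearity, bin count) are illustrative assumptions, not taken from the paper, and the histogram-based estimator is a simplification of the methods compared there.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 10, 200_000  # stimulus dimensionality and sample count (illustrative)

# True relevant dimension (unit vector) -- a hypothetical choice
v_true = np.zeros(D)
v_true[0] = 1.0

# Gaussian white-noise stimuli and an LN neuron with a sigmoidal nonlinearity
X = rng.standard_normal((N, D))
proj = X @ v_true
p_spike = 1.0 / (1.0 + np.exp(-(proj - 1.0)))  # firing probability g(x . v)
spikes = rng.random(N) < p_spike

def mutual_info(v, X, spikes, bins=30):
    """Empirical information (bits) carried by the projection x . v about
    spiking: the KL divergence between P(x . v | spike) and P(x . v),
    i.e. the order-1 Renyi objective from the abstract."""
    s = X @ (v / np.linalg.norm(v))
    edges = np.histogram_bin_edges(s, bins=bins)
    p_all, _ = np.histogram(s, bins=edges)
    p_spk, _ = np.histogram(s[spikes], bins=edges)
    p_all = p_all / p_all.sum()
    p_spk = p_spk / p_spk.sum()
    mask = (p_spk > 0) & (p_all > 0)  # avoid log(0) in empty bins
    return np.sum(p_spk[mask] * np.log2(p_spk[mask] / p_all[mask]))

I_true = mutual_info(v_true, X, spikes)
I_rand = mutual_info(rng.standard_normal(D), X, spikes)
print(I_true, I_rand)  # the true dimension is more informative
```

In a full estimation scheme this objective would be maximized over candidate vectors v (e.g. by gradient ascent on a smoothed estimator), which is where the asymptotic error comparison between divergence orders becomes relevant.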
