Applied Soft Computing

Feedforward kernel neural networks, generalized least learning machine, and its deep learning with application to image classification



Abstract

In this paper, the architecture of feedforward kernel neural networks (FKNN) is proposed, which subsumes a considerably large family of existing feedforward neural networks and hence can meet most practical requirements. Contrary to the common understanding of learning, it is revealed that once the number of hidden nodes in every hidden layer and the type of the adopted kernel-based activation functions are fixed in advance, a special kernel principal component analysis (KPCA) is always implicitly executed. Consequently, none of the hidden layers of such networks need to be tuned: their parameters can be randomly assigned and may even be independent of the training data. The least learning machine (LLM) is therefore extended into a generalized version that admits a much broader class of error functions rather than the mean squared error (MSE) function only. As an additional merit, it is also revealed that the rigorous Mercer kernel condition is not required in FKNN networks. When the proposed FKNN architecture is constructed layer by layer, i.e., when the number of hidden nodes in every hidden layer is determined solely by the principal components extracted through an explicit KPCA, a deep FKNN architecture can be developed whose deep learning framework (DLF) has a strong theoretical guarantee. Our experimental results on image classification show that the proposed deep FKNN architecture and its DLF-based learning indeed enhance classification performance. (C) 2015 Elsevier B.V. All rights reserved.
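The layer-wise construction described in the abstract can be sketched as follows: each hidden layer is replaced by an explicit KPCA whose retained components fix the layer width, and only the output weights are solved in closed form, so no hidden layer is iteratively tuned. This is a minimal illustrative sketch, not the paper's exact algorithm — the Gaussian kernel, the 95% variance threshold, the ridge regularizer, and the two-layer stacking are all assumptions made here for the demo.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def kpca_layer(X, var_ratio=0.95, gamma=1.0):
    # Explicit KPCA hidden layer: the number of retained principal
    # components (the layer width) is chosen from the cumulative
    # eigenvalue ratio -- the 95% threshold is an assumption here.
    n = X.shape[0]
    K = gaussian_kernel(X, X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc = H @ K @ H
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1]                    # sort descending
    w, V = np.clip(w[idx], 0, None), V[:, idx]
    ratio = np.cumsum(w) / np.sum(w)
    k = int(np.searchsorted(ratio, var_ratio) + 1)
    alpha = V[:, :k] / np.sqrt(w[:k] + 1e-12)
    return Kc @ alpha                            # projected features

def fit_readout(F, Y, reg=1e-6):
    # Closed-form regularized least-squares output weights (LLM-style):
    # the only trained parameters in this sketch.
    return np.linalg.solve(F.T @ F + reg * np.eye(F.shape[1]), F.T @ Y)

# Toy two-class demo on synthetic data (illustration only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
Y = np.vstack([np.tile([1, 0], (30, 1)), np.tile([0, 1], (30, 1))])
F1 = kpca_layer(X, gamma=0.5)                    # first FKNN layer
F2 = kpca_layer(F1, gamma=0.5)                   # stacked (deep) layer
W = fit_readout(F2, Y)
acc = np.mean(np.argmax(F2 @ W, 1) == np.argmax(Y, 1))
```

Because every hidden layer is a deterministic KPCA projection, the whole deep network is trained without backpropagation; only the final linear readout involves a solve.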
