The Journal of China Universities of Posts and Telecommunications (English edition)

Progressive framework for deep neural networks: from linear to non-linear


Abstract

We propose a novel progressive framework for optimizing deep neural networks. The idea is to combine the stability of linear methods with the ability of deep learning methods to learn complex, abstract internal representations. We insert a linear loss layer between the input layer and the first hidden non-linear layer of a traditional deep model. The optimization objective is a weighted sum of the linear loss of the added layer and the non-linear loss of the final output layer. For cross-modal retrieval tasks such as text-to-image and image-to-text search, we modify the model structure of deep canonical correlation analysis (DCCA) by adding a third semantic view to regularize text-image pairs, and we embed the resulting structure in our framework. Experimental results show that the modified model outperforms comparable state-of-the-art approaches on the NUS-WIDE dataset from the National University of Singapore. To validate the generalization ability of our framework, we also apply it to RankNet, a ranking model optimized by stochastic gradient descent. Our method outperforms RankNet and converges more quickly, indicating that the progressive framework can provide a better and faster solution for deep neural networks.
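The weighted objective described above can be illustrated with a short sketch. The following is a minimal PyTorch rendering under stated assumptions, not the authors' implementation: the layer sizes, the mixing weight `alpha`, the `ProgressiveNet`/`progressive_loss` names, and the use of mean-squared error for both branches are illustrative choices; the paper's actual losses (e.g., the DCCA correlation objective) differ.

```python
import torch
import torch.nn as nn

class ProgressiveNet(nn.Module):
    """Sketch of a deep model with an extra linear loss branch.

    The linear branch is attached between the input layer and the first
    hidden non-linear layer, as the abstract describes; its output is
    still a linear function of the input.
    """

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.first = nn.Linear(in_dim, hidden_dim)   # layer before the first non-linearity
        self.rest = nn.Sequential(                   # remaining non-linear stack
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )
        self.linear_head = nn.Linear(hidden_dim, out_dim)  # added linear loss layer

    def forward(self, x):
        h = self.first(x)                    # pre-activation, linear in x
        return self.linear_head(h), self.rest(h)

def progressive_loss(model, x, y, alpha: float = 0.5):
    """Weighted sum of the linear branch's loss and the deep branch's loss."""
    linear_out, deep_out = model(x)
    mse = nn.functional.mse_loss
    return alpha * mse(linear_out, y) + (1.0 - alpha) * mse(deep_out, y)

# Usage sketch: one optimization step on random data.
model = ProgressiveNet(in_dim=8, hidden_dim=16, out_dim=1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 8), torch.randn(32, 1)
opt.zero_grad()
loss = progressive_loss(model, x, y, alpha=0.5)
loss.backward()
opt.step()
```

Because both `self.first` and `self.linear_head` are linear maps, the auxiliary branch behaves like a plain linear model on the input, which is what lends the combined objective its stability while the deep branch retains full representational power.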

