IEEE MultiMedia

An End-to-End Framework for Clothing Collocation Based on Semantic Feature Fusion

Abstract

In this article, we develop an end-to-end clothing collocation learning framework based on a bidirectional long short-term memory (Bi-LSTM) model, and propose new feature extraction and fusion modules. The feature extraction module uses Inception V3 to extract low-level feature information and the segmentation branch of a Mask Region-based Convolutional Neural Network (Mask R-CNN) to extract high-level semantic information, whereas the feature fusion module creates a new reference vector for each image to fuse the two types of image feature information. As a result, the fused feature incorporates both low-level image information and high-level semantic information, which enhances the performance of the Bi-LSTM. Extensive experiments are conducted on the Polyvore and DeepFashion2 datasets. Experimental results verify the effectiveness of the proposed method compared with other state-of-the-art clothing collocation methods.
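The abstract describes a two-branch feature pipeline feeding a Bi-LSTM. Below is a minimal sketch of that structure in PyTorch; the feature dimensions, the linear projections, and the additive combination standing in for the paper's per-image reference vector are all assumptions, since the abstract does not specify how the Inception V3 and Mask R-CNN features are actually merged.

```python
import torch
import torch.nn as nn

class CollocationSketch(nn.Module):
    """Hypothetical sketch: fuse low-level CNN features and high-level
    semantic (segmentation) features per item, then score the outfit
    sequence with a bidirectional LSTM."""

    def __init__(self, low_dim=2048, sem_dim=256, fused_dim=512, hidden=256):
        super().__init__()
        # Assumed fusion: project both feature types into a shared space
        # and add them; the paper's "reference vector" construction is
        # not detailed in the abstract.
        self.proj_low = nn.Linear(low_dim, fused_dim)   # e.g. Inception V3 pooled features
        self.proj_sem = nn.Linear(sem_dim, fused_dim)   # e.g. Mask R-CNN mask-branch features
        self.bilstm = nn.LSTM(fused_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)           # per-item compatibility score

    def forward(self, low_feats, sem_feats):
        # low_feats: (batch, outfit_len, low_dim)
        # sem_feats: (batch, outfit_len, sem_dim)
        fused = torch.tanh(self.proj_low(low_feats) + self.proj_sem(sem_feats))
        out, _ = self.bilstm(fused)                     # (batch, outfit_len, 2*hidden)
        return self.score(out).squeeze(-1)              # (batch, outfit_len)


# Quick shape check with random tensors standing in for extracted features.
model = CollocationSketch()
scores = model(torch.randn(2, 5, 2048), torch.randn(2, 5, 256))
print(scores.shape)  # torch.Size([2, 5])
```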
