IEEE Transactions on Industrial Informatics

Cross-Modal Surface Material Retrieval Using Discriminant Adversarial Learning

Abstract

The surface properties of an object play a vital role in tasks such as robotic manipulation or interaction with the surrounding environment. Tactile sensing can provide rich information about the surface properties of an object through physical contact. Hence, how to convey tactile information to the user and interpret it is a significant problem in human-machine interaction. To this end, a visual-tactile cross-modal retrieval framework is proposed for perceptual estimation by associating tactile information with visual information of material surfaces. Namely, the tactile information of an unknown material surface can be used to retrieve perceptually similar surfaces from an available set of visual surface samples. For the proposed framework, we develop a discriminant adversarial learning method that incorporates intramodal discriminant, cross-modal correlation, and intermodal consistency into a deep learning network for common feature representation learning. Experimental results on a publicly available data set show that the proposed framework and method are effective.
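The abstract names three training objectives (intramodal discriminant, cross-modal correlation, intermodal consistency) combined with adversarial learning for common representations. The following is a minimal sketch of how such objectives could be paired with a modality discriminator; the encoder shapes, the concrete loss choices (cross-entropy, MSE, KL divergence), and the loss weighting are assumptions, not the authors' specification.

```python
# Sketch of discriminant adversarial training for visual-tactile embeddings.
# All architectural details and loss forms below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim):
    """Small two-layer encoder used for both branches (illustrative)."""
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

class CrossModalNet(nn.Module):
    def __init__(self, vis_dim=512, tac_dim=128, emb_dim=64, n_classes=10):
        super().__init__()
        self.vis_enc = mlp(vis_dim, emb_dim)        # visual branch -> common space
        self.tac_enc = mlp(tac_dim, emb_dim)        # tactile branch -> common space
        self.classifier = nn.Linear(emb_dim, n_classes)  # intramodal discriminant head
        self.modality_disc = mlp(emb_dim, 2)        # adversary: visual vs. tactile

    def forward(self, vis, tac):
        return self.vis_enc(vis), self.tac_enc(tac)

def encoder_losses(net, vis, tac, labels):
    """Losses for the encoders: stay class-separable, align paired samples,
    agree across modalities, and fool the modality discriminator."""
    zv, zt = net(vis, tac)
    pv, pt = net.classifier(zv), net.classifier(zt)
    # Intramodal discriminant: embeddings of each modality remain class-separable.
    l_disc = F.cross_entropy(pv, labels) + F.cross_entropy(pt, labels)
    # Cross-modal correlation: paired visual/tactile samples map close together.
    l_corr = F.mse_loss(zv, zt)
    # Intermodal consistency: the two modalities agree on class posteriors.
    l_cons = F.kl_div(F.log_softmax(pt, dim=1), F.softmax(pv, dim=1),
                      reduction="batchmean")
    # Adversarial term with flipped targets: encoders try to confuse the adversary.
    dv, dt = net.modality_disc(zv), net.modality_disc(zt)
    ones = torch.ones(len(vis), dtype=torch.long)
    zeros = torch.zeros(len(tac), dtype=torch.long)
    l_adv = F.cross_entropy(dv, ones) + F.cross_entropy(dt, zeros)
    return l_disc + l_corr + l_cons + 0.1 * l_adv   # 0.1 weight is arbitrary

def discriminator_loss(net, vis, tac):
    """Loss for the modality discriminator: tell visual (0) from tactile (1)."""
    with torch.no_grad():
        zv, zt = net(vis, tac)
    dv, dt = net.modality_disc(zv), net.modality_disc(zt)
    zeros = torch.zeros(len(vis), dtype=torch.long)
    ones = torch.ones(len(tac), dtype=torch.long)
    return F.cross_entropy(dv, zeros) + F.cross_entropy(dt, ones)
```

In such a scheme the encoders and the modality discriminator are updated alternately, so the common features become modality-invariant while staying discriminative for material classes.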