Journal: Cartography and Geographic Information Science

Detecting dynamic visual attention in augmented reality aided navigation environment based on a multi-feature integration fully convolutional network

Abstract

Visual attention detection, an important concept in research on human visual behavior, has been widely studied. However, previous studies have seldom incorporated a feature integration mechanism into visual attention detection, and have rarely accounted for differences across geographical scenes. In this paper, we use an augmented reality aided (AR-aided) navigation experimental dataset to study human visual behavior in a dynamic AR-aided environment. We then propose a multi-feature integration fully convolutional network (M-FCN) based on a self-adaptive environment weight (SEW) that integrates RGB-D, semantic, optical flow, and spatial neighborhood features to detect human visual attention. The results show that the M-FCN outperforms other state-of-the-art saliency models. In addition, introducing the feature integration mechanism and the SEW improves both the accuracy and the robustness of visual attention detection. We also find that RGB-D and semantic features perform best across different road routes and road types; however, as road type complexity increases, the expressiveness of these two features weakens while that of the optical flow and spatial neighborhood features increases. This research is helpful for the design of AR-device navigation tools and for urban spatial planning.
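The core idea of the SEW-based integration is to combine several per-feature saliency maps into one attention map using weights that adapt to the scene. The abstract does not give the exact formulation, so the following is only a minimal NumPy sketch under the assumption that the environment weights are softmax-normalized confidence scores (the function names and scores here are hypothetical, not the paper's API):

```python
import numpy as np

def adaptive_weights(scores):
    """Softmax-normalize per-feature scores into fusion weights.

    Hypothetical stand-in for the paper's self-adaptive environment
    weight (SEW); the real weights are learned from the scene.
    """
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def fuse_features(feature_maps, scores):
    """Weighted sum of per-feature saliency maps.

    feature_maps: list of 4 arrays of shape (H, W), one each for the
    RGB-D, semantic, optical flow, and spatial neighborhood features.
    """
    w = adaptive_weights(np.asarray(scores, dtype=float))
    stacked = np.stack(feature_maps)           # shape (4, H, W)
    return np.tensordot(w, stacked, axes=1)    # shape (H, W)

# Toy example: four random feature maps; a complex road scene might
# shift weight toward optical flow and spatial neighborhood features.
rng = np.random.default_rng(0)
maps = [rng.random((4, 4)) for _ in range(4)]
fused = fuse_features(maps, scores=[1.2, 0.8, 0.3, 0.1])
```

Because the weights sum to one, the fused map is a convex combination of the input maps, which matches the abstract's observation that the relative expressiveness of features can shift with road type complexity without changing the output's scale.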
