Journal: Transportation Research

Multi-scale driver behavior modeling based on deep spatial-temporal representation for intelligent vehicles



Abstract

Mutual understanding between driver and vehicle is critical to realizing intelligent vehicles and customized interaction interfaces. In this study, a unified driver behavior modeling system for multi-scale behavior recognition is proposed to enhance the driver behavior reasoning ability of intelligent vehicles. Specifically, the driver behavior recognition system is designed to simultaneously recognize the driver's physical and mental states based on a deep encoder-decoder framework. The model jointly learns to recognize driver behaviors at different time scales: mirror checking and facial expression state (physical behaviors), as well as intention and emotion (mental behaviors). The encoder module is designed around a deep convolutional neural network (CNN) that captures spatial information from the input video stream. Several decoders for estimating the different driver states are then built from fully-connected (FC) layers and long short-term memory (LSTM) based recurrent neural networks (RNNs). Two naturalistic datasets are used to investigate the model's performance: a local highway dataset, CranData, and the public Brain4Cars dataset. Based on the spatial-temporal representation of driver physical behavior, the results show that observed physical behaviors can be used to model latent mental behaviors through the proposed end-to-end learning process. Testing on these two datasets yields state-of-the-art results on mirror-checking behavior, intention, and emotion recognition. With the proposed system, intelligent vehicles can gain a holistic understanding of the driver's physical and psychological behaviors to better collaborate and interact with the human driver, and the driver behavior reasoning system helps reduce conflicts between the human and vehicle automation.
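The encoder-decoder pipeline the abstract describes (a CNN encoder producing per-frame spatial features, a shared temporal model, and separate FC decoder heads for each driver state) can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the single linear projection stands in for the deep CNN, and all dimensions and class counts (`FRAME_DIM`, `HEADS`, etc.) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME_DIM = 16 * 16      # toy flattened "frame" size (assumption)
FEAT_DIM = 32            # encoder feature size (assumption)
HID_DIM = 24             # LSTM hidden size (assumption)
HEADS = {"mirror_check": 3, "intention": 5, "emotion": 4}  # class counts are illustrative

# Encoder stand-in: one linear map per frame (the paper uses a deep CNN).
W_enc = rng.standard_normal((FRAME_DIM, FEAT_DIM)) * 0.1

# One shared LSTM over the per-frame features (temporal representation).
W_lstm = rng.standard_normal((FEAT_DIM + HID_DIM, 4 * HID_DIM)) * 0.1
b_lstm = np.zeros(4 * HID_DIM)

# One FC decoder head per driver state (multi-task outputs).
W_head = {k: rng.standard_normal((HID_DIM, n)) * 0.1 for k, n in HEADS.items()}

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def lstm_step(x, h, c):
    # Standard LSTM cell: input, forget, cell, and output gates.
    z = np.concatenate([x, h]) @ W_lstm + b_lstm
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def recognize(video):
    """video: array of shape (T, FRAME_DIM); returns per-head class probabilities."""
    h = np.zeros(HID_DIM)
    c = np.zeros(HID_DIM)
    for frame in video:
        feat = np.tanh(frame @ W_enc)   # spatial representation of one frame
        h, c = lstm_step(feat, h, c)    # accumulate temporal context
    # Each decoder head reads the same shared representation.
    return {k: softmax(h @ W) for k, W in W_head.items()}

probs = recognize(rng.standard_normal((8, FRAME_DIM)))
```

The key design point this sketch mirrors is the shared spatial-temporal backbone with task-specific decoders, which lets behaviors at different time scales (fast mirror checks, slower intention and emotion) be learned jointly from one video stream.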
