Sensors (Basel, Switzerland)

An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning


Abstract

When blind and deaf people ride in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, existing systems do not expose the fault self-diagnosis information or the instrument-cluster information that indicates the current state of the vehicle while driving. To solve this problem, this paper proposes a deep-learning-based audification and visualization system (AVS) of an autonomous vehicle for blind and deaf people. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user's speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and places the visualized data according to the size of the vehicle's display. The experiment shows that adjusting the visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times faster than doing so on a cloud server. In addition, the overall computational time of the AVS was approximately 2 ms less than that of the existing instrument cluster. Because the proposed AVS lets blind and deaf people select only what they want to hear and see, it reduces the transmission overhead and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can help prevent accidents involving disabled and other passengers.
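The abstract only names the three modules and their roles; the paper's implementation is not reproduced here. The following minimal Python sketch illustrates one way the DCMM, the ACM (with its STS and TWS submodules), and the DVM could be wired together. The class interfaces, the stub STT/TTS callables, and the channel names (speed, engine_temp) are illustrative assumptions, not the paper's code.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class DCMM:
        """Data collection and management module: stores and manages data
        collected from the vehicle (sensor readings, fault self-diagnosis
        codes, instrument-cluster values)."""
        records: Dict[str, float] = field(default_factory=dict)

        def store(self, channel: str, value: float) -> None:
            self.records[channel] = value

    class ACM:
        """Audification conversion module: wraps a speech-to-text submodule
        (STS) and a text-to-wave submodule (TWS), injected as callables so
        real deep-learning engines can replace the stubs used below."""
        def __init__(self, sts: Callable[[bytes], str],
                     tws: Callable[[str], bytes]):
            self.sts = sts
            self.tws = tws

        def answer(self, speech: bytes, dcmm: DCMM) -> bytes:
            query = self.sts(speech)                # STS: user speech -> text
            value = dcmm.records.get(query)         # look up requested channel
            return self.tws(f"{query} is {value}")  # TWS: text -> voice

    class DVM:
        """Data visualization module: places the visualized data according
        to the size of the vehicle's display."""
        def __init__(self, display_width: int):
            self.display_width = display_width

        def layout(self, dcmm: DCMM, selected: List[str]) -> str:
            # Show only the channels the passenger selected, sized to fit.
            slot = self.display_width // max(len(selected), 1)
            return "".join(f"[{ch}: {dcmm.records.get(ch)}]".ljust(slot)
                           for ch in selected)

    if __name__ == "__main__":
        dcmm = DCMM()
        dcmm.store("speed", 62.0)          # hypothetical channel names
        dcmm.store("engine_temp", 90.5)

        # Stub engines standing in for the STS/TWS deep-learning models.
        acm = ACM(sts=lambda audio: "speed",
                  tws=lambda text: text.encode())
        print(acm.answer(b"<raw audio>", dcmm).decode())  # speed is 62.0

        dvm = DVM(display_width=48)
        print(dvm.layout(dcmm, ["speed", "engine_temp"]))

Injecting the STS and TWS engines as callables mirrors the abstract's split of the ACM into two submodules: passengers ask for a channel by voice, the DCMM answers from locally stored vehicle data, and the DVM renders only the selected channels, which is consistent with the paper's claim that selective output reduces transmission overhead.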