IEEE Transactions on Pattern Analysis and Machine Intelligence

Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate Coding and Coincidence Processing--Application to Feedforward ConvNets



Abstract

Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS), where each pixel computes relative changes of light, or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNets) trained to recognize rotating human silhouettes or high-speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules.
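The event-driven convolution processing the abstract describes can be illustrated with a minimal software sketch: each incoming DVS event (pixel address, timestamp, polarity) adds a polarity-weighted convolution kernel to an internal neuron-state map, and any neuron whose state crosses a firing threshold immediately emits an output event and resets. This is a simplified illustration under assumed conventions (tuple-based events, a single module, hypothetical `threshold` parameter), not the authors' hardware modules or simulator.

```python
import numpy as np

def event_driven_conv(events, kernel, shape, threshold=1.0):
    """Process a DVS event stream through one event-driven convolution module.

    events : iterable of (x, y, t, p) tuples, p in {+1, -1} (polarity)
    kernel : 2-D convolution kernel
    shape  : (height, width) of the neuron-state map
    Returns the list of output events, each (x, y, t, sign).
    """
    state = np.zeros(shape)
    kh, kw = kernel.shape
    out = []
    for x, y, t, p in events:
        # Add the polarity-weighted kernel centred on the event address.
        for i in range(kh):
            for j in range(kw):
                xi, yj = x + i - kh // 2, y + j - kw // 2
                if 0 <= xi < shape[0] and 0 <= yj < shape[1]:
                    state[xi, yj] += p * kernel[i, j]
                    # A neuron crossing +/- threshold fires an output
                    # event right away and resets: output events are
                    # produced "as the input events flow".
                    if abs(state[xi, yj]) >= threshold:
                        out.append((xi, yj, t, int(np.sign(state[xi, yj]))))
                        state[xi, yj] = 0.0
    return out
```

Because each event is handled as it arrives, the output event flow is nearly coincident in time with the input flow, which is the property that lets a cascade of such modules recognize objects as soon as enough meaningful events have been delivered.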


