Frontiers in Neuroengineering

Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain–Computer Interface Feature Extraction



Abstract

The clock speeds of modern computer processors have nearly plateaued over the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck: it may not be possible to process all of the data recorded from electrode arrays with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing in a brain–computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix–matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and their speed was compared to both the current implementation and a multi-threaded central processing unit-based implementation. Significant performance gains were obtained with GPU processing: the current implementation processed 250 ms of data from 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
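The two GPU-offloaded stages of the signal processing chain can be illustrated on the CPU side. The following is a minimal NumPy sketch, not the authors' implementation: the function names, filter weights, AR model order, and frequency grid are all illustrative, and the PSD uses a Yule-Walker autoregressive fit as one common realization of "an auto-regressive method."

```python
import numpy as np

def spatial_filter(data, weights):
    """Spatial filter: a matrix-matrix multiply mixing raw channels
    into filtered channels. Shapes: (n_out, n_ch) @ (n_ch, n_samples)."""
    return weights @ data

def ar_psd(x, order=16, n_freqs=64):
    """PSD of one channel via an illustrative Yule-Walker AR fit."""
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Solve the Yule-Walker equations R a = -r[1:] for AR coefficients
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, -r[1:])
    sigma2 = r[0] + np.dot(a, r[1:])  # driving-noise variance
    # Evaluate the AR spectrum on a normalized frequency grid [0, 0.5)
    freqs = np.linspace(0.0, 0.5, n_freqs, endpoint=False)
    phases = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    return sigma2 / np.abs(1.0 + phases @ a) ** 2

# Pipeline over one simulated 250 ms block of 32 channels
# (e.g. 1200 Hz sampling -> 300 samples; both numbers are illustrative).
rng = np.random.default_rng(0)
raw = rng.standard_normal((32, 300))
W = rng.standard_normal((32, 32)) / 32        # illustrative spatial-filter weights
filtered = spatial_filter(raw, W)
features = np.vstack([ar_psd(ch) for ch in filtered])
print(features.shape)  # one PSD feature vector per channel: (32, 64)
```

On the GPU, the spatial filter maps directly to a single parallel matrix multiply (e.g., a CUDA GEMM over all channels at once), and the per-channel AR fits are independent, so every channel's PSD can be computed concurrently, which is where the reported ~35x speedup comes from.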
