Self-Organizing Neural Network Architecture for Auditory and Speech Perception with Applications to Acoustic and other Temporal Prediction Problems

Abstract

This project is developing autonomous neural network models for the real-time perception and production of acoustic and speech signals. The models have revealed a common mechanism of nonlinear resonance that attentively reorganizes and groups acoustic data while suppressing unexpected noise. The SPINET pitch model transforms acoustic input into a spatial map of pitch whose properties simulate key pitch data. SPINET was embedded into an ARTSTREAM model for auditory scene analysis that separates multiple sound sources from one another. The model groups frequency components into different streams based on pitch and spatial location cues, and it simulates psychophysical grouping data such as frequency grouping across noise or across ear of origin. These resonant streams feed an ARTPHONE model for variable-rate speech categorization. Computer simulations quantitatively reproduce experimentally observed category boundary shifts for VC-CV pairs, including why the silent interval needed to hear a double stop (VC1-C1V) is 150 msec longer than that needed to hear two different stops (VC1-C2V). This model uses resonant feedback between list categories and an automatically gain-controlled working memory.
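
The abstract describes a three-stage pipeline: SPINET pitch mapping, ARTSTREAM streaming, and ARTPHONE categorization. The following is a minimal, purely illustrative sketch of the first two stages, not the reported models: spatial_pitch_map stands in for SPINET with a simple harmonic sieve over an FFT spectrum, and group_components_by_pitch stands in for ARTSTREAM's pitch-based grouping with a crude nearest-harmonic rule. The function names, sample rate, pitch grid, and tolerance are assumptions introduced for this example.

```python
# Illustrative sketch only: a toy, feed-forward stand-in for the SPINET ->
# ARTSTREAM stages described in the abstract. The real models are resonant
# neural networks; here a harmonic sieve over an FFT spectrum plays the role
# of the SPINET pitch map, and a nearest-harmonic rule plays the role of
# ARTSTREAM's pitch-based grouping. All parameters are arbitrary assumptions.
import numpy as np

FS = 16000  # sample rate in Hz (assumed for this example)


def spatial_pitch_map(frame, fmin=80.0, fmax=400.0, n_harmonics=5):
    """Map a signal frame onto a spatial axis of candidate pitches by summing
    spectral magnitude at each candidate's first few harmonics."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / FS)
    f0_grid = np.arange(fmin, fmax, 2.0)          # candidate pitches (Hz)
    strength = np.zeros_like(f0_grid)
    for i, f0 in enumerate(f0_grid):
        for k in range(1, n_harmonics + 1):
            strength[i] += spectrum[np.argmin(np.abs(freqs - k * f0))]
    return f0_grid, strength / (strength.max() + 1e-12)


def group_components_by_pitch(component_freqs, pitches, tol=0.04):
    """Assign each frequency component to the stream (pitch) whose harmonic
    series it fits best -- a crude proxy for pitch-based stream grouping."""
    streams = {p: [] for p in pitches}
    for f in component_freqs:
        best_p, best_err = None, np.inf
        for p in pitches:
            harmonic = max(1, round(f / p))
            err = abs(f - harmonic * p) / f
            if err < best_err:
                best_p, best_err = p, err
        if best_err < tol:                        # unmatched components are dropped
            streams[best_p].append(f)
    return streams


if __name__ == "__main__":
    # 1) Pitch map for a 120 Hz complex tone (first three harmonics).
    t = np.arange(0, 0.2, 1.0 / FS)
    tone = sum(np.sin(2 * np.pi * 120 * k * t) for k in (1, 2, 3))
    f0_grid, strength = spatial_pitch_map(tone)
    print("strongest pitch (Hz):", f0_grid[np.argmax(strength)])

    # 2) Group a mixture's measured components into two pitch-defined streams,
    #    as ARTSTREAM does with pitch (and, in the full model, location) cues.
    components = [120, 240, 360, 200, 400, 800]   # component frequencies (Hz)
    print(group_components_by_pitch(components, pitches=[120.0, 200.0]))
```

Spatial-location cues and the ARTPHONE working-memory stage are omitted here, and the reported models operate through nonlinear resonant feedback rather than the feed-forward calls shown in this sketch.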
