IEEE International Conference on Fuzzy Systems

Entropy as Temporal Information Density

Abstract

Besides the spatial contents of current sensory inputs, relevant contexts from past frames are very important for temporal processing tasks (e.g., speech recognition, video analysis, and natural language processing). Our Developmental Network (DN) has demonstrated the ability to learn any emergent Turing Machine (TM): it can learn feature patterns from current and attended past natural inputs as its states. We have shown that dense actions can serve as natural sources of contexts, and that the DN can autonomously generate actions as contexts when dealing with sequences. In this work, we use entropy to define and measure the information density of temporal sequences. We also introduce the "free of labeling" property, which helps the DN deal with the large number of states that emerge from dense contexts. We experimented with the DN on phoneme recognition as an example of the auditory modality, but the principles are modality independent. Our experimental results show that the denser the contexts extracted from the sequences, the better the DN performs. By quantifying information density with entropy, we gain a better understanding of how to provide contexts when training the DN. This work is an important step toward enabling machines to autonomously abstract concrete concepts from contexts through lifelong development.
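The abstract does not reproduce the paper's entropy definition, but as a rough illustration, Shannon entropy over the empirical distribution of context states is one standard way to quantify how dense a sequence's contexts are. The sketch below is a minimal illustration under that assumption; the `entropy` helper and the toy state sequences are hypothetical, not taken from the paper.

```python
import math
from collections import Counter

def entropy(sequence):
    """Shannon entropy (in bits) of the empirical distribution of
    symbols in a temporal sequence -- one plain way to measure its
    information density; the paper's exact definition may differ."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Denser contexts yield more distinct states, hence higher entropy.
sparse = ["s1", "s1", "s1", "s2"]  # few distinct context states
dense = ["s1", "s2", "s3", "s4"]   # many distinct context states
print(entropy(sparse))  # ~0.81 bits
print(entropy(dense))   # 2.0 bits
```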
