
Multi-Modal Human State Perception For Pervasive Computing



Abstract

Pervasive/ubiquitous computing delivers human-centered services anywhere and anytime, adapted to human needs. Multi-modal human state perception for the necessary context awareness is a key issue in building such a "human-centered" pervasive computing environment. In this paper, we design and implement a platform for human visual and physiological perception that directly provides human identity, facial expression, and physiological information. By integrating and modeling the facial and physiological state information, we can also derive some possible deeper emotional information. These contexts serve further pervasive computing applications such as healthcare.
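
The abstract does not specify how the facial and physiological modalities are combined; a simple late-fusion scheme is one plausible reading. The sketch below is illustrative only: the emotion label set, per-modality probability vectors, and fusion weight are all assumptions, not details from the paper.

    # Minimal late-fusion sketch (illustrative; not the authors' method).
    # Assumes each modality already yields a probability vector over a
    # hypothetical emotion label set.
    import numpy as np

    EMOTIONS = ["neutral", "happy", "stressed"]  # hypothetical labels

    def fuse(face_probs: np.ndarray, physio_probs: np.ndarray,
             w_face: float = 0.6) -> str:
        """Weighted late fusion of per-modality emotion probabilities."""
        combined = w_face * face_probs + (1.0 - w_face) * physio_probs
        return EMOTIONS[int(np.argmax(combined))]

    if __name__ == "__main__":
        # Hypothetical outputs of a facial-expression classifier and a
        # physiological-signal classifier, respectively.
        face_probs = np.array([0.2, 0.7, 0.1])    # face model leans "happy"
        physio_probs = np.array([0.3, 0.2, 0.5])  # physiology leans "stressed"
        print(fuse(face_probs, physio_probs))     # -> "happy" with w_face = 0.6

In practice the fusion weight would be tuned, or replaced by a learned model over joint features; the point here is only to show how two modality-level estimates can be merged into one human-state context.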


