
A computer vision approach for classifying isometric grip force exertion levels



Abstract

Exposure to high and/or repetitive force exertions can lead to musculoskeletal injuries. However, measuring worker force exertion levels is challenging, and existing techniques can be intrusive, interfere with the human-machine interface, and/or be limited by subjectivity. In this work, computer vision techniques are developed to detect isometric grip exertions using facial videos and a wearable photoplethysmogram. Eighteen participants (19-24 years) performed isometric grip exertions at varying levels of maximum voluntary contraction (MVC). Novel features that predict forces were identified and extracted from the video and photoplethysmogram data. Two experiments with two (High/Low) and three (0% MVC/50% MVC/100% MVC) labels were performed to classify exertions. The Deep Neural Network classifier performed best, with 96% and 87% accuracy for the two- and three-level classifications, respectively. The approach was robust to leaving subjects out during cross-validation (86% accuracy when 3 subjects were left out) and robust to noise (i.e. 89% accuracy in correctly classifying talking activities as low force exertions). Practitioner summary: Forceful exertions are contributing factors to musculoskeletal injuries, yet they remain difficult to measure in work environments. This paper presents an approach to estimating force exertion levels that is less distracting to workers, easier for practitioners to implement, and could potentially be used in a wide variety of workplaces.
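The abstract describes a classification stage (Deep Neural Network classifier, leave-subjects-out cross-validation) applied to features extracted from facial video and photoplethysmogram data. The following is a minimal sketch of that kind of evaluation, not the authors' actual pipeline: the feature matrix, label scheme, network size, and number of cross-validation splits are illustrative assumptions, using scikit-learn's MLPClassifier as a stand-in for the paper's deep network.

```python
# Sketch: subject-wise cross-validation of a neural-network classifier on
# pre-extracted features. All data below are placeholders; in the paper the
# features come from facial video and a wearable photoplethysmogram.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_subjects, windows_per_subject, n_features = 18, 40, 32   # assumed sizes
X = rng.normal(size=(n_subjects * windows_per_subject, n_features))  # placeholder features
y = rng.integers(0, 3, size=len(X))          # placeholder 0%/50%/100% MVC labels
groups = np.repeat(np.arange(n_subjects), windows_per_subject)  # subject ID per sample

# Leave 3 subjects out per split, mirroring the robustness check in the abstract.
cv = GroupShuffleSplit(n_splits=10, test_size=3, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)

accuracies = []
for train_idx, test_idx in cv.split(X, y, groups):
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean leave-3-subjects-out accuracy: {np.mean(accuracies):.2f}")
```

With real features in place of the random placeholders, the reported 86% leave-3-subjects-out accuracy corresponds to the mean test score across such splits.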

Bibliographic details

  • Source
    Ergonomics | 2020, No. 8 | pp. 1010-1026 | 17 pages
  • Author affiliations

    Purdue Univ, Sch Ind Engn, 315 N Grant St, W Lafayette, IN 47907, USA;

    Purdue Univ, Sch Ind Engn, 315 N Grant St, W Lafayette, IN 47907, USA;

    Purdue Univ, Dept Comp Sci, W Lafayette, IN 47907, USA;

    Purdue Univ, Sch Ind Engn, 315 N Grant St, W Lafayette, IN 47907, USA | Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907, USA;

    Purdue Univ, Sch Ind Engn, 315 N Grant St, W Lafayette, IN 47907, USA;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI); Chemical Abstracts (CA)
  • Original format: PDF
  • Language: English
  • Chinese Library Classification:
  • Keywords

    Computer vision; high force exertions; facial expressions; machine learning;


