IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops

Transfer Learning for Classifying Single Hand Gestures on Comprehensive Bharatanatyam Mudra Dataset



Abstract

For any dance form, classical or folk, visual expressions, namely facial expressions and hand gestures, play a key role in conveying the storyline of the accompanying music to the audience. Bharatanatyam, a classical dance form that originated in the southern states of India, is on the verge of being completely automated, partly due to an acute dearth of qualified and dedicated teachers/gurus. In an effort to speed up this automation process while preserving the cultural heritage, we identify and classify single hand gestures (mudras/hastas) against their true labels using two variations of convolutional neural networks (CNNs), demonstrating the effectiveness of transfer learning irrespective of the domain difference between the pre-training and training datasets. This work is primarily aimed at 1) building a novel dataset of 2D single hand gestures belonging to 27 classes, collected from the Google search engine (Google Images), YouTube videos (dynamic, with backgrounds included), and professional artists under staged environment constraints (plain backgrounds); 2) exploring the effectiveness of CNNs in identifying and classifying the single hand gestures by optimizing the hyperparameters; and 3) evaluating the impact of transfer learning and of double transfer learning, a novel concept explored in this paper for achieving higher classification accuracy.
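The abstract does not specify the backbone network, the hyperparameters, or the exact procedure behind "double transfer learning". The sketch below is only a minimal illustration of the general idea, assuming an ImageNet-pretrained ResNet-18 and interpreting double transfer learning as two sequential fine-tuning stages: first on an intermediate hand-gesture dataset, then on the 27-class mudra dataset. All dataset paths, class counts for the intermediate stage, and training settings are hypothetical.

```python
# Minimal transfer-learning sketch (PyTorch), not the authors' implementation.
# Assumes an ImageNet-pretrained ResNet-18; "double transfer learning" is
# interpreted here as two sequential fine-tuning stages.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_MUDRA_CLASSES = 27  # single hand gesture classes described in the paper

def make_loader(root, batch_size=32):
    # Standard ImageNet-style preprocessing for the pretrained backbone.
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    return DataLoader(datasets.ImageFolder(root, tfm),
                      batch_size=batch_size, shuffle=True)

def fine_tune(model, loader, num_classes, epochs=5, lr=1e-4, device="cpu"):
    # Replace the classifier head for the new label set, then fine-tune end to end.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model

# Stage 0: start from ImageNet-pretrained weights (ordinary transfer learning).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Stage 1 (hypothetical intermediate domain, e.g. a generic hand-gesture set):
# intermediate_loader = make_loader("data/intermediate_gestures")
# model = fine_tune(model, intermediate_loader, num_classes=10)

# Stage 2: fine-tune on the 27-class Bharatanatyam mudra dataset:
# mudra_loader = make_loader("data/mudras")
# model = fine_tune(model, mudra_loader, num_classes=NUM_MUDRA_CLASSES)
```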
