Most human-computer interaction interfaces designed today require explicit instructions from the user in the form of keyboard taps or mouse clicks. As the complexity of these devices increases, the sheer number of such instructions can easily disrupt, distract, and overwhelm users. This paper proposes a novel method for recognizing hand gestures for human-computer interaction using computer vision and image processing techniques. The proposed method can replace the devices (e.g., keyboard or mouse) normally needed to interact with a personal computer. It uses a commercial depth + RGB camera, the Senz3D, which is inexpensive and easy to obtain compared to other depth cameras. The method analyzes 3D data in real time and applies a set of classification rules that map the number of convexity defects to gesture classes. This yields real-time performance and eliminates the need for training data. The proposed method achieves commendable performance with very low processor utilization.
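The core idea of counting convexity defects on a hand contour and mapping the count to a gesture class can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the depth-camera segmentation stage is omitted, the hand contour below is a synthetic three-finger silhouette, and the "n defects implies n + 1 extended fingers" rule is a common heuristic assumed here, not the paper's exact rule set.

```python
def _cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns strict hull vertices (CCW)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def _depth(p, a, b):
    """Perpendicular distance from contour point p to the hull chord a-b."""
    den = ((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5
    return abs(_cross(a, b, p)) / den if den else 0.0

def count_defects(contour, min_depth=20.0):
    """Count convexity defects deeper than min_depth (one per hull edge)."""
    hull = set(convex_hull(contour))
    hull_idx = [i for i, p in enumerate(contour) if p in hull]
    n, defects = len(contour), 0
    for a_i, b_i in zip(hull_idx, hull_idx[1:] + hull_idx[:1]):
        a, b = contour[a_i], contour[b_i]
        # Deepest contour point strictly between two adjacent hull vertices.
        i, depth = (a_i + 1) % n, 0.0
        while i != b_i:
            depth = max(depth, _depth(contour[i], a, b))
            i = (i + 1) % n
        if depth > min_depth:
            defects += 1
    return defects

def classify(contour):
    """Assumed rule: n valleys between fingers -> n + 1 extended fingers."""
    d = count_defects(contour)
    return "fist" if d == 0 else f"{d + 1} fingers"

# Synthetic three-finger contour (CCW): three tips, two deep valleys.
hand = [(0, 0), (100, 0), (100, 60), (90, 140), (70, 60),
        (60, 150), (40, 60), (30, 140), (0, 60)]
print(classify(hand))  # -> 3 fingers
```

A rule-based mapping like this is what lets the method run in real time with no training data: counting defects is a cheap geometric operation, in contrast to learned classifiers that need labeled examples.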