Please use this identifier to cite or link to this item:
http://hdl.handle.net/10266/3426
Title: | Real Time Hand Gesture Recognition Based Interface for Microsoft Word Document Handling |
Authors: | Yadav, Kapil |
Supervisor: | Bhattacharya, Jhilik |
Keywords: | Gesture;HCI;Microsoft Word;CSE;Computer Science |
Issue Date: | 28-Jul-2015 |
Abstract: | The development of new hardware for man-machine interfaces (MMI) proceeds in tandem with advances in the software algorithms that interpret and process the resulting data. Technology has thus leapt from the swept-frequency capacitive sensing used in conventional touch screens to interactive hardware such as large displays, flexible displays, and wearable displays for mixed reality. Current applications of MMI include smart homes, collaborative working environments, advanced information visualization, and many more. The interfacing parameters range from multimodal interaction (touch, speech, and gesture) to physiological signals such as ECG, EOG, heart rate, eye blinking, and facial expression. Of these, gesture-based HCI is popular for two main reasons. First, the sensor for gesture input (a camera) is readily available and easily attached to laptops or desktops, unlike sensors for ECG or heart-rate acquisition. Second, camera data are easy to use and process. This thesis discusses a gesture-driven interface for Microsoft Word document handling; the approach can also be applied to other applications such as PowerPoint or a PDF reader. The main idea is to control the document from a distance without using a keyboard or mouse. This is beneficial when (i) the user is at some distance from the system, (ii) the user's hands are dirty and they do not want to touch the system, (iii) the mouse is temporarily out of order, or (iv) one simply wants to increase the functionality of the system by providing additional interfaces such as voice and gesture, besides touch. The system was developed using a two-state discrete temporal model for gestures that works with distinct poses. The model fuses the state information with individual pose recognition to activate the interfacing mechanism. The experimental results show that the model achieves both accurate gesture recognition and prompt response. 
Different feature extraction techniques, such as Gabor, wavelet, and SURF, are tested on the system, and a decision fusion approach for these features is proposed. Gesture recognition generally requires a large offline training dataset to make the system robust to changes in skin color, illumination, and hand pose. This thesis introduces a color and shape model such that no explicit training set is required for real-time gesture recognition. The response time, which varies between 2 and 2.5 seconds, could be further improved by implementing the feature detection steps in a VC++ environment instead of Matlab. |
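The two ideas summarized in the abstract, a two-state temporal model that only fires a command after an activation pose, and majority-vote decision fusion over the three feature channels, can be illustrated with a minimal sketch. This is not the thesis's implementation; the pose names, the `fuse_decisions` helper, and the pose-to-command mapping are all illustrative assumptions.

```python
# Hypothetical sketch of a two-state discrete temporal gesture model with
# decision fusion, assuming per-frame pose labels from three feature
# channels (Gabor, wavelet, SURF). All names here are illustrative.
from collections import Counter


def fuse_decisions(gabor_label, wavelet_label, surf_label):
    """Majority-vote fusion of the three per-channel pose decisions.
    Returns the winning pose label, or None when there is no majority."""
    label, count = Counter([gabor_label, wavelet_label, surf_label]).most_common(1)[0]
    return label if count >= 2 else None


class TwoStateGestureModel:
    """State 0: waiting for the activation pose.
    State 1: activation seen; the next distinct pose triggers a command,
    after which the model returns to state 0 and must be re-armed."""

    ACTIVATE = "open_palm"  # assumed activation pose

    def __init__(self, commands):
        self.state = 0
        self.commands = commands  # pose label -> document command

    def step(self, pose):
        if pose is None:          # rejected frame (no fusion majority)
            return None
        if self.state == 0:
            if pose == self.ACTIVATE:
                self.state = 1    # armed
            return None
        if pose != self.ACTIVATE: # state 1: distinct pose fires a command
            self.state = 0
            return self.commands.get(pose)
        return None


# Usage: two fused frames, "open_palm" arms the model, "fist" then fires.
model = TwoStateGestureModel({"fist": "scroll_down", "two_fingers": "zoom_in"})
labels = [
    fuse_decisions("open_palm", "open_palm", "fist"),  # majority: open_palm
    fuse_decisions("fist", "fist", "two_fingers"),     # majority: fist
]
actions = [model.step(p) for p in labels]
# actions -> [None, "scroll_down"]
```

Coupling the state machine with pose recognition in this way is what gives the abstract's model its robustness: an isolated pose seen by chance cannot trigger a command, because the activation state must be entered first.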
Description: | M.E. (Software Engineering) |
URI: | http://hdl.handle.net/10266/3426 |
Appears in Collections: | Masters Theses@CSED |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.