Creating a Virtual World and Navigating through It Using Gesture Recognition

Presentation at Elon Student Undergraduate Research Forum, Spring 2011

Dan Cresse (Dr. Shanon Duvall), Department of Computing Sciences

This research addresses the issues and complexities surrounding the recognition of gestures and the application of those gestures within a virtual environment. Virtual environments are collections of images that provide a sense of immersion, complemented by sensory stimuli. Gesture recognition is the process of interpreting a series of points on a screen as a meaningful gesture such as a letter, symbol, or direction. It relies on mathematical algorithms and can target many different human gestures, including facial expressions of emotion, hand gestures, and body language.
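To make the idea of "interpreting a series of points as a gesture" concrete, here is a minimal, hypothetical sketch: it classifies a mouse stroke as a directional gesture by comparing the stroke's net displacement against the screen axes. The function name, threshold, and direction labels are illustrative assumptions, not the algorithm developed in this project.

```python
import math

def classify_stroke(points):
    """points: list of (x, y) screen coordinates sampled while the mouse moves.
    Returns 'left', 'right', 'up', 'down', or None for an ambiguous stroke."""
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if math.hypot(dx, dy) < 10:          # too short to be a deliberate gesture
        return None
    if abs(dx) >= abs(dy):
        return 'right' if dx > 0 else 'left'
    return 'down' if dy > 0 else 'up'    # screen y grows downward

# Example: a roughly horizontal drag to the right
print(classify_stroke([(0, 0), (40, 3), (85, 5)]))  # right
```

Even this toy version shows why gestures are harder than button presses: a threshold must separate deliberate strokes from jitter, and every stroke differs slightly from the ideal template.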

Gesture recognition is a difficult field because, unlike a button press, gestures are hard to define precisely. Gestures follow patterns, but each one is different, just as each person's handwriting is unique. Gesture recognition is a relevant topic in computer science, as many major software and hardware companies have been moving toward motion controls and touch-screen-based hardware and applications.

The project aims to determine the difficulties that arise when interpreting a 2D (two-dimensional) gesture in a 3D (three-dimensional) space, as well as the computational complexities that develop in the move to 3D. We will present our algorithm for mouse-based gestures, implemented with the XNA development platform, which creates a three-dimensional virtual world in which the 2D gestures of the computer mouse are meaningful in 3D space. In our talk we will present our gesture recognition system, explain the process and methodology behind its development, and discuss the overall complexity of interpreting 2D gestures in a 3D space.
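One way to picture the 2D-to-3D mapping problem the abstract raises: a recognized screen-space gesture must be assigned a meaning in the scene's coordinate system, for example as a camera translation. The sketch below is a hedged illustration under assumed names and step sizes, not the project's actual XNA implementation.

```python
STEP = 1.0

# Map screen-space gestures onto the 3D scene: left/right slide along x,
# up/down move along z (depth), leaving y (height) untouched. This choice of
# mapping is itself an assumption -- deciding what a flat stroke means in
# three dimensions is exactly the interpretation problem described above.
GESTURE_TO_DELTA = {
    'left':  (-STEP, 0.0, 0.0),
    'right': ( STEP, 0.0, 0.0),
    'up':    ( 0.0,  0.0, -STEP),   # "up" on screen walks forward into the scene
    'down':  ( 0.0,  0.0,  STEP),
}

def move_camera(position, gesture):
    """position: (x, y, z) tuple; returns the new camera position."""
    dx, dy, dz = GESTURE_TO_DELTA.get(gesture, (0.0, 0.0, 0.0))
    x, y, z = position
    return (x + dx, y + dy, z + dz)

print(move_camera((0.0, 1.5, 0.0), 'up'))  # (0.0, 1.5, -1.0)
```

The sketch makes the core difficulty visible: the mouse supplies only two axes, so the third dimension of movement must come from a convention chosen by the designer rather than from the gesture itself.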