Vision-Based Gestural Guidance Interface for Mobile Robotic Platforms
The system controls the motion of a mobile platform through a set of predefined static and dynamic hand gestures inspired by aircraft marshalling signals.
Images captured by an onboard color camera are processed at video rate to track the operator's head and hands. A fuzzy-logic controller adjusts the camera's pan, tilt, and zoom to keep the operator's head centered and properly sized in the image plane.
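To make the control scheme concrete, the following is a minimal sketch of a Mamdani-style fuzzy controller for one axis (pan): the head's horizontal offset from the image center is fuzzified over three linguistic terms, a small rule base maps each term to a pan velocity, and the output is obtained by centroid defuzzification. The membership functions, rule base, and singleton centroids are illustrative assumptions, not the authors' published design; tilt and zoom would be handled by analogous controllers driven by vertical offset and apparent head size.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pan(offset):
    """Map the head's horizontal offset in [-1, 1] (0 = image center)
    to a normalized pan velocity in [-1, 1] (positive = pan right).
    Membership shapes and centroids below are assumed values."""
    # Fuzzification: degree to which the head is left / centered / right.
    mu = {
        "left":   tri(offset, -2.0, -1.0, 0.0),
        "center": tri(offset, -0.5,  0.0, 0.5),
        "right":  tri(offset,  0.0,  1.0, 2.0),
    }
    # Rule base: IF head is left THEN pan left, IF centered THEN hold, etc.
    # Each output term is represented by a singleton centroid.
    centroids = {"left": -1.0, "center": 0.0, "right": 1.0}
    # Centroid (weighted-average) defuzzification.
    num = sum(mu[term] * centroids[term] for term in mu)
    den = sum(mu.values())
    return num / den if den > 0.0 else 0.0
```

Because the output varies smoothly with the offset, the camera accelerates gently rather than snapping toward the target, which suits continuous head tracking at video rate.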
Gestural commands are defined as two-hand motion patterns whose features are fed, at video rate, to a trained neural network. A command is considered recognized once the classifier has produced a series of consistent interpretations; the corresponding motion-modifying command is then issued in a way that preserves the coherence and smoothness of the platform's motion.
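The "series of consistent interpretations" criterion can be sketched as a temporal filter over the per-frame classifier output: a command is accepted only after the same label has been produced for a number of consecutive frames, which suppresses transient misclassifications. The class name, the threshold value, and the reset-on-disagreement policy are illustrative assumptions, not the paper's exact scheme.

```python
class CommandFilter:
    """Accept a gestural command only after `required` consecutive
    identical per-frame classifications (assumed consistency rule)."""

    def __init__(self, required=5):
        self.required = required  # consecutive frames needed (assumed value)
        self.last = None          # most recent per-frame label
        self.count = 0            # length of the current run of identical labels

    def update(self, label):
        """Feed one per-frame classifier label; return the recognized
        command on the frame its run reaches `required`, else None."""
        if label == self.last:
            self.count += 1
        else:
            # Any disagreement resets the run, so spurious frames
            # cannot accumulate toward recognition.
            self.last, self.count = label, 1
        if label is not None and self.count == self.required:
            return label
        return None
```

Once a command passes the filter, the platform's current motion parameters would be ramped toward the new setpoint rather than switched abruptly, consistent with the smoothness requirement stated above.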
V. Paquin and P. Cohen, "A Vision-Based Gestural Guidance Interface for Mobile Robotic Platforms," Proceedings of the International Workshop on Human-Computer Interaction, Prague, Czech Republic, May 16, 2004, pp. 39-47. Berlin: Springer, Lecture Notes in Computer Science, vol. 3058.