|dc.description.abstract||Autonomous mobile robots (AMRs) are defined by their ability to perform some tasks independently of direct human interaction. However, interaction with humans is necessary at some point. There are some situations in which humans can give instructions to robots most effectively with physical gestures. Examples include scenarios that require silence, such as covert military operations or hospital rooms, as well as loud assembly floors, underwater environments, or space.
Artificial neural networks (ANNs) have long been used for pattern recognition. If well-trained with a suitable dataset, an ANN can provide satisfactory results for a complex task.
ROS (Robot Operating System) is a mainstream software framework used in robotics research throughout the world. It streamlines the development of robots through code reuse.
This thesis describes a partial solution for helping robots understand human gestures by combining an ANN and ROS, with a Kinect sensor as the primary input. Five gestures are trained and then recognized when performed by people other than the trainer. The overall success rate of gesture recognition in this study is 80 percent, and some gestures are recognized with more than 90 percent success.||en_US