Implementing Gestures in Robots Assists with Communication

Enhanced human-robot communication is not such a distant possibility anymore. (Source: Pixabay)

Researchers in Bristol have found that they can improve human-robot communication by programming robots to perform hand gestures.

Paul Bremner, a research associate at the Bristol Robotics Laboratory at the University of the West of England, and Ute Leonards of the University of Bristol published a study examining how humanizing a robot affects the robot-human experience, based on the hypothesis that multi-modal communication conveys information more effectively. The study tested whether participants could recognize a set of pre-established hand gestures that were first performed by human actors and then reproduced by a robot.

The team developed a series of “iconic” gestures for the robot to execute: gestures with a distinct meaning that can parallel speech or supplement it. The authors describe this pairing of gesture and speech as multi-modal, stating that “multi-modal communication can be said to be more effective and efficient at conveying information” (1).

In their study, the researchers had twenty-two participants view a series of actions performed through speech only, gestures only, or “iconic” gestures combined with speech (1). The NAO robot was driven by a tele-operation system in which motion-capture data, passed between software communication nodes, re-created the operator’s movements on the robot. Microsoft’s Kinect sensor tracked the operator’s joint and hand motions so that the gestures could be refined (2).
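As a rough sketch of how this kind of pipeline can work (an illustration, not the authors’ actual implementation), the Python snippet below maps captured operator joint angles onto NAO’s left-arm joints through the NAOqi ALMotion API. The robot address, the example frame, and the joint limits are all assumptions made for illustration; the limits are approximate, and a real system would stream frames from the Kinect skeleton tracker.

```python
# Illustrative tele-operation loop for streaming captured arm poses to NAO.
# Assumptions: the robot address, joint limits, and example frame below are
# invented for demonstration; consult the NAO documentation for exact limits.
from naoqi import ALProxy  # NAOqi Python SDK

NAO_IP, NAO_PORT = "nao.local", 9559  # assumed robot address

# Left-arm joint names as exposed by ALMotion.
JOINTS = ["LShoulderPitch", "LShoulderRoll", "LElbowYaw", "LElbowRoll"]

# Approximate joint limits in radians (illustrative values only).
LIMITS = {
    "LShoulderPitch": (-2.0, 2.0),
    "LShoulderRoll": (-0.3, 1.3),
    "LElbowYaw": (-2.0, 2.0),
    "LElbowRoll": (-1.5, -0.03),
}

def clamp(joint, angle):
    """Keep a commanded angle inside the robot's mechanical range."""
    lo, hi = LIMITS[joint]
    return max(lo, min(hi, angle))

def stream_gesture(motion, frames, speed=0.3):
    """Send each captured frame (dict: joint name -> radians) to the robot."""
    for frame in frames:
        angles = [clamp(j, frame[j]) for j in JOINTS]
        motion.setAngles(JOINTS, angles, speed)  # non-blocking joint command

motion = ALProxy("ALMotion", NAO_IP, NAO_PORT)

# One hand-written example frame; a real system would stream these
# continuously from the Kinect skeleton tracker.
example_frames = [
    {"LShoulderPitch": 0.4, "LShoulderRoll": 0.2,
     "LElbowYaw": -1.0, "LElbowRoll": -0.8},
]
stream_gesture(motion, example_frames)
```

Clamping commanded angles matters here because, as noted below, the NAO’s arm has a narrower mechanical range than a human’s.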

In selecting gestures to show participants, the authors ensured that each was simple enough to recognize without accompanying speech, yet could still be performed by a NAO robot with limited mobility: the robot has only three fingers per hand, fewer degrees of freedom in its wrist than a human, and a restricted range of elbow movement (1). Common gestures that depend on specific hand shapes therefore had to be excluded so that the robot’s versions remained interpretable.

Actors were filmed performing the same gestures in the same setups designed for the robot, and these recordings were shown to participants. Five different versions of the gesture-only and speech-plus-gesture stimuli were shown, with each presentation capped at five seconds. The human performers’ faces were covered so that facial expressions could not influence how the hand gestures were interpreted.

Researchers found that participants identified the hand gestures performed by the robot about as accurately as those performed by human actors. A McNemar test showed a slightly higher recognition rate for the human performances, but the difference was not statistically significant.
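For readers unfamiliar with it, McNemar’s test compares paired binary outcomes, here whether a given gesture was recognized correctly in the human condition versus the robot condition. The sketch below runs the test with statsmodels on invented counts; the numbers are not the study’s data.

```python
# McNemar's test on paired recognition outcomes (illustrative counts only;
# these are NOT the counts reported in the study).
from statsmodels.stats.contingency_tables import mcnemar

# Rows: human-actor trial correct / incorrect.
# Columns: matched robot trial correct / incorrect.
table = [
    [50, 12],  # human correct:   robot correct, robot incorrect
    [7, 11],   # human incorrect: robot correct, robot incorrect
]

# Only the discordant cells (12 and 7) drive the statistic; the exact
# binomial form is appropriate for small discordant counts.
result = mcnemar(table, exact=True)
print("statistic:", result.statistic)
print("p-value:", result.pvalue)  # p > 0.05 -> no significant difference
```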

Ultimately, the research opens the door to further study of how humans habitually use multi-modal communication, a question of growing importance to the robotics industry. It also offers a useful foundation for cross-cultural work, since gestures vary between cultures and carry distinct meanings in everyday interactions.

References:

1. Bremner, P., & Leonards, U. (2016, February 17). Iconic Gestures for Robot Avatars, Recognition and Integration with Speech. Frontiers in Psychology, 7, 183. doi: 10.3389/fpsyg.2016.00183

2. Frontiers. (2016, April 4). Gestures improve communication, even with robots: Robot avatars programmed to talk with their hands to be understood better. ScienceDaily. Retrieved from www.sciencedaily.com/releases/2016/04/160404111255.htm
