Effective human interaction is necessary for robots in social roles such as educational assistants, exercise coaches, and team members in collaborative work. However, such roles require robots to understand and respond appropriately to the behavioral, emotional, and mental states of their human counterparts. In particular, social robots using natural-language-based interactions with people must deal with the complexity of human language, ranging from low-level algorithmic processing details (e.g., the incrementality of information integration) to high-level pragmatic principles (e.g., etiquette for how to formulate and respond to social cues). Additional complications arise from other influences, including physical distance, social role, and the nature and purpose of the interaction. We approach natural language interaction between humans and robots by studying both interactants, improving communication through mechanisms that integrate natural language generation with contextual information about the human interactants. Specifically, we present two systems that give social robots the capacity to (1) generate speech acts (based on the sociolinguistic underpinnings of human-human language interaction) and (2) understand contextual cues (based on the evaluation of both linguistic and socio-neurophysiological output of human interactants).
Human-robot interaction, natural language, neurophysiology, humanoids, politeness theory
Strait, M., Canning, C., and Scheutz, M. (2014). Investigating the effects of robot communication strategies in advice-giving situations based on robot appearance, interaction modality and distance. In Human-Robot Interaction (HRI), ACM, pp. 479-486.
Briggs, G., and Scheutz, M. (2013). A hybrid architectural approach to understanding and appropriately generating indirect speech acts. In AAAI, pp. 1213-1219.