In physical human-robot collaboration, robots are currently limited in their ability to observe and adapt to human dynamics, which results in inefficient collaboration and unergonomic interaction. SWITCH will address this shortcoming by developing methods that can efficiently observe human dynamics in real time and learn anticipatory models from demonstration. First, we will collect several datasets of force and motion capture data for a human-human standing-up assistance task. We will then develop models that learn the behaviors of the two agents (assistant and assisted) in a probabilistic fashion. These models will be exploited for the on-line control of robots with reactive and anticipative capabilities.
Effective physical human-robot collaboration requires robots to be aware of what the human partner is doing and will do next, both in terms of motion and of forces exchanged. Physical collaboration requires anticipation by both partners; anticipation of a partner requires models and observation; and models and observation require new technologies. While current state-of-the-art technologies make it possible to estimate motion, measuring and predicting the exchanged forces on-line remains an open and challenging problem. In SWITCH, we will exploit a fully sensorized environment and accompanying computational tools, enabling us to measure the interaction forces between robots, humans and the environment.
We will concentrate on the specific task of assisting a person to stand up, considering three scenarios of increasing complexity, from purely reactive behaviors to anticipative and personalized behaviors. One novelty of the approach is that learning will be achieved by switching the roles of the assistant agent and the assisted agent. We believe that introducing such a strategy in learning from demonstration (LfD) will speed up the learning process by providing a richer set of demonstrations with personalization capability (with the caregiver providing appropriate demonstrations for the person to be assisted). Recordings of human-human, human-robot and robot-human behaviors will also allow us to collect a wider range of sensory information (force and motion). As an encoding strategy, we will consider a novel holistic approach that encodes the behaviors of the two agents in a joint model, which will be exploited within regression and model predictive control strategies for reaction to and anticipation of the agent behaviors.
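To illustrate the kind of joint encoding and conditioning we have in mind, the sketch below fits a Gaussian mixture model over the concatenated observations of the two agents and conditions it on the assisted person's current state to predict the assistant's expected motion and forces (Gaussian mixture regression). The choice of a Gaussian mixture, the dimensionalities and the variable names are illustrative assumptions rather than the final SWITCH design.

```python
# Minimal sketch (illustrative assumptions only, not the final SWITCH design):
# a joint Gaussian mixture model over the concatenated assisted/assistant
# observations, conditioned on the assisted person's state to predict the
# assistant's response (Gaussian mixture regression).
import numpy as np
from sklearn.mixture import GaussianMixture

D_IN, D_OUT = 6, 6      # assumed dimensions: assisted person's state, assistant's state
N_DEMOS, T = 20, 100    # assumed number of demonstrations and time steps per demonstration

# Placeholder demonstration data: each row is one [x_assisted, x_assistant] sample.
demos = np.random.randn(N_DEMOS * T, D_IN + D_OUT)

# Fit the joint model over both agents' behaviors.
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(demos)

def gmr_predict(x_in):
    """Condition the joint GMM on the assisted person's state x_in (shape (D_IN,))
    and return the expected assistant state (shape (D_OUT,))."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    resp = np.empty(len(weights))
    cond_means = np.empty((len(weights), D_OUT))
    for k in range(len(weights)):
        mu_i, mu_o = means[k, :D_IN], means[k, D_IN:]
        S_ii = covs[k, :D_IN, :D_IN]
        S_oi = covs[k, D_IN:, :D_IN]
        diff = x_in - mu_i
        inv_S_ii = np.linalg.inv(S_ii)
        # Responsibility of component k given the observed input.
        lik = np.exp(-0.5 * diff @ inv_S_ii @ diff) / np.sqrt(
            np.linalg.det(2.0 * np.pi * S_ii))
        resp[k] = weights[k] * lik
        # Conditional mean of the assistant's state for this component.
        cond_means[k] = mu_o + S_oi @ inv_S_ii @ diff
    resp /= resp.sum()
    return resp @ cond_means

# Example: predict the assistant's expected motion/force for one observation.
x_obs = np.zeros(D_IN)
print(gmr_predict(x_obs))
```

In such a scheme, the conditional expectation provides a reactive policy, while rolling the conditioning forward over a short horizon could supply the reference for a model predictive controller; the final encoding and control strategies will be determined during the project.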
From a broader perspective, the research proposed in SWITCH will develop technologies for assistive robots to coexist and physically interact with humans. These technologies will enable robots to be more aware of, and to care about, their human partners, with potential impact on assistance capabilities in healthcare and household environments. The long-term goal is to enable humanoid robots to physically interact and work efficiently with humans. While humanoid robots are already capable of performing several dynamic tasks, they currently have a blind spot in physical human-robot collaboration: their limitations in observing and adapting to human dynamics lead to inefficient collaboration and interaction. The approaches and techniques that we propose to investigate in SWITCH will allow us to advance the current state of robot control and learning techniques in assistive humanoid robots, enabling robust, goal-directed whole-body motion execution involving physical contact with the environment and humans.