
IROS'2012 Workshop on Learning and Interaction in Haptic Robots

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),
Friday October 12, 2012, Vilamoura, Algarve, Portugal (Conference Room "Vega")

Organizers:

Motivation and Objectives:

Research on robot learning from demonstration has received great attention in the last decade, since it can serve as a useful methodology for intuitive robot programming, even by general users without robotics expertise. In the pioneering investigations in this field, demonstrations were provided either by teleoperating the robot or by recording the user performing the task with vision/motion sensors. Recent hardware and software developments towards compliant and tactile robots are changing this picture: new solutions now let the user physically interact with the robot to transfer or refine skills. This full-day workshop will focus on the new robot learning perspectives that this interaction modality offers.

Physical interaction in the context of robot learning is a young but promising research topic. It provides a natural interface for the kinesthetic transfer of skills to the robot, where the user can demonstrate or refine the task in the robot's environment while feeling its capabilities and limitations. With the development of compliant controllers, backdrivable motors and artificial skins, new perspectives in learning arise that exploit the natural teaching propensity of users, who are already familiar with social interaction mechanisms such as scaffolding, molding and kinesthetic teaching. In this workshop, experts in machine learning, physical human-robot interaction and compliant robot control will introduce their research along this direction, sharing their views from different research perspectives and discussing new challenges in this emerging field. Topics such as skill transfer, kinesthetic teaching interfaces, learning and prediction, compliant robot control, safe physical interaction, and haptic and tactile guidance will be covered.

Topics:

  • Skill transfer
  • Kinesthetic teaching
  • Haptic/tactile guidance
  • Learning interaction/impedance control
  • Learning by imitation
  • Physical human-robot interaction
  • Machine learning for robot control, adaptive robot control
  • Biologically inspired principles for learning interaction

Invited speakers:

Heni Ben Amor (Technische Universitaet Darmstadt, Germany)

In this talk I will present a set of methods that allow humans to provide robots with physical support during skill acquisition. Using the "Kinesthetic Bootstrapping" method, a human teacher can instruct a robot by manually moving all of the robot's joints to postures that approximate the intended movement. An automatic optimization phase then takes place, during which the robot learns a motor skill that still resembles the demonstrated movement but compensates for its likely imperfections. I will present an extension of the approach, called "Physical Interaction Learning", which improves the cooperation of humans and robots while they are working together to achieve a common goal. Especially in cooperative human-robot scenarios, robots need to be able to adapt their behavior rapidly to the feedback of the human partner. To this end, I will present an online learning approach for human-in-the-loop learning scenarios. Results on state-of-the-art android robots will also be presented.
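
As a hedged, editorial illustration of this "record, then optimize" recipe (not the speaker's implementation; the interpolation scheme, smoothness cost, random-search optimizer and all numbers below are assumptions), a minimal Python sketch:

    import numpy as np

    def interpolate(keyframes, n_steps=50):
        """Linearly interpolate demonstrated joint postures into a trajectory."""
        keyframes = np.asarray(keyframes, dtype=float)
        t_key = np.linspace(0.0, 1.0, len(keyframes))
        t = np.linspace(0.0, 1.0, n_steps)
        return np.column_stack(
            [np.interp(t, t_key, keyframes[:, j]) for j in range(keyframes.shape[1])])

    def smoothness_cost(traj):
        """Penalize jerky motion via summed squared second differences."""
        return np.sum(np.diff(traj, n=2, axis=0) ** 2)

    def refine(keyframes, iters=200, step=0.01, seed=0):
        """Random-search refinement of the demonstrated postures.
        A real system would also score task achievement, not only smoothness."""
        rng = np.random.default_rng(seed)
        best = np.asarray(keyframes, dtype=float)
        best_cost = smoothness_cost(interpolate(best))
        for _ in range(iters):
            cand = best + rng.normal(scale=step, size=best.shape)
            cand_cost = smoothness_cost(interpolate(cand))
            if cand_cost < best_cost:      # keep perturbations that move less jerkily
                best, best_cost = cand, cand_cost
        return best

    # Example: four demonstrated postures of a 3-joint arm (radians).
    demo = [[0.0, 0.2, 0.0], [0.5, 0.4, 0.1], [0.9, 0.1, 0.3], [1.2, 0.0, 0.2]]
    print(refine(demo))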

Aude Billard (EPFL, Switzerland)

Learning what is relevant in force and position control, and when to apply which
Tasks are rarely composed purely of force control or position control; they often require a mix of the two. Knowing when to apply which is not easy for the designer to handcraft. This talk will present our early work on discovering this automatically from kinesthetic demonstrations.
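
As a toy illustration of one way such a distinction could be discovered (an editorial sketch, not the speaker's algorithm): across repeated demonstrations, phases where the measured force is consistent but the position varies suggest force control, and vice versa.

    import numpy as np

    def label_phases(positions, forces, n_phases=4):
        """positions, forces: (n_demos, T) arrays from repeated demonstrations."""
        T = positions.shape[1]
        bounds = np.linspace(0, T, n_phases + 1, dtype=int)
        pos_norm = positions.var(axis=0).mean() + 1e-12   # overall spread, for scale
        frc_norm = forces.var(axis=0).mean() + 1e-12
        labels = []
        for k in range(n_phases):
            seg = slice(bounds[k], bounds[k + 1])
            pos_score = positions[:, seg].var(axis=0).mean() / pos_norm
            frc_score = forces[:, seg].var(axis=0).mean() / frc_norm
            # whichever signal is more consistent across demos is presumed controlled
            labels.append("position" if pos_score < frc_score else "force")
        return labels

    # Toy data: five demos, position-consistent early, force-consistent late.
    rng = np.random.default_rng(0)
    T = 100
    pos = np.stack([np.linspace(0, 1, T) +
                    np.r_[rng.normal(0, 0.01, T // 2), rng.normal(0, 0.2, T // 2)]
                    for _ in range(5)])
    frc = np.stack([np.ones(T) +
                    np.r_[rng.normal(0, 0.2, T // 2), rng.normal(0, 0.01, T // 2)]
                    for _ in range(5)])
    print(label_phases(pos, frc))   # expect ['position', 'position', 'force', 'force']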

Etienne Burdet (Imperial College London, UK)

Human-like learning in industrial robots
This talk will present new ramifications of a story that started ten years ago with pure neuroscience investigations and is now burgeoning with applications in interaction control for robots. By investigating the adaptation of arm movements to the unstable dynamics typical of tool use, we could observe how the human central nervous system learns to coordinate muscles to deal with unknown interactions. The computational model of this learning can predict how sensory information is used to modify muscle activation during the whole learning process, and has led to a novel robot controller able to interact with unknown environments and humans. Our recent applications demonstrate its versatility in various interaction tasks such as drilling, cutting and polishing, and its superior performance relative to traditional controllers.
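
A hedged sketch of the flavor of such an error-driven adaptation law, in which feedforward force and stiffness both grow with tracking error and relax otherwise (the specific law and all gains below are editorial assumptions, not the speaker's model):

    def simulate(disturbance, T=200, dt=0.01,
                 alpha=0.8, beta=2.0, gamma=0.02, k_min=1.0):
        """Unit point mass tracking the origin against an unknown force."""
        x, v = 0.0, 0.0          # position and velocity
        u_ff, k = 0.0, k_min     # feedforward force and stiffness
        for t in range(T):
            e = 0.0 - x                            # tracking error (target = origin)
            u = u_ff + k * e - 0.5 * v             # feedforward + spring + damping
            a = u + disturbance(t * dt)            # environment force is unknown
            v += a * dt
            x += v * dt
            u_ff += alpha * e * dt                 # slowly learn to cancel it
            k = max(k_min, k + (beta * abs(e) - gamma * k) * dt)  # stiffen on error
        return u_ff, k

    u_ff, k = simulate(lambda t: 2.0)              # constant 2 N push
    print("learned feedforward %.2f N, stiffness %.2f N/m" % (u_ff, k))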

Biographic sketch:
Dr. Etienne Burdet is Reader in the Department of Bioengineering at Imperial College London. He obtained an M.S. in Mathematics in 1990, an M.S. in Physics in 1991, and a Ph.D. in Robotics in 1996, all from ETH Zurich. He was a postdoctoral fellow with TE Milner (McGill University, Canada) and JE Colgate (Northwestern University, USA). Dr. Burdet's research lies at the interface of robotics and bioengineering, and his main interest is human-machine interaction. With his group, he uses an integrative approach combining neuroscience and robotics to i) investigate human motor control, and ii) design efficient assistive devices and virtual-reality-based training systems for neuro-rehabilitation and robot-aided surgery.

Abderrahmane Kheddar, Andre Crosnier (CNRS-AIST JRL, Japan / LIRMM, France)

Human-humanoid jointly transporting a beam: a haptic joint action case study
This talk reports on our recent developments in programming a humanoid robot (HRP-2) to jointly perform a beam transportation task with a human partner. First, we recall and assess, through human-human studies, that walking and manipulation are decoupled in a beam transportation task performed by a human dyad; reactive walking patterns can therefore be generated independently from the manipulation task. Then, in order for the robot to be proactive, it must anticipate human motion. Previous studies by colleagues show that a guess at the trajectories induced by a given task makes it possible to program anticipatory motions based on these trajectories. It is, however, difficult to obtain such trajectories because they require knowing reliable invariants. Nevertheless, from observations of human-human dyads we can extract task templates in terms of motion and even discrete events, using machine learning or other techniques. Once these task templates are obtained, we can use them to anticipate human motion and achieve proactive behaviors. The task template is implemented together with a finite state machine that switches between different primitives of the walking pattern generator. We show that the robot achieves the transportation of the beam together with a human operator without changing the controller, whether in a follower or a master mode.
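
As an editorial illustration of such switching logic (the primitive names, thresholds and hysteresis band are assumptions, not the actual HRP-2 controller), a minimal finite-state-machine sketch driven by the axial force sensed at the shared beam:

    ENGAGE = 5.0   # N needed to start walking (assumed)
    RELEASE = 2.0  # N below which walking stops (assumed hysteresis band)

    def step(state, f_axial):
        """One FSM transition driven by the axial force at the shared beam."""
        if state == "stand_still":
            if f_axial > ENGAGE:
                return "walk_forward"
            if f_axial < -ENGAGE:
                return "walk_backward"
        elif abs(f_axial) < RELEASE:               # partner stopped pulling/pushing
            return "stand_still"
        return state

    state = "stand_still"
    for f in [0.0, 6.2, 4.0, 3.0, 1.0, -8.5, -3.0, 0.3]:   # toy force profile
        state = step(state, f)
        print("force %+.1f N -> %s" % (f, state))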

Biographic sketch:
Abderrahmane KHEDDAR received the ingénieur degree from the Institut National d'Informatique (INI), Algiers, and the DEA (Master by research) and Ph.D. degrees in robotics from the University of Paris 6. He is presently Directeur de Recherche at CNRS and Director of the CNRS-AIST Joint Robotics Laboratory (JRL), UMI3218/CRT, Tsukuba, Japan. He also leads the "Interactive Digital Humans" (IDH) team at CNRS-UM2 LIRMM in Montpellier, France. His research interests include haptics, humanoids and, recently, brain-machine interfaces. He is a founding member and senior advisor of the IEEE/RAS chapter on haptics, serves on the editorial boards of the IEEE Transactions on Robotics and the Journal of Intelligent and Robotic Systems, and is a founding member of the IEEE Transactions on Haptics, on whose editorial board he served for three years (2007-2010). He chaired the 2006 EuroHaptics conference, its first edition in France, and has organized several workshops at major robotics conferences.

André CROSNIER received the Agrégation in Electrical Engineering from the École Normale Supérieure de Cachan and the Ph.D. degree in Robotics from the University of Montpellier. He is currently Professor at the University of Montpellier and a research staff member of the "Interactive Digital Humans" (IDH) team at UM2-CNRS LIRMM in Montpellier, France. His research interests include vision-based modeling, haptics and physical human-humanoid interaction.

Kazuhiro Kosuge (Tohoku University, Japan)

Coupled Dynamics in Physical Human-Robot Interaction During a Ball Room Dance

Dancing the waltz involves physical interaction between a male dancer and a female dancer in a well-structured situation. During the waltz, the male dancer leads the dance and the female dancer reads the lead to coordinate her motion with his. The waltz thus offers a good starting point for exploring physical human-robot interaction (pHRI). Our goal is to reproduce the female dancer's capabilities during the dance on a dance partner robot. This presentation focuses on the lower-level interaction, i.e., the "coupled dynamics", and covers the modeling, human state sensing and robot control involved in developing a cooperative female robot dancer. We model the human and the robot follower as two physically connected inverted pendulums. The human state is measured by two laser range sensors, while the measurement noise and bias are corrected by a Kalman filter. Several candidate robot controllers are discussed and evaluated in experiments.
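
As a hedged illustration of the sensing step (the motion model, noise parameters and toy readings below are editorial assumptions, not the actual system), a minimal constant-velocity Kalman filter fusing two noisy range measurements of the partner's position:

    import numpy as np

    dt = 0.02
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity human model
    H = np.array([[1.0, 0.0], [1.0, 0.0]])    # both range sensors observe position
    Q = np.diag([1e-4, 1e-3])                 # process noise (assumed)
    R = np.diag([4e-4, 9e-4])                 # per-sensor measurement noise (assumed)

    def kf_step(x, P, z):
        """One predict/update cycle with a stacked two-sensor measurement z."""
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (z - H @ x)                # update with both sensors at once
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.zeros(2), np.eye(2)              # [position, velocity] and covariance
    for z in ([0.51, 0.49], [0.53, 0.52], [0.56, 0.55]):   # toy readings (m)
        x, P = kf_step(x, P, np.array(z))
    print("position %.3f m, velocity %.3f m/s" % (x[0], x[1]))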

Fabio Dalla Libera (Osaka University, Japan)

Interpretation of spontaneously provided tactile instructions
Touch is an important means of communication among humans. Sports instructors and dance teachers often use touch to adjust students' postures in a very intuitive way. Tactile instruction thus appears to be a very appealing modality for developing humanoid robot motions as well. However, interpreting tactile instructions spontaneously provided by inexpert users turns out to be a complex task for artificial systems: the mapping between tactile instructions and motion modifications is non-linear, user dependent and context dependent. A proof-of-concept system for robot motion creation based on learning the meaning of tactile instructions will be introduced. The system is interesting for two reasons. Firstly, it shows the feasibility of using tactile instructions for motion development. Secondly, it can be used as a tool for studying the way humans intuitively use touch to communicate, which in turn will allow the development of better algorithms for predicting the meaning of tactile instructions. Results of pilot experiments are discussed, and a first set of features of tactile communication, obtained from the analysis of the collected data, is identified.
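
As a toy illustration of "learning the meaning" of touches (an editorial sketch, not the presented system; the touch features and corrections are made up), a nearest-neighbour mapping from touch features to posture corrections:

    import numpy as np

    class TouchToCorrection:
        """Remember (touch, posture-correction) pairs; predict by similarity."""
        def __init__(self):
            self.touches, self.corrections = [], []

        def add_example(self, touch, correction):
            self.touches.append(np.asarray(touch, dtype=float))
            self.corrections.append(np.asarray(correction, dtype=float))

        def predict(self, touch, k=3):
            """Average the corrections of the k most similar past touches."""
            X = np.stack(self.touches)
            d = np.linalg.norm(X - np.asarray(touch, dtype=float), axis=1)
            nearest = np.argsort(d)[:min(k, len(d))]
            return np.stack(self.corrections)[nearest].mean(axis=0)

    # Touch features: [location on back, push direction, intensity] (assumed).
    t2c = TouchToCorrection()
    t2c.add_example([0.0, +1.0, 0.5], [+0.10, 0.0])   # push forward -> lean forward
    t2c.add_example([0.0, -1.0, 0.5], [-0.10, 0.0])   # push backward -> lean back
    print(t2c.predict([0.0, +0.9, 0.4], k=1))

A real system would additionally have to make this mapping user and context dependent, which is precisely what makes the problem hard.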

Yoshihiko Nakamura (University of Tokyo, Japan)

Actuator and Human Model for Human-Robot Interaction
Physical interaction between human and robot is a critical problem for robotic applications. There are many key issues to be solved, including but not limited to: the development of force-sensitive actuators, sensory-motor models of humans, stability control of human-robot coupled systems, motion pattern generation for human-robot interaction, and interaction protocols and their control. This talk will cover the development of force-sensitive actuators for human-robot interaction and of musculoskeletal models of the human body for the study of human-robot interaction.

Pierre-Yves Oudeyer (INRIA Bordeaux, France)

A Robotic Platform for Scalable Life-Long Learning Experiments
A grand goal of developmental robotics is to build robots capable of learning novel sensorimotor and social skills continuously over extended periods of time, i.e., months and years. This implies a huge methodological challenge: the techniques elaborated for this aim should be evaluated in real life-long experiments. Yet, so far, the vast majority of robot learning experiments have been limited to a few hours, if not a few minutes. One of the reasons is that no experimental robotic platform has allowed for such long experiments. Ideally, such a platform should be robust and reconfigurable. But because it should allow for weakly constrained exploration and physical interaction with humans, it should also be safe (one can expect the learning robot to try wild movements when interacting with humans), cheap and easy to repair (breakage is unavoidable). Industrial robots are robust, but they are often not reconfigurable and are too dangerous to allow unconstrained exploration. Recent industrial-quality soft compliant robots are still too brittle and expensive for such experiments. Conversely, most low-cost platforms are not robust enough and do not provide the required compliance properties. In this talk, I will present a novel experimental platform which breaks this experimental barrier. It integrates and uses in an original manner various off-the-shelf hardware and software components, allowing for robust, precise, cheap, compliant and easily repairable robots. I will illustrate two instantiations of this platform: 1) the Acroban humanoid robot, allowing for whole-body intuitive physical interaction with human children (http://flowers.inria.fr/acroban.php); and 2) the Ergo-Robot experiment, which has been running curiosity-driven learning and human-robot interaction algorithms continuously for 5 months in a public exhibition space (http://flowers.inria.fr/ergo-robots.php).
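
As a hedged sketch of the curiosity-driven ingredient mentioned above (the region names, progress estimate and all parameters are editorial assumptions, not the Ergo-Robot code), a minimal learning-progress-driven exploration loop:

    import numpy as np

    rng = np.random.default_rng(1)
    errors = {r: [] for r in ["reach_left", "reach_right", "push"]}  # error history

    def learning_progress(hist, window=5):
        """Recent decrease in prediction error; zero until enough samples."""
        if len(hist) < 2 * window:
            return 0.0
        return float(np.mean(hist[-2 * window:-window]) - np.mean(hist[-window:]))

    def choose_region(epsilon=0.2):
        """Mostly pick the region where error is currently dropping fastest."""
        regions = list(errors)
        if rng.random() < epsilon:
            return regions[rng.integers(len(regions))]
        return max(regions, key=lambda r: learning_progress(errors[r]))

    # Toy loop: only "push" is learnable here (its error actually decays).
    decay = {"reach_left": 1.0, "reach_right": 1.0, "push": 0.97}
    for _ in range(300):
        r = choose_region()
        last = errors[r][-1] if errors[r] else 1.0
        errors[r].append(last * decay[r] + 0.01 * rng.random())
    print({r: len(h) for r, h in errors.items()})   # "push" should attract most trials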

Jan Peters (Technische Universitaet Darmstadt, Germany)

Machine Learning of Motor Skills for Robotics
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence and the cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. It involves generating a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands. We discuss learning on three different levels of abstraction: learning for accurate control is needed to execute movements, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent "hyperparameters" of these motor primitives allows learning complex tasks. We discuss task-appropriate learning approaches for imitation learning, model learning and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness and applicability to learning control on an anthropomorphic robot arm. These robot motor skills range from toy examples (e.g., paddling a ball, ball-in-a-cup) to playing robot table tennis against a human being.
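
As an illustration of the motor-primitive building block, here is a minimal dynamic movement primitive, a common parameterized motor-primitive representation in this line of work; the gains and basis-function heuristics below are textbook-style defaults, not the speaker's code:

    import numpy as np

    def dmp_rollout(w, g, y0=0.0, tau=1.0, dt=0.01, alpha=25.0, beta=6.25):
        """Integrate a 1-D discrete DMP with learned forcing weights w."""
        n = len(w)
        c = np.exp(-3.0 * np.linspace(0, 1, n))    # basis centres in phase space
        h = n ** 1.5 / c                           # widths (a common heuristic)
        y, yd, s = y0, 0.0, 1.0                    # position, velocity, phase
        Y = []
        while s > 0.01:
            psi = np.exp(-h * (s - c) ** 2)
            f = s * (g - y0) * (psi @ w) / (psi.sum() + 1e-10)   # forcing term
            ydd = alpha * (beta * (g - y) - yd) + f              # spring-damper + f
            yd += ydd * dt / tau
            y += yd * dt / tau
            s += -3.0 * s * dt / tau               # canonical system decay
            Y.append(y)
        return np.array(Y)

    print(dmp_rollout(np.zeros(10), g=1.0)[-1])    # converges near the goal g

Imitation learning fits the weights w to a demonstration; reinforcement learning can then improve them, which is one reading of the "hyperparameter" level above.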

Andrea Thomaz (Georgia Institute of Technology, USA)

An HRI perspective for kinesthetic teaching
In this talk I will give an overview of some of our recent work on robots learning skills via kinesthetic teaching from naive teachers. In an initial study we found that the traditional input of full skill trajectories is not ideal for naive users: for some types of skills it is easier to provide sparse keyframes representing the skill and learn from those. Additionally, the benefits of both inputs can be combined in a hybrid demonstration approach. We present results from user studies as well as a new algorithm, Keyframe-based Learning from Demonstration, that handles both full and sparse trajectories as learning input.
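
As a hedged reading of the unifying idea (an editorial sketch, not the published algorithm): reduce a full demonstration to keyframes so that both input types share one sparse representation, then replay by interpolating through them.

    import numpy as np

    def trajectory_to_keyframes(traj, n_key=6):
        """Keep the endpoints plus the points where the motion bends most."""
        traj = np.asarray(traj, dtype=float)
        bend = np.zeros(len(traj))
        bend[1:-1] = np.linalg.norm(np.diff(traj, n=2, axis=0), axis=1)
        idx = np.unique(np.r_[0, len(traj) - 1, np.argsort(bend)[-(n_key - 2):]])
        return traj[idx], idx / (len(traj) - 1)

    def replay(keyframes, times, n_steps=100):
        """Interpolate a dense trajectory through (keyframe, time) pairs."""
        t = np.linspace(0.0, 1.0, n_steps)
        return np.column_stack(
            [np.interp(t, times, keyframes[:, j]) for j in range(keyframes.shape[1])])

    # Either input ends up in the same sparse form:
    s = np.linspace(0.0, 1.0, 50)
    demo = np.column_stack([s, np.sin(3.0 * s)])   # a full 2-D demonstration
    keys, times = trajectory_to_keyframes(demo)    # ...reduced to keyframes
    dense = replay(keys, times)                    # ...and replayed densely
    print(keys.shape, dense.shape)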

Sethu Vijayakumar (University of Edinburgh, UK)

Exploiting Natural Dynamics through Optimal Variable Impedance Policies
Variable impedance modulation allows additional degrees of redundancy in planning and executing dynamic tasks. However, there are interesting machine learning challenges in computing optimal policies that exploit the natural dynamics: finding the right representation of the dynamics, learning and adapting plant models from data and, of course, the spatio-temporal optimization of this (expanded) variable impedance policy space. I will present our learning and optimization framework and some recent results on its application to point-to-point tasks, throwing, brachiation and VIA bipedal walking.
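
As an editorial sketch of executing such a policy (the torque law is the standard impedance form, but the damping heuristic, stiffness schedule and all numbers are assumptions, not the speaker's framework):

    import numpy as np

    def impedance_torque(q, qd, q_des, qd_des, K):
        """tau = K (q_des - q) + D (qd_des - qd), damping tied to stiffness."""
        D = 2.0 * np.sqrt(K)                   # critical-damping heuristic (assumed)
        return K * (q_des - q) + D * (qd_des - qd)

    # A made-up stiffness schedule for one joint: stiff early for accuracy,
    # compliant near release, as one might want for a throwing motion.
    for t in np.linspace(0.0, 1.0, 5):
        K = 80.0 * np.exp(-4.0 * t) + 5.0
        tau = impedance_torque(q=0.1, qd=0.0, q_des=0.0, qd_des=0.0, K=K)
        print("t=%.2f s  K=%5.1f N*m/rad  tau=%+.2f N*m" % (t, K, tau))

Optimizing the schedule K(t) jointly with the motion is what makes the policy space "expanded" relative to pure trajectory planning.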

Biographic sketch:
Sethu Vijayakumar is a Professor of Robotics at the University of Edinburgh, where he directs the Institute for Perception, Action and Behaviour within the School of Informatics. Sethu received his PhD from Tokyo, did postdoctoral work at the RIKEN Brain Science Institute, Japan, and was a research faculty member at USC, Los Angeles, before moving to Edinburgh in 2003. His research interests span machine learning, robotics, computational motor control and optimisation techniques. He holds the prestigious Microsoft Research - Royal Academy of Engineering Senior Research Fellowship in Learning Robotics.

Poster presentations:

Alexander Moertl, Martin Lawitzky, Jose Ramon Medina, Dongheui Lee and Sandra Hirche (2012) "Design Concepts for Physical Human-Robot Cooperation".

Pedro Cruz, Vítor Santos and Filipe Silva (2012) "Tele-Kinesthetic Teaching of a Humanoid Robot with Haptic Data Acquisition".

Klas Kronander and Aude Billard (2012) "Learning Joint Stiffness Variations from Demonstration".

Diego Pardo, Leonel Rozo, Guillem Alenya and Carme Torras (2012) "Dynamically Consistent Probabilistic Model for Robot Motion Learning".

Julie Dumora, Franck Geffard, Catherine Bidard and Philippe Fraisse (2012) "Towards a new approach of haptic human-robot interaction for collaborative manipulation".

Min Li, Lakmal D. Seneviratne, Prokar Dasgupta and Kaspar Althoefer (2012) "Virtual Palpation System".

Mustafa Suphi Erden (2012) "Physical Human-Robot Interaction without Force Sensors".

Programme:

08:30-09:00 Introduction by the organizers
09:00-09:30 Aude Billard (EPFL, Switzerland)
09:30-10:00 Jan Peters (Technische Universitaet Darmstadt, Germany)
10:00-10:30 Andrea Thomaz (Georgia Institute of Technology, USA)
10:30-11:15 Posters session with coffee break
11:15-11:45 Yoshihiko Nakamura (University of Tokyo, Japan)
11:45-12:15 Pierre-Yves Oudeyer (INRIA Bordeaux, France)
12:15-14:00 Lunch
14:00-14:30 Etienne Burdet (Imperial College London, UK)
14:30-15:00 Sethu Vijayakumar (University of Edinburgh, UK)
15:00-15:30 Abderrahmane Kheddar, Andre Crosnier (CNRS-AIST JRL, Japan / LIRMM, France)
15:30-16:00 Fabio Dalla Libera (Osaka University, Japan)
16:00-16:30 Posters session with coffee break
16:30-17:00 Kazuhiro Kosuge (Tohoku University, Japan)
17:00-17:30 Heni Ben Amor (Technische Universitaet Darmstadt, Germany)
17:30-18:00 Discussion and wrap-up


Support:

This workshop is supported by the FP7 European Project SAPHARI "Safe and Autonomous Physical Human-Aware Robot Interaction".
