On the Role of Structure in Learning for Robot Manipulation
September 20, 2018 @ 6:00 pm - 8:00 pm PDT
Speaker: Jeannette Bohg, Stanford University
Abstract: Recent approaches in robotics follow the insight that perception is facilitated by interaction with the environment. First, interaction creates a rich sensory signal that would otherwise not be present. Second, knowledge of the sensory dynamics upon interaction allows prediction and decision-making over a longer time horizon. To exploit these benefits of Interactive Perception for capable robotic manipulation, a robot requires both methods for processing rich sensory feedback and feedforward predictors of the effect of physical interaction. In the first part of this talk, I will present a method for motion-based segmentation of an unknown number of simultaneously moving objects. The underlying model estimates dense, per-pixel scene flow, which is then clustered in motion trajectory space. We show how this outperforms the state of the art in scene flow estimation and multi-object segmentation. In the second part, I will present a method for predicting the effect of physical interaction with objects in the environment. The underlying model combines an analytical physics model with a learned perception component. In extensive experiments, we show how this hybrid model outperforms purely learned models in terms of generalisation. In both projects, we found that introducing structure greatly reduces the amount of training data required, eases learning and enables extrapolation. Based on these findings, I will discuss the role of structure in learning for robot manipulation.
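To make the idea of clustering in motion trajectory space concrete, here is a minimal, hypothetical sketch (not the actual model from the talk): each pixel is represented by its flattened flow trajectory over several frames, and a simple k-means groups pixels that move together into the same segment. The synthetic "objects", the noise level, and the use of plain k-means are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch, not the speaker's method: segment pixels by clustering
# their motion trajectories. Each pixel has a trajectory of (dx, dy) flow
# vectors over T frames; pixels moving together end up in the same cluster.

rng = np.random.default_rng(0)
T = 5        # number of frames (assumed)
n_per = 50   # pixels per synthetic object (assumed)

# Two hypothetical rigid objects (one moving right, one moving up) plus a
# static background; trajectories are flattened to 2T-dimensional features.
obj1 = np.tile([1.0, 0.0], T) + rng.normal(0, 0.05, (n_per, 2 * T))
obj2 = np.tile([0.0, 1.0], T) + rng.normal(0, 0.05, (n_per, 2 * T))
bg = np.zeros(2 * T) + rng.normal(0, 0.05, (n_per, 2 * T))
X = np.vstack([obj1, obj2, bg])

def kmeans(X, k, iters=10):
    """Minimal k-means in trajectory space with a deterministic init."""
    centers = X[:: len(X) // k][:k].copy()  # spread initial centers over the data
    for _ in range(iters):
        # Distance of every trajectory to every center, then nearest assignment.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(X, k=3)
# Pixels from the same object share a label; the three groups get distinct labels.
```

In practice, the trajectory features would come from an estimated dense scene flow rather than synthetic data, and the number of clusters would not be known in advance, which is part of what makes segmenting an unknown number of objects hard.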
Biography: Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at MPI until September 2017 and remains affiliated as a guest researcher. Her research focuses on perception for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time and multi-modal, such that they can provide meaningful feedback for execution and learning. Before joining the Autonomous Motion lab in January 2012, Jeannette Bohg was a PhD student at the Computer Vision and Active Perception lab (CVAP) at KTH in Stockholm. She completed her thesis on multi-modal scene understanding for robotic grasping under the supervision of Prof. Danica Kragic. She studied at Chalmers in Gothenburg and at the Technical University in Dresden, where she received her Master in Art and Technology and her Diploma in Computer Science, respectively.