Learning from Demonstrations for High-Level Robotics Tasks


Organizers: Ankur Handa, Feryal Behbahani, Arunkumar Byravan, James Davidson and Dieter Fox

Website: https://sites.google.com/view/learningfromdemonstrations/

Many real-world tasks require robots to solve complex decision-making problems and to perform dexterous low-level control for seamless interaction with the surrounding environment. Learning from Demonstrations (LfD) can greatly reduce the difficulty of learning in such settings by making use of expert demonstrations. These demonstrations convey near-optimal behaviours for the task at hand and provide informed guidance to the learning process, so the learner does not have to start from scratch. LfD has long been studied in robotics, neuroscience, behavioural psychology and cognitive science, and has recently seen a resurgence in robotics, particularly with the advent of deep learning techniques.

In this workshop, we will cover various techniques for LfD and invite discussion on the possible future of LfD in the context of robotics, especially for solving long-horizon tasks and tasks that require hierarchical decision making from multi-modal input (e.g. visual, haptic, language and auditory). We plan to invite well-known researchers in machine learning, cognitive science and robotics with the aim of encouraging collaboration and sharing new ideas across this multidisciplinary field. Topics will focus on, but are not limited to, inverse optimal control, inverse reinforcement learning, demonstrations within reinforcement learning, and LfD with function approximators such as neural networks.