A Novel Whisker Sensor Used for 3D Contact Point Determination and Contour Extraction
We developed a novel whisker-follicle sensor that measures three mechanical signals at the whisker base. The first two signals are closely related to the two bending moments, and the third is an approximation to the axial force. Previous simulation studies have shown that these three signals are sufficient to determine the three-dimensional (3D) location at which the whisker makes contact with an object. Here we demonstrate hardware implementation of 3D contact point determination and then use continuous sweeps of the whisker to show proof-of-principle 3D contour extraction. We begin by using simulations to confirm the uniqueness of the mapping between the mechanical signals at the whisker base and the 3D contact point location for the specific dimensions of the hardware whisker. Multi-output random forest regression is then used to predict the contact point locations of objects based on observed mechanical signals. When calibrated to the simulated data, signals from the hardware whisker can correctly predict contact point locations to within 1.5 cm about 74% of the time. However, if normalized output voltages from the hardware whiskers are used to train the algorithm (without calibrating to simulation), predictions improve to within 1.5 cm for about 96% of contact points and to within 0.6 cm for about 78% of contact points. This improvement suggests that as long as three appropriate predictor signals are chosen, calibrating to simulations may not be required. The sensor was next used to perform contour extraction on a cylinder and a cone. We show that basic contour extraction can be obtained with just two sweeps of the sensor. With further sweeps, it is expected that full 3D shape reconstruction could be achieved.
[Full Paper]
Hannah Emnett, Matthew M. Graff, Mitra J. Z. Hartmann
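The regression step described above lends itself to a compact illustration. Below is a minimal sketch, assuming scikit-learn and a synthetic stand-in for the whisker mechanics (the mapping and noise levels are hypothetical, not the paper's calibrated model): map three base signals to a 3D contact point with a multi-output random forest and score the fraction of predictions within 1.5 cm.

```python
# Multi-output random forest regression from three whisker-base signals
# (two bending moments, one axial-force proxy) to a 3D contact point.
# Data is synthetic; the smooth mapping below is a hypothetical stand-in.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
signals = rng.uniform(-1.0, 1.0, size=(n, 3))      # [M_y, M_z, F_x], normalized
contact_xyz = np.stack([
    0.1 * signals[:, 0] + 0.02 * signals[:, 2],
    0.1 * signals[:, 1] + 0.02 * signals[:, 2],
    0.05 * (signals[:, 0] ** 2 + signals[:, 1] ** 2),
], axis=1) + rng.normal(scale=0.002, size=(n, 3))   # contact point in meters

X_tr, X_te, y_tr, y_te = train_test_split(signals, contact_xyz, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)                               # natively multi-output

err = np.linalg.norm(model.predict(X_te) - y_te, axis=1)
print(f"fraction within 1.5 cm: {(err < 0.015).mean():.2f}")
```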
A Real-time Augmented Reality Surgical System for Overlaying Stiffness Information
We describe a surgical system that autonomously searches for tumors and dynamically displays a computer graphic model of them superimposed on the organ (or in our case, phantom). Once localized, the phantom is tracked in real time and augmented with overlaid stiffness information in 3D. We believe that such a system has the potential to quickly reveal the location and shape of tumors and that the visual overlay will reduce the cognitive overload of the surgeon. The contribution of this paper is the integration of disparate technologies to achieve this system. In fact, to the best of our knowledge, our approach is one of the first to incorporate state-of-the-art methods in registration, force sensing and tumor localization into a unified surgical system. First, the preoperative model is registered to the intra-operative scene using a Bingham distribution-based filtering approach. Active level set estimation is then used to find the location and shape of the tumors. We use a recently developed miniature force sensor to perform the palpation. The estimated stiffness map is then dynamically overlaid onto the registered preoperative model of the organ. We demonstrate the efficacy of our system by performing experiments on a phantom prostate model and other silicone organs with embedded stiff inclusions using the da Vinci research kit.
[Full Paper]
Nicolas Zevallos, Arun Srivatsan Rangaprasad, Hadi Salman, Lu Li, Jianing Qian, Saumya Saxena, Mengyun Xu, Kartik Patath, Howie Choset
A Real-Time Game Theoretic Planner for Autonomous Two-Player Drone Racing
To be successful in multi-player drone racing, a player must not only follow the race track in an optimal way, but also compete with other drones through strategic blocking, faking, and opportunistic passing while avoiding collisions. Since unveiling one's own strategy to the adversaries is not desirable, this requires each player to independently predict the other players' future actions. Nash equilibria are a powerful tool to model this and similar multi-agent coordination problems in which the absence of communication impedes full coordination between the agents. In this paper, we propose a novel receding horizon planning algorithm that, exploiting sensitivity analysis within an iterated best response computational scheme, can approximate Nash equilibria in real time. We demonstrate that our solution effectively competes against alternative strategies in a large number of drone racing simulations.
[Full Paper]
Riccardo Spica, Davide Falanga, Eric Cristofalo, Eduardo Montijano, Davide Scaramuzza, Mac Schwager
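The iterated best response scheme at the core of the planner can be illustrated on a toy game. The sketch below assumes an identical-interest (potential) game, so best-response dynamics provably terminate at a pure Nash equilibrium; the paper applies the same fixed-point idea to continuous receding-horizon trajectories via sensitivity analysis.

```python
# Iterated best response on an identical-interest game: both players
# receive M[a1, a2], so alternating argmax responses provably reach a
# pure Nash equilibrium (payoffs here are hypothetical).
import numpy as np

M = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 5.0]])

a1, a2 = 0, 0                      # initial strategy profile
for _ in range(20):
    b1 = int(np.argmax(M[:, a2]))  # player 1 best-responds to a2
    b2 = int(np.argmax(M[b1, :]))  # player 2 best-responds to b1
    if (b1, b2) == (a1, a2):       # mutual best responses: Nash point
        break
    a1, a2 = b1, b2
print(f"pure Nash at actions ({a1}, {a2}) with payoff {M[a1, a2]}")
```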
A Variable Stiffness Gripper with Antagonistic Magnetic Springs for Enhancing Manipulation
Object grasping with variable stiffness actuation not only improves the safety and robustness of the grasp but also can enhance dynamic manipulation. In this paper, we present the design aspects of a variable stiffness gripper and show how the controllable compliance of the fingers can improve the performance of dynamic manipulations such as hammering/hitting. The proposed gripper consists of two parallel fingers, with repulsive magnets used between the actuators and fingers as nonlinear springs. Thus, by controlling the air-gaps between magnets, the position and force-stiffness characteristics of the fingers can be adjusted simultaneously. Finally, an optimal stiffness problem is solved to maximize the impact force in a hammering task through maximizing the kinetic energy of the grasped object at the instant of impact. Despite the simplicity of the design, experimental results demonstrate the effectiveness of the gripper for dynamic manipulation.
[Full Paper]
Amirhossein Memar, Ehsan Esfahani
Data-Driven Measurement Models for Active Localization in Sparse Environments
We develop an algorithm to explore an environment to generate a measurement model for use in future localization tasks. Ergodic exploration with respect to the likelihood of a particular class of measurement (e.g., a contact detection measurement in tactile sensing) enables construction of the measurement model. Exploration with respect to the information density based on the data-driven measurement model enables localization. We test the two-stage approach in simulations of tactile sensing, illustrating that the algorithm is capable of identifying and localizing objects based on sparsely distributed binary contacts. Comparisons with our method show that visiting low-probability regions leads to acquisition of new information rather than increasing the likelihood of known information. Experiments with the Sphero SPRK robot validate the efficacy of this method for collision-based estimation and localization of the environment.
[Full Paper]
Ian Abraham, Anastasia Mavrommati, Todd Murphey
Adaptive Bias and Attitude Observer on the Special Orthogonal Group for True-North Gyrocompass Systems: Theory and Preliminary Results
This paper reports an adaptive sensor bias estimator and attitude observer operating directly on SO(3) for true-North gyrocompass systems that utilize six-degree-of-freedom inertial measurement units (IMUs) with three-axis accelerometers and three-axis gyroscopes (without magnetometers). Most present-day low-cost robotic vehicles rely on attitude estimation systems that employ micro-electromechanical systems (MEMS) magnetometers, angular rate gyros, and accelerometers to estimate magnetic heading and attitude with limited heading accuracy. Present-day MEMS gyros are not sensitive enough to dynamically detect Earth's rotation, and thus cannot be used to estimate true-North geodetic heading. In contrast, the reported gyrocompass system utilizes fiber optic gyroscope (FOG) IMU gyro and MEMS accelerometer measurements (without magnetometers) to dynamically estimate the instrument's time-varying attitude in real-time while the instrument is subject to a priori unknown rotations. Stability proofs, preliminary simulations, and a full-scale vehicle trial are reported that suggest the viability of the true-North gyrocompass system to provide dynamic real-time true-North heading, pitch, and roll while utilizing a comparatively low-cost FOG IMU.
[Full Paper]
Andrew Spielvogel, Louis Whitcomb
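A drastically simplified relative of the observer described above is the complementary attitude filter on SO(3) with adaptive gyro-bias estimation, sketched below with hypothetical gains and a static, noise-free instrument. Note that with gravity as the only reference direction, heading (and the gravity-axis bias component) stays unobservable, which is precisely why the paper relies on FOG gyros sensitive to Earth's rotation for true-North heading.

```python
# Complementary attitude observer on SO(3) with gyro-bias adaptation;
# gains, signals, and the static scenario are illustrative only.
import numpy as np

def hat(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):                               # Rodrigues' formula
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

dt, kP, kI = 0.01, 2.0, 0.1
g_world = np.array([0.0, 0.0, -1.0])          # gravity direction, world frame
R_true = exp_so3(np.array([0.3, -0.2, 0.5]))  # unknown true attitude (static)
b_true = np.array([0.02, -0.01, 0.03])        # unknown constant gyro bias

R_hat, b_hat = np.eye(3), np.zeros(3)
acc = R_true.T @ (-g_world)                   # accelerometer reads -g in body
for _ in range(5000):
    gyro = b_true                             # static body: true rate is zero
    e = np.cross(acc, R_hat.T @ (-g_world))   # gravity misalignment innovation
    b_hat += -kI * e * dt                     # adaptive bias estimate
    R_hat = R_hat @ exp_so3((gyro - b_hat + kP * e) * dt)

e_g = R_hat.T @ (-g_world)
print("gravity-alignment error (rad):",
      np.arccos(np.clip(acc @ e_g, -1.0, 1.0)))
```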
Agile Autonomous Driving using End-to-End Deep Imitation Learning
We present an end-to-end imitation learning system for agile, off-road autonomous driving using only low-cost on-board sensors. By imitating a model predictive controller equipped with advanced sensors, we train a deep neural network control policy to map raw, high-dimensional observations to continuous steering and throttle commands. Compared with recent approaches to similar tasks, our method requires neither state estimation nor on-the-fly planning to navigate the vehicle. Our approach relies on, and experimentally validates, recent imitation learning theory. Empirically, we show that policies trained with online imitation learning overcome well-known challenges related to covariate shift and generalize better than policies trained with batch imitation learning. Built on these insights, our autonomous driving system demonstrates successful high-speed off-road driving, matching the state-of-the-art performance.
[Full Paper]
Yunpeng Pan, Ching-An Cheng, Kamil Saigol, Keuntaek Lee, Xinyan Yan, Evangelos Theodorou, Byron Boots
Analytical Derivatives of Rigid Body Dynamics Algorithms
Rigid body dynamics is a well-established methodology in robotics. It can be exploited to exhibit the analytic form of kinematic and dynamic functions of the robot model. Two major algorithms, namely the recursive Newton-Euler algorithm (RNEA) and the articulated body algorithm (ABA), have been proposed so far to compute inverse dynamics and forward dynamics in a few microseconds. However, computing their derivatives remains a costly process, either using finite differences (costly and approximate) or automatic differentiation (difficult to implement and suboptimal). As computing the derivatives becomes an important issue (in optimal control, estimation, co-design or reinforcement learning), we propose in this paper new algorithms to efficiently compute them using closed-form formulations. We first explicitly differentiate RNEA, using the chain rule and adequate algebraic differentiation of spatial algebra. Then, using properties of the derivative of function composition, we show that the same algorithm can also be used to compute the derivatives of the direct dynamics with marginal additional cost. To this end, we finally introduce a new algorithm to compute the inverse of the joint-space inertia matrix, without explicitly computing the matrix itself. The algorithms have been implemented in an open-source C++ framework. The reported benchmarks, based on several robot models, display computational costs ranging from 4 microseconds (for a 7-dof arm) to 17 microseconds (for a 36-dof humanoid), i.e., outperforming state-of-the-art results. We also experimentally show the importance of exact computations (w.r.t. finite differences) when exhibiting the sparsity of the resulting matrices.
[Full Paper]
Justin Carpentier
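The accuracy argument against finite differences can be seen even on a 1-DoF pendulum, whose inverse dynamics stand in for RNEA in the sketch below; the paper's contribution is doing this analytically for full rigid-body trees via spatial algebra.

```python
# Closed-form vs finite-difference dynamics derivatives on a toy
# 1-DoF pendulum (point mass at the end of a massless rod).
import numpy as np

m, l, g = 1.0, 0.5, 9.81

def inverse_dynamics(q, v, a):
    # tau = I * a + gravity torque, with I = m l^2 for a point mass.
    return m * l**2 * a + m * g * l * np.sin(q)

def dtau_dq_analytic(q):
    return m * g * l * np.cos(q)

q, v, a = 0.7, 0.2, -1.0
for eps in (1e-4, 1e-6, 1e-8):
    fd = (inverse_dynamics(q + eps, v, a) - inverse_dynamics(q, v, a)) / eps
    err = abs(fd - dtau_dq_analytic(q))
    print(f"eps={eps:.0e}  finite-diff error={err:.2e}")
print("analytic derivative is exact at any configuration")
```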
Asymmetric Actor Critic for Image-Based Robot Learning
Deep reinforcement learning (RL) has proven a powerful technique in many sequential decision making domains. However, robotics poses many challenges for RL, most notably that training on a physical system can be expensive and dangerous, which has sparked significant interest in learning control policies using a physics simulator. While several recent works have shown promising results in transferring policies trained in simulation to the real world, they often do not fully utilize the advantage of working with a simulator. In this work, we propose the Asymmetric Actor Critic, which learns a vision-based control policy while taking advantage of access to the underlying state to significantly speed up training. Concretely, our algorithm employs an actor-critic training algorithm in which the critic is trained on full states while the actor (or policy) is trained on images. We show that using these asymmetric inputs improves performance on a range of simulated tasks. Finally, we combine this method with domain randomization and show real robot experiments for several tasks like picking, pushing, and moving a block. We achieve this simulation-to-real-world transfer without training on any real-world data. Videos of these experiments can be found in the supplementary material.
[Full Paper]
Lerrel Pinto, Marcin Andrychowicz, Peter Welinder, Wojciech Zaremba, Pieter Abbeel
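The asymmetric input structure is simple to express in code. Below is a minimal PyTorch sketch; network sizes, tensor shapes, and the DDPG-style update are illustrative placeholders, not the paper's exact architecture. The critic consumes the privileged simulator state while the actor sees only images.

```python
# Asymmetric actor-critic sketch: critic on full state, actor on images.
import torch
import torch.nn as nn

class Actor(nn.Module):            # image -> action
    def __init__(self, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, action_dim), nn.Tanh())
    def forward(self, img):
        return self.net(img)

class Critic(nn.Module):           # full state + action -> value
    def __init__(self, state_dim=10, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

actor, critic = Actor(), Critic()
img = torch.randn(8, 3, 64, 64)    # actor input: rendered observations
state = torch.randn(8, 10)         # critic input: privileged sim state
q_value = critic(state, actor(img))   # asymmetric inputs meet here
actor_loss = -q_value.mean()       # ascend the critic's estimate
actor_loss.backward()              # (a full method would freeze the critic here)
```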
Autonomous Adaptive Modification of Unstructured Environments
We present and validate a property-driven autonomous system that modifies its environment to achieve and maintain navigability over irregular 3-dimensional terrain. This capability is essential in systems that operate in unstructured outdoor or remote environments, either on their own or as part of a team. Our work focuses on using decision procedures in our building strategy that tie building actions to the function of the resulting structure, giving rise to adaptive and robust building behavior. Our approach is novel in its functionality in full 3D unstructured terrain, driven by continuous evaluation of and reaction to terrain properties, rather than relying on a structure blueprint. We choose an experimental setup and building material that closely resemble real-world scenarios, and demonstrate the effectiveness of our work using a low-cost robot system.
[Full Paper]
Maira Saboia da Silva, Vivek Thangavelu, Walker Gosrich, Nils Napp
Autonomous Thermalling as a Partially Observable Markov Decision Process
Small uninhabited aerial vehicles (sUAV) commonly rely on active propulsion to stay airborne, significantly reducing flight time and range. To address this, autonomous soaring seeks to utilize free atmospheric energy in the form of updrafts (thermals). However, their irregular nature at low altitudes makes them difficult for existing methods to exploit. We model autonomous thermalling as a Bayesian reinforcement learning (BRL) problem and present a controller that solves the resulting partially observable Markov decision process (POMDP) in a receding horizon fashion. We implement it as part of ArduPilot, a popular open-source autopilot, and compare it to an alternative in a series of live flight tests that involve two sUAVs thermalling simultaneously. Our BRL-based controller shows significant advantages over the alternative in these flight tests.
[Full Paper]
Iain Guilliard, Rick Rogahn, Jim Piavis, Andrey Kolobov
Bayesian Tactile Exploration for Compliant Docking With Uncertain Shapes
This paper presents a Bayesian approach to goal-based tactile exploration of planar shapes in the presence of both localization and shape uncertainty. The docking problem asks for the robot's end-effector to reach a stopping point of contact that resists a desired load. The proposed method repeatedly performs inference, planning, and execution steps. Given a prior probability distribution over object shape and sensor readings from previously executed motions, the posterior distribution is inferred using a novel and efficient Hamiltonian Monte Carlo method. The optimal docking site is chosen to maximize docking probability, using a closed-form probabilistic simulation that accepts rigid and compliant motion models under Coulomb friction. Numerical experiments demonstrate that this method requires fewer exploration actions to dock than heuristics and information-gain strategies.
[Full Paper]
Kris Hauser
Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach
This paper presents a real-time, object-independent grasp synthesis method which can be used for closed-loop grasping. Our proposed Generative Grasping Convolutional Neural Network (GG-CNN) predicts the quality and pose of grasps at every pixel. This one-to-one mapping from a depth image overcomes limitations of current deep-learning grasping techniques by avoiding discrete sampling of grasp candidates and long computation times. Additionally, our GG-CNN is orders of magnitude smaller while detecting stable grasps with performance equivalent to current state-of-the-art techniques. The lightweight and single-pass generative nature of our GG-CNN allows for closed-loop control at up to 50 Hz, enabling accurate grasping in non-static environments where objects move and in the presence of robot control inaccuracies. In our real-world tests, we achieve an 83% grasp success rate on a set of previously unseen objects with adversarial geometry and 88% on a set of household objects that are moved during the grasp attempt. We also achieve 81% accuracy when grasping in dynamic clutter.
[Full Paper]
Douglas Morrison, Juxi Leitner, Peter Corke
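A minimal sketch of the pixel-wise, single-pass mapping is given below in PyTorch; layer sizes are hypothetical, and the angle head uses the (sin 2θ, cos 2θ) encoding common in this line of work. Each output map has one value per input pixel, so grasp selection reduces to an argmax.

```python
# Tiny fully convolutional network producing per-pixel grasp quality,
# angle, and width maps from a depth image (sizes illustrative only).
import torch
import torch.nn as nn

class TinyGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2, padding=2), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 8, 4, stride=2, padding=1), nn.ReLU())
        # One head per output map; angle encoded as (sin 2θ, cos 2θ).
        self.quality = nn.Conv2d(8, 1, 1)
        self.angle = nn.Conv2d(8, 2, 1)
        self.width = nn.Conv2d(8, 1, 1)

    def forward(self, depth):
        h = self.decoder(self.encoder(depth))
        return self.quality(h), self.angle(h), self.width(h)

net = TinyGraspNet()
depth = torch.randn(1, 1, 300, 300)          # single depth image
q, ang, w = net(depth)                       # maps at full input resolution
best = torch.argmax(q.flatten())             # pixel of the best grasp
print(q.shape, "best grasp at pixel", divmod(int(best), q.shape[-1]))
```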
Constant Factor Time Optimal Multi-Robot Routing on High-Dimensional Grids in Quadratic Time
Let G = (V, E) be an m_1 × ... × m_k grid for some arbitrary constant k. We establish that O(m_1 + ... + m_k) makespan time-optimal labeled multi-robot path planning can be realized on G in O(|V|^2) running time, even when vertices of G are fully occupied by robots. When all dimensions are of equal sizes, the running time approaches O(|V|). Using this baseline algorithm, which provides average-case O(1)-approximate time-optimal solutions, we further develop the first worst-case O(1)-approximate algorithm that again runs in O(|V|^2) time for two and three dimensions. We note that the problem has a worst-case running time lower bound of Ω(|V|^2).
[Full Paper]
Jingjin Yu
Contact-Aided Invariant Extended Kalman Filter for Legged Robot State Estimation
This paper derives a contact-aided inertial navigation observer for a 3D bipedal robot using the theory of invariant observer design. Aided inertial navigation is fundamentally a nonlinear observer design problem and thus current solutions are based on approximations of the system dynamics, such as an Extended Kalman Filter (EKF), which uses a system's Jacobian linearization along the current best estimate of its trajectory. On the basis of the theory of invariant observer design by Barrau and Bonnabel, and in particular, the Invariant EKF (InEKF), we show that the error dynamics of the point contact-inertial system also follows a log-linear autonomous differential equation and, hence, the observable state variables can be rendered convergent with a domain of attraction that is independent of the system's trajectory. Due to the log-linear form of the error dynamics, it is not necessary to perform a nonlinear observability analysis to show that when using an Inertial Measurement Unit (IMU) and contact sensors, the absolute position of the robot and a rotation about the gravity vector (yaw) are unobservable. We further augment the state of the developed InEKF with IMU biases as the online estimation of these parameters has a crucial impact on system performance. We evaluate the convergence of the proposed system with that of the commonly used quaternion-based EKF observer using a Monte-Carlo simulation. In addition, our experimental evaluation using a Cassie-series bipedal robot shows that the contact-aided InEKF provides better performance in comparison with quaternion-based EKF as a result of exploiting symmetries present in the system dynamics.
[Full Paper]
Ross Hartley, Maani Ghaffari Jadidi, Jessy Grizzle, Ryan M Eustice
Contact-Implicit Optimization of Locomotion Trajectories for a Quadrupedal Microrobot
Planning locomotion strategies for legged microrobots is challenging due to their complex morphology, high frequency passive dynamics, and discontinuous contact interactions with the environment. Consequently, such research is often driven by time-consuming experimental tuning of controllers designed with simplified models. As an alternative, we present a framework for systematically modeling, planning, and controlling legged microrobots. We develop a three-dimensional dynamic model of a 1.43 g quadrupedal microrobot that has complexity (e.g., number of actuated degrees-of-freedom) similar to larger-scale legged robots. We then adapt a recent variational contact-implicit trajectory optimization method to generate feasible whole-body locomotion plans for this robot. We demonstrate that these plans can be tracked with simple joint-space controllers that are suitable for computationally constrained microrobots. Our method is used to plan periodic gaits at multiple stride frequencies and on various surfaces. These gaits achieve high per-cycle velocities, including a maximum of 10.87 mm/cycle, which is 33% faster than previously measured for this robot. Furthermore, we plan and execute a vertical jump of 9.96 mm, which is 78% of the robot's body height. To the best of our knowledge, this is the first end-to-end demonstration of planning and tracking whole-body dynamic locomotion on a millimeter-scale legged robot.
[Full Paper]
Neel Doshi, Kaushik Jayaram, Benjamin Goldberg, Zachary Manchester, Robert Wood, Scott Kuindersma
Coordination of back bending and leg movements for quadrupedal locomotion
Many quadrupedal animals have lateral degrees of freedom in their backs that assist locomotion. This paper seeks to use a robotic model to demonstrate that back bending assists not only forward motion, but also lateral and turning motions. This paper uses geometric mechanics to prescribe gaits that coordinate both leg movements and back bending motion. Using these geometric tools, we show that back bending can improve stride displacement in the forward, rotational, and lateral directions. In addition to improving locomotion performance, back bending can also expand the target position space a robot can reach within one gait cycle. Our results are verified by conducting experiments with a robot moving on granular materials.
[Full Paper]
Baxi Chong, Yasemin Ozkan Aydin, Chaohui Gong, Guillaume Sartoretti, Yunjin Wu, Jennifer Rieser, Haosen Xing, Jeffery Rankin, Krijn Michel, Alfredo Nicieza, John Hutchinson, Daniel Goldman, Howie Choset
Creating Foldable Polyhedral Nets
Recent innovations enable robots to be manufactured using low-cost planar active material and self-folded into 3D structures. Algorithmically creating a 2D crease pattern to realize the desired 3D structure involves two steps: (1) generating unfoldings (i.e., polyhedral nets), and (2) creating collision-free folding motions that take the polyhedral net back to the 3D shape. Current practice decouples net design from folding motion and treats them as two independent steps. This creates a major challenge in creating self-folding machines because, given a polyhedron P, the foldability of P's nets can vary significantly. Certain nets may not be foldable even using the most sophisticated folding motion planners. This paper presents a novel learning strategy to generate foldable nets with optimized foldability. Direct evaluation of the foldability of a net is nontrivial and can be computationally expensive. Our theoretical contribution shows that it is possible to approximate foldability by combining the geometric and topological properties of a net. The experimental results show that our new unfolder will not only generate valid unfoldings but also ones that are easy to fold. Consequently, our approach makes folding much simpler in designing self-folding machines.
[Full Paper]
Yue Hao, Yun-hyeong Kim, Zhonghua Xi, Jyh-Ming Lien
Differentiable Particle Filters: End-to-End Learning with Algorithmic Priors
We present differentiable particle filters (DPFs): a differentiable implementation of the particle filter algorithm with learnable motion and measurement models. Since DPFs are end-to-end differentiable, we can efficiently train their models by optimizing the right objective---end-to-end state estimation performance---rather than a proxy objective such as accuracy of the individual models. Compared to generic differentiable architectures such as long short-term memory networks (LSTMs), differentiable particle filters encode the structure of recursive state estimation with prediction and measurement update that operate on a probability distribution over states. This encoded structure represents an algorithmic prior that ensures explainability and improves performance in state estimation problems. Our experiments on simulated and real data show substantial benefits from end-to-end learning and algorithmic priors, e.g. reducing error rates by ~80%. Our experiments also show that unlike LSTMs, DPFs learn localization in a policy-agnostic way.
[Full Paper]
Rico Jonschkowski, Divyam Rastogi, Oliver Brock
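One ingredient that makes the filter end-to-end trainable is soft resampling, which keeps importance weights differentiable by sampling from a mixture with the uniform distribution. A single filter step is sketched below in PyTorch, with hypothetical Gaussian stand-ins for the learned motion and measurement models.

```python
# One differentiable particle-filter step: stochastic prediction,
# measurement reweighting, and soft resampling (models are stand-ins).
import math
import torch

torch.manual_seed(0)
N = 256                                        # number of particles
particles = torch.randn(N, 2)                  # 2D state hypotheses
log_w = torch.full((N,), -math.log(N))         # uniform log-weights

action = torch.tensor([0.5, 0.0])
particles = particles + action + 0.05 * torch.randn(N, 2)   # motion model

z = torch.tensor([0.6, 0.1])                   # current observation
# Gaussian stand-in for the learned measurement model p(z | particle).
log_w = log_w - 0.5 * ((particles - z) ** 2).sum(dim=1) / 0.1
log_w = log_w - torch.logsumexp(log_w, dim=0)  # normalize

# Soft resampling: draw indices from a mixture with the uniform
# distribution, so corrected weights stay nonzero and differentiable.
alpha = 0.5
mix = alpha * log_w.exp() + (1 - alpha) / N
idx = torch.multinomial(mix, N, replacement=True)
new_w = log_w.exp()[idx] / mix[idx]            # importance correction
particles = particles[idx]
estimate = (new_w / new_w.sum()) @ particles   # weighted posterior mean
print("posterior mean estimate:", estimate)
```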
Differentiable Physics and Stable Modes for Tool-Use and Manipulation Planning
We consider the problem of sequential manipulation and tool-use planning in domains that include physical interactions such as hitting and throwing. The approach integrates a Task And Motion Planning formulation with primitives that either impose stable kinematic constraints or differentiable dynamical and impulse exchange constraints on the path optimization level. We demonstrate our approach on a variety of physical puzzles that involve tool use and dynamic interactions. We also collected data of humans solving analogous trials, helping us to discuss prospects and limitations of the proposed approach.
[Full Paper]
Marc Toussaint, Kelsey Allen, Kevin Smith, Joshua Tenenbaum
Directionally Controlled Time-of-Flight Ranging for Mobile Sensing Platforms
Scanning time-of-flight (TOF) sensors obtain depth measurements by directing modulated light beams across a scene. We demonstrate that control of the directional scanning patterns can enable novel algorithms and applications. Our analysis occurs entirely in the angular domain and consists of two ideas. First, we show how to exploit the angular support of the light beam to improve reconstruction results. Second, we describe how to control the light beam direction in a way that maximizes a well-known information theoretic measure. Using these two ideas, we demonstrate novel applications such as adaptive TOF sensing, LIDAR zoom, LIDAR edge sensing for gradient-based reconstruction and energy efficient LIDAR scanning. Our contributions can apply equally to sensors using mechanical, optoelectronic or MEMS-based approaches to modulate the light beam, and we show results here on a MEMS mirror-based LIDAR system. In short, we describe new adaptive directionally controlled TOF sensing algorithms which can impact mobile sensing platforms such as robots, wearable devices and IoT nodes.
[Full Paper]
Zaid Tasneem, Dingkang Wang, Huikai Xie, Koppal Sanjeev
Dual-Speed MR Safe Pneumatic Stepper Motors
In breast cancer diagnosis it is essential to perform precise interventions. Robotic systems actuated by MR safe pneumatic stepper motors under near-real-time MRI guidance improve the accuracy of targeting the tumour. Because current systems are controlled by electromagnetic valves outside the Faraday cage of the MRI scanner, and long pneumatic tubes limit the stepping frequency, the achievable accuracy or speed is limited. Two dual-speed stepper motors were designed to overcome this limitation. The linear motor measures 50x32x14 mm (excluding racks) and has step sizes 1.7 mm and 0.3 mm. The maximum combined speed under load is 20 mm/s, measured force is 24 N and positioning accuracy is 0.1 mm. The rotational motor measures Ø 30x32 mm (excluding axles) and has step sizes 10° and 12.9°. Under load, its maximum angular speed is 229°/s or 38.2 RPM, maximum torque is 74 N mm and positioning accuracy is 1°. By operating the valves in a coordinated way, both high-speed and precise position control can be achieved. With these specifications, the motors are significantly better suited to actuate MR safe surgical robots than state-of-the-art motors.
[Full Paper]
Vincent Groenhuis, Françoise Siepel, Stefano Stramigioli
Effective Plug-and-Play Supervisory Control Using Muscle and Brain Signals for Real-Time Gesture and Error Detection
Control of robots in safety-critical tasks, particularly for situations when costly errors may occur, is paramount for realizing the vision of pervasive human-robot collaborations. For these cases, the ability to use human cognition in the loop can be key for recovering safe robot operation. This paper combines two streams of human biosignals, EEG and EMG, to achieve fast and accurate human intervention in a supervisory control task. In particular, this paper presents an end-to-end system for classification of ErrP signals (produced in the human supervisor's brain when he/she observes an error being committed by the robot), a continuous rolling-window classifier for EMG signals that allows the human to actively correct the robot operation on demand, and a framework for integrating these two streams for fast and effective human intervention. Moreover, the system allows for "plug-and-play" operation, demonstrating accurate performance even with new users whose biosignals had not been used for training the classifiers. The resulting hybrid control system for safety-critical HRI tasks is evaluated in a target selection task with 7 untrained human subjects.
[Full Paper]
Joseph DelPreto, Andres F. Salazar-Gomez, Stephanie Gil, Ramin M. Hasani, Frank H. Guenther, Daniela Rus
Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments
Accurate and reliable localization and mapping is a fundamental building block for most autonomous robots. For this purpose, we propose a novel, dense approach to laser-based mapping that operates on three-dimensional point clouds obtained from typical rotating laser scanners. We construct a surfel-based map representation and estimate the changes in the pose of the robot by exploiting projective data association between the current scan and a rendered model view from that surfel map. For detection and verification of a loop closure, we leverage the map representation to compose a virtual view of the map before a potential loop closure, which enables a more robust detection even with low overlap between scan and already mapped areas. Our approach is highly efficient and allows for real-time capable registration, while at the same time detecting loop closures and updating the map using the optimized poses in an online fashion. Our experimental evaluation on the KITTI Vision Benchmark shows that our approach can efficiently estimate globally consistent maps in large scale environments using only point cloud data.
[Full Paper]
Jens Behley, Cyrill Stachniss
Efficiently Sampling from Underlying Models
The capability and mobility of exploration robots are increasing rapidly, yet missions will always be constrained by one main resource: time. Time limits the number of samples a robot can collect, the sites it can analyze, and the availability of human oversight, so it is imperative that the robot make intelligent decisions about when, where, and what to sample, a process known as adaptive sampling. This work advances the state of the art in adaptive sampling for exploration robotics. We take advantage of the fact that rover operations are typically not performed in a vacuum; extensive contextual data is often present, most often in the form of orbital imagery, rover navigation images, and prior instrument measurements. Using this context, we apply Bayesian and nonparametric models to decide where best to sample under a limited budget, using real X-ray lithochemistry data. We find that our methods improve the diversity of collected samples and select samples that are representative of the dataset. We find that model-based approaches made scalable with Dirichlet processes improve sampling results when the underlying number of classes and class distribution are unknown. Unlike previous works, our approaches reduce the impact of noise on sampling location, a common problem when selecting samples based on noisy or incomplete contextual data.
[Full Paper]
Greydon Foil, David Wettergreen
Embedded High Precision Control and Corn Stand Counting Algorithms for an Ultra-Compact 3D Printed Field Robot
This paper presents embedded high precision control and corn stand counting algorithms for a low-cost, ultra-compact 3D printed and autonomous field robot for agricultural operations. Currently, plant traits, such as emergence rate, biomass, vigor and stand counting, are measured manually. This is highly labor-intensive and prone to errors. The robot, termed TerraSentia, is designed to automate the measurement of plant traits for efficient phenotyping as an alternative to manual measurements. In this paper, we formulate a nonlinear moving horizon estimator that identifies key terrain parameters using onboard robot sensors and a learning-based nonlinear model predictive control (NMPC) that ensures high precision path tracking in the presence of unknown wheel-terrain interaction. Moreover, we develop a machine vision algorithm to enable TerraSentia to count corn stands by driving through the fields autonomously. We present results of an extensive field-test study showing that i) the robot can track paths precisely, with less than 5 cm error, so that the robot is less likely to damage plants, and ii) the machine vision algorithm is robust against interference from leaves and weeds; the system has been verified in corn fields at growth stages V4, V6, VT, R2, and R6 at five different locations. The robot predictions agree well with the ground truth, with a correlation coefficient of R=0.96.
[Full Paper]
Erkan Kayacan, Zhongzhong Zhang, Girish Chowdhary
EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras
Event-based cameras have shown great promise in a variety of situations where frame-based cameras suffer, such as high speed motions and high dynamic range scenes. However, developing algorithms for event measurements requires a new class of hand-crafted algorithms. Deep learning has shown great success in providing model-free solutions to many problems in the vision community, but existing networks have been developed with frame-based images in mind, and there does not exist the wealth of labeled data for events as there does for images for supervised training. To address these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event-based cameras. In particular, we introduce an image-based representation of a given event stream, which is fed into a self-supervised neural network as the sole input. The corresponding grayscale images captured from the same camera at the same time as the events are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events only in a variety of different scenes, with performance competitive to image-based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain.
[Full Paper]
Alex Zhu, Liangzhe Yuan, Kenneth Chaney, Kostas Daniilidis
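The supervisory signal amounts to a photometric warping loss: warp one grayscale frame by the predicted flow and penalize its difference from the other. A minimal PyTorch sketch, with random tensors standing in for real frames and network output:

```python
# Self-supervised photometric loss: warp the next frame by the predicted
# flow and compare against the previous frame (placeholder tensors).
import torch
import torch.nn.functional as F

B, H, W = 1, 64, 64
img_prev = torch.rand(B, 1, H, W)
img_next = torch.rand(B, 1, H, W)
flow = torch.zeros(B, 2, H, W, requires_grad=True)  # predicted flow (pixels)

ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                        torch.arange(W, dtype=torch.float32), indexing="ij")
# Target sampling locations, normalized to [-1, 1] for grid_sample.
x_new = (xs + flow[:, 0]) / (W - 1) * 2 - 1
y_new = (ys + flow[:, 1]) / (H - 1) * 2 - 1
grid = torch.stack([x_new, y_new], dim=-1)          # (B, H, W, 2)

warped = F.grid_sample(img_next, grid, align_corners=True)
photometric_loss = (warped - img_prev).abs().mean() # L1 difference
photometric_loss.backward()                         # gradient for the flow net
print("loss:", float(photometric_loss))
```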
Exploiting Stochasticity for Navigation in Gyre Flows
We present a control strategy to regulate the inter-gyre switching time of an agent operating in a gyre flow. The proposed control strategy exploits the stochasticity of the underlying environment to affect inter-gyre transitions. We show how control can be used to enhance or abate the mean escape time and present a strategy to achieve a desired mean escape time. We show that the proposed control strategy can achieve any desired escape time in an interval governed by the maximum available control. We demonstrate the effectiveness of the strategy in simulations.
[Full Paper]
Dhanushka Kularatne, M. Ani Hsieh, Eric Forgoston
Fast Online Trajectory Optimization for the Bipedal Robot Cassie
We apply fast online trajectory optimization for multi-step motion planning to Cassie, a bipedal robot designed to exploit natural spring-mass locomotion dynamics using lightweight, compliant legs. Our motion planning formulation simultaneously optimizes over center of mass motion, footholds, and center of pressure for a simplified model that combines transverse linear inverted pendulum and vertical spring dynamics. A vertex-based representation of the support area combined with this simplified dynamic model that allows closed form integration leads to a fast nonlinear programming problem formulation. This optimization problem is continuously solved online in a model predictive control approach. The output of the reduced-order planner is fed into a quadratic programming based operational space controller for execution on the full-order system. We present simulation results showing the performance and robustness to disturbances of the planning and control framework. Preliminary results on the physical robot show functionality of the operational space control system, with integration of the trajectory planner a work in progress.
[Full Paper]
Taylor Apgar, Patrick Clary, Kevin Green, Alan Fern, Jonathan Hurst
FlashFusion: Real-time Globally Consistent Dense 3D Reconstruction using CPU Computing
Aiming at the practical usage of dense 3D reconstruction on portable devices, we propose FlashFusion, a Fast LArge-Scale High-resolution (sub-centimeter level) 3D reconstruction system without the use of GPU computing. It enables globally-consistent localization through a robust yet fast global bundle adjustment scheme, and realizes spatial hashing based volumetric fusion running at 300Hz and rendering at 25Hz via highly efficient valid chunk selection and mesh extraction schemes. Extensive experiments on both real world and synthetic datasets demonstrate that FlashFusion succeeds in enabling real-time, globally consistent, high-resolution (5mm), and large-scale dense 3D reconstruction using highly-constrained computation, i.e., the CPU computing on portable devices.
[Full Paper]
Lei Han, Lu Fang
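The chunk-selection idea rests on spatial hashing: world points map to coarse integer chunk coordinates used as hash keys, so only chunks near observed surfaces are ever allocated. A toy sketch (chunk size and scene are hypothetical):

```python
# Spatial-hashing chunk lookup: surface points allocate sparse chunks
# on demand instead of a dense voxel grid (sizes illustrative only).
import numpy as np

CHUNK = 0.08                                   # 8 cm chunks of 5 mm voxels
chunks = {}                                    # hash map: key -> voxel data

def chunk_key(p):
    return tuple(np.floor(p / CHUNK).astype(int))

rng = np.random.default_rng(0)
dirs = rng.standard_normal((1000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
surface = 0.25 + 0.2 * dirs                    # depth points lie on a surface

for p in surface:
    chunks.setdefault(chunk_key(p), []).append(p)   # allocate on demand

dense = int(np.ceil(0.5 / CHUNK)) ** 3         # chunks a dense grid would use
print(f"{len(chunks)} surface chunks allocated vs {dense} in a dense grid")
```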
Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning
We introduce a model and algorithm for following high-level navigation instructions by mapping directly from images, instructions and pose estimates to continuous low-level velocity commands for real-time control. The Grounded Semantic Mapping Network (GSMN) is a fully-differentiable neural network architecture that includes modular and interpretable perception, grounding, mapping and planning modules. It builds an explicit semantic map in the world reference frame. The information stored in the map is learned from experience, while the local-to-world transformation used for grid-cell lookup is computed explicitly within the network. We train the model using a modified variant of DAgger optimized for speed and memory. We test GSMN in rich virtual environments on a realistic quadcopter simulator powered by Microsoft AirSim and show that our model outperforms strong neural baselines and almost reaches the performance of its teacher expert policy. Its success is attributed to the spatial transformation and mapping modules, which also provide highly interpretable maps that reveal the reasoning of the model.
[Full Paper]
Valts Blukis, Nataly Brukhim, Andrew Bennett, Ross Knepper, Yoav Artzi
Full-Frame Scene Coordinate Regression for Image-Based Localization
Image-based localization, or camera relocalization, is a fundamental problem in computer vision and robotics, and it refers to estimating camera pose from an image. Recent state-of-the-art approaches use learning based methods, such as Random Forests (RFs) and Convolutional Neural Networks (CNNs), to regress for each pixel in the image its corresponding position in the scene's world coordinate frame, and solve the final pose via a RANSAC-based optimization scheme using the predicted correspondences. In this paper, instead of in a patch-based manner, we propose to perform the scene coordinate regression in a full-frame manner to make the computation efficient at test time and, more importantly, to add more global context to the regression process to improve the robustness. To do so, we adopt a fully convolutional encoder-decoder neural network architecture which accepts a whole image as input and produces scene coordinate predictions for all pixels in the image. However, using more global context is prone to overfitting. To alleviate this issue, we propose to use data augmentation to generate more data for training. In addition to the data augmentation in 2D image space, we also augment the data in 3D space. We evaluate our approach on the publicly available 7-Scenes dataset and experiments show that it has better scene coordinate predictions, and achieves state-of-the-art results in localization with improved robustness on the hardest frames (e.g., frames with repeated structures).
[Full Paper]
Xiaotian Li, Juha Ylioinas, Juho Kannala
Generalized WarpDriver: Unified Collision Avoidance for Multi-Robot Systems in Arbitrarily Complex Environments
In this paper we present a unified collision-avoidance algorithm for the navigation of arbitrary agents, from pedestrians to various types of robots, including vehicles. This approach significantly extends the WarpDriver algorithm specialized for disc-like agents (e.g. crowds) to a wide array of robots in the following ways: (1) the new algorithm is more robust, unifying the original set of Warp Operators for different non-linear extrapolations of motion into a single, general operator; (2) the algorithm is generalized to support agent dynamics and additional shapes beyond just circles; and (3) with the addition of a few simple soft constraints, the algorithm can be used to simulate vehicle traffic. Thanks to the generality of the unified algorithm without special-case handling, the new capabilities are tightly integrated at the level of collision avoidance, rather than as added layers of multiple heuristics on top of various collision-avoidance schemes designed independently for pedestrians vs. different types of robots and vehicles.
[Full Paper]
David Wolinski, Ming Lin
Geometry-aware Tracking of Manipulability Ellipsoids
Body posture can greatly influence human performance when carrying out manipulation tasks. Adopting an appropriate pose helps us regulate our motion and strengthen our capability to achieve a given task. This effect is also observed in robotic manipulation where the robot joint configuration affects not only the ability to move freely in all directions in the workspace, but also the capability to generate forces along different axes. In this context, manipulability ellipsoids arise as a powerful descriptor to analyze, control and design the robot dexterity as a function of the articulatory joint configuration. This paper presents a new tracking control scheme in which the robot is requested to follow a desired profile of manipulability ellipsoids, either as its main task or as a secondary objective. The proposed formulation exploits tensor-based representations and takes into account that manipulability ellipsoids lie on the manifold of symmetric positive definite matrices. The proposed mathematical development is compatible with statistical methods providing 4th-order covariances, which are here exploited to reflect the tracking precision required by the task. Extensive evaluations in simulation and two experiments with a real redundant manipulator validate the feasibility of the approach, and show that this control formulation outperforms previously proposed approaches.
[Full Paper]
Noémie Jaquier, Leonel Rozo, Sylvain Calinon
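For a concrete sense of the tracked object: the velocity manipulability ellipsoid at a configuration q is characterized by the symmetric positive-definite matrix J(q)J(q)^T. A sketch for a toy planar 2-link arm (link lengths arbitrary):

```python
# Velocity manipulability ellipsoid of a planar 2-link arm: the SPD
# matrix J J^T whose eigenstructure gives the ellipsoid's axes.
import numpy as np

l1, l2 = 1.0, 0.8

def jacobian(q1, q2):
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

J = jacobian(0.3, 0.9)
M = J @ J.T                                  # manipulability ellipsoid (SPD)
eigvals, eigvecs = np.linalg.eigh(M)
print("ellipsoid semi-axis lengths:", np.sqrt(eigvals))
print("manipulability index:", np.sqrt(np.linalg.det(M)))
```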
GPU-Based Max Flow Maps in the Plane
One main challenge in multi-agent navigation is to generate trajectories minimizing bottlenecks in generic polygonal environments with many obstacles. In this paper we approach this problem globally by taking into account the maximum flow capacity of a given polygonal environment. Given the difficulty of solving the continuous maximum flow of a planar environment, we introduce a new GPU method which is able to compute maximum flow maps in arbitrary two-dimensional polygonal domains. Once the flow is computed, we then propose a method to extract lane trajectories according to the size of the agents and to optimize the trajectories in length while keeping constant the maximum flow achieved by the system of trajectories. As a result we are able to generate trajectories of maximum flow from source to destination edges across a generic set of polygonal obstacles, enabling the deployment of large numbers of agents optimally with respect to the maximum flow capacity of the environment. Our overall method guarantees that no bottlenecks are formed. Our system produces trajectories which are globally optimal with respect to the flow capacity and locally optimal with respect to the total length of the system of trajectories.
[Full Paper]
Renato Farias, Marcelo Kallmann
Handling implicit and explicit constraints in manipulation planning
This paper deals with manipulation planning. The problem consists in automatically computing paths for a system composed of one or several robots, with one or several grippers and one or several objects that can be grasped and moved by the robots. The problem gives rise to constraints that can be explicit -- an object is in a gripper -- or implicit -- an object is held by two different grippers. This paper proposes an algorithm that handles such sets of constraints and solves them explicitly as much as possible. When all constraints cannot be made explicit, substitution between variables is performed to leave the resulting implicit constraint with as few variables as possible. The manipulation planning problem is modelled as a constraint graph that stores all the constraints of the problem.
[Full Paper]
Florent Lamiraux, Joseph Mirabel
HyP-DESPOT: A Hybrid Parallel Algorithm for Online Planning under Uncertainty
Planning under uncertainty is critical for robust robot performance in uncertain, dynamic environments, but it incurs high computational cost. State-of-the-art online search algorithms, such as DESPOT, have vastly improved the computational efficiency of planning under uncertainty and made it a valuable tool for robotics in practice. This work takes one step further by leveraging both CPU and GPU parallelization in order to achieve near real-time online planning performance for complex tasks with large state, action, and observation spaces. Specifically, we propose Hybrid Parallel DESPOT (HyP-DESPOT), a massively parallel online planning algorithm that integrates CPU and GPU parallelism in a multi-level scheme. It performs parallel DESPOT tree search by simultaneously traversing multiple independent paths using multi-core CPUs and performs parallel Monte-Carlo simulations at the leaf nodes of the search tree using GPUs. Experimental results show that HyP-DESPOT speeds up online planning by up to hundreds of times, compared with the original DESPOT, in several challenging robotic tasks in simulation.
[Full Paper]
Panpan Cai, Yuanfu Luo, David Hsu, Wee Sun Lee
Improving Multi-Robot Behavior Using Learning-Based Receding Horizon Task Allocation
Planning efficient and coordinated policies for a team of robots is a computationally demanding problem, especially when the system faces uncertainty in the outcome or duration of actions. In practice, approximation methods are usually employed to plan reasonable team policies in an acceptable time. At the same time, many typical robotic tasks include a repetitive pattern. On the one hand, this multiplies the increased cost of inefficient solutions. But on the other hand, it also provides the potential for improving an initial, inefficient solution over time. In this paper, we consider the case that a single mission specification is given to a multi-robot system, describing repetitive tasks which allow the robots to parallelize work. We propose here a decentralized coordination scheme which enables the robots to decompose the full specification, execute distributed tasks, and improve their strategy over time.
[Full Paper]
Philipp Schillinger, Mathias Buerger, Dimos Dimarogonas
In-Hand Manipulation via Motion Cones
In this paper we present the mechanics and algorithms to compute the set of feasible motions of an object pushed in a plane. This set is known as the motion cone and was previously described for non-prehensile manipulation tasks in the horizontal plane. We generalize its geometric construction to a broader set of planar tasks, where external forces such as gravity influence the dynamics of pushing, and prehensile tasks, where there are complex interactions between the gripper, object, and pusher. We show that the motion cone is defined by a set of low-curvature surfaces and provide a polyhedral cone approximation to it. We verify its validity with 2000 pushing experiments recorded with a motion tracking system. Motion cones abstract the algebra involved in simulating frictional pushing by providing bounds on the set of feasible motions and by characterizing which pushes will stick or slip. We demonstrate their use for the dynamic propagation step in a sampling-based planning algorithm for in-hand manipulation. The planner generates trajectories that involve sequences of continuous pushes, with 5-1000x speed improvements over equivalent algorithms.
[Full Paper]
Nikhil Chavan Dafle, Rachel Holladay, Alberto Rodriguez
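Since a polyhedral cone is the set of nonnegative combinations of its generators, checking whether a candidate object motion lies inside the approximated motion cone reduces to nonnegative least squares. A sketch with hypothetical generator twists (not derived from actual contact mechanics):

```python
# Membership test for a polyhedral motion-cone approximation via
# nonnegative least squares (generator twists are illustrative).
import numpy as np
from scipy.optimize import nnls

# Columns: generator twists (vx, vy, omega) of the approximated cone.
generators = np.array([[1.0, 0.2, 0.0],
                       [1.0, -0.2, 0.0],
                       [1.0, 0.0, 0.5],
                       [1.0, 0.0, -0.5]]).T

def in_motion_cone(twist, tol=1e-8):
    coeffs, residual = nnls(generators, twist)
    return residual < tol                # exact nonnegative combo exists

print(in_motion_cone(np.array([1.0, 0.0, 0.1])))   # inside  -> True
print(in_motion_cone(np.array([-1.0, 0.0, 0.0])))  # outside -> False
```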
Interactive Visual Grounding of Referring Expressions for Human-Robot Interaction
This paper presents INGRESS, a robot system that follows human natural language instructions to pick and place everyday objects. The core issue here is the grounding of referring expressions: inferring objects and their relationships from input images and language expressions. INGRESS allows for unconstrained object categories and unconstrained language expressions. Further, it asks questions to disambiguate referring expressions interactively. To achieve these, we take the approach of grounding by generation and propose a two-stage neural-network model for grounding. The first stage uses a neural network to generate visual descriptions of objects, compares them with the input language expression, and identifies a set of candidate objects. The second stage uses another neural network to examine all pairwise relations between the candidates and infers the most likely referred object. The same neural networks are used for both grounding and question generation for disambiguation. Experiments show that INGRESS outperformed a state-of-the-art method on the RefCOCO benchmark dataset and in robot experiments with humans.
[Full Paper]
Mohit Shridhar, David Hsu
Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations
Dexterous multi-fingered hands are extremely versatile and provide a generic way to perform multiple tasks in human-centric environments. However, effectively controlling them remains challenging due to their high dimensionality and large number of potential contacts. Deep reinforcement learning (DRL) provides a model-agnostic approach to control complex dynamical systems, but has not been shown to scale to high-dimensional dexterous manipulation. Furthermore, deployment of DRL on physical systems remains challenging due to sample inefficiency. The success of DRL in robotics has thus far been limited to simpler manipulators and tasks. In this work, we show that model-free DRL methods can effectively scale up to complex manipulation tasks with a high-dimensional 24-DoF hand, and solve them from scratch in simulated experiments. Furthermore, with the use of a small number of human demonstrations, the sample complexity can be significantly reduced, enabling learning in simulation with the equivalent of a few hours of robot experience. Incorporating demonstrations infuses human priors, resulting in robust policies with elegant, human-like movements. We demonstrate successful policies for multiple complex tasks: object relocation, in-hand manipulation, tool use, and door opening.
[Full Paper]
Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, Sergey Levine
Learning Task-Oriented Grasping for Tool Manipulation with Simulated Self-Supervision
Task-oriented tool use is a vital skill for achieving generalizable robot autonomy in manipulation. Tool use involves leveraging an object, not a part of the robot, to facilitate completing a task objective. This requires reasoning about the effect needed for the task, identifying a suitable tool, selecting a proper grasp for the tool, and manipulating the tool to achieve this effect. Task-agnostic grasping optimizes to satisfy stability constraints while ignoring task-specific constraints that are crucial for task completion. We present TGS^3, a jointly optimized algorithm to learn task-oriented grasping for tool-based manipulation. We propose a framework for generating large-scale simulated self-supervision with procedurally generated tools. Our model operates solely on noisy depth images, without prior knowledge of object geometry. We demonstrate TGS^3 on two tasks, hammering and sweeping. In comparison to using task-agnostic grasps, we show that our model improves task success by 59% for hammering and 80% for sweeping in simulated evaluation with novel objects. We also show generalization to a physical robot on the two tasks, evaluated using 12 unseen tools. Over 60 task trials per task, we observe success rates of 51% for hammering and 45% for sweeping. Supplementary material is available at: http://bit.ly/task-grasping.
[Full Paper]
Kuan Fang, Yuke Zhu, Animesh Garg, Andrey Kuryenkov, Viraj Mehta, Li Fei-Fei, Silvio Savarese
Lightweight Unsupervised Deep Loop Closure
Robust and efficient loop closure detection is essential for large-scale real-time SLAM. In this paper, we propose a novel unsupervised deep neural network architecture of a feature embedding for visual loop closure (or place recognition). Our model is built upon the autoencoder architecture, tailored specifically to the problem at hand. To train our network, we inflict random noise on our input data as the denoising autoencoder does, but, instead of directly altering pixel values, we apply randomized projective transformations to whole images to emulate natural viewpoint changes due to robot motion. Moreover, we utilize histogram of oriented gradients (HOG) descriptors for place recognition, forcing the encoder to reconstruct a HOG descriptor instead of the traditional flattened image or image patches. As a result, our trained model extracts features robust to extreme variations in appearance directly from raw images, without the need for labeled training data or environment-specific training. We perform extensive experiments on a variety of challenging datasets, showing that the proposed deep loop-closure model consistently outperforms the state-of-the-art methods in terms of effectiveness and efficiency.
[Full Paper]
Nathaniel Merrill, Guoquan Huang
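The training signal described above is easy to sketch: apply a random projective warp to an image to emulate a viewpoint change, then ask the encoder to reconstruct the HOG descriptor of the original. The toy sketch below, assuming scikit-image and PyTorch, uses illustrative network and warp sizes:

```python
# Viewpoint-change augmentation + HOG reconstruction target for an
# unsupervised loop-closure embedding (all sizes illustrative).
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import hog
from skimage.transform import ProjectiveTransform, warp

rng = np.random.default_rng(0)
img = rng.random((64, 64))                      # stand-in for a camera image

# Random small homography emulating a viewpoint change.
jitter = np.eye(3) + 0.001 * rng.standard_normal((3, 3))
warped = warp(img, ProjectiveTransform(matrix=jitter).inverse)

target = torch.tensor(hog(img, pixels_per_cell=(16, 16)), dtype=torch.float32)
x = torch.tensor(warped.ravel(), dtype=torch.float32)

encoder = nn.Sequential(nn.Linear(64 * 64, 256), nn.ReLU(),
                        nn.Linear(256, target.numel()))
loss = nn.functional.mse_loss(encoder(x), target)  # reconstruct original HOG
loss.backward()
print("descriptor size:", target.numel(), "loss:", float(loss))
```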
LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics
Human visual scene understanding is so remarkable that we are able to recognize a revisited place when entering it from the opposite direction it was first visited, even in the presence of extreme variations in appearance. This capability is especially apparent during driving: a human driver can recognize where they are when travelling in the reverse direction along a route for the first time, without having to turn back and look. The difficulty of this problem exceeds any addressed in past appearance- and viewpoint-invariant visual place recognition (VPR) research, in part because large parts of the scene are not commonly observable from opposite directions. Consequently, as shown in this paper, the precision-recall performance of current state-of-the-art viewpoint- and appearance-invariant VPR techniques is orders of magnitude below what would be usable in a closed-loop system. Current engineered solutions predominantly rely on panoramic camera or LIDAR sensing setups; an eminently suitable engineering solution, but one that is clearly very different from how humans navigate, which also has implications for how naturally humans could interact and communicate with the navigation system. In this paper, we develop a suite of novel semantic- and appearance-based techniques to enable for the first time high-performance place recognition in this challenging scenario. We first propose a novel Local Semantic Tensor (LoST) descriptor of images using the convolutional feature maps from a state-of-the-art dense semantic segmentation network. Then, to verify the spatial semantic arrangement of the top matching candidates, we develop a novel approach for mining semantically-salient keypoint correspondences. On publicly available benchmark datasets that involve both 180-degree viewpoint change and extreme appearance change, we show how meaningful recall at 100% precision can be achieved using our proposed system where existing systems often fail to ever reach 100% precision. We also present an analysis delving into the performance differences between a current system and the proposed one, and characterize unique properties of the opposite-direction localization problem, including the metric matching offset.
[Full Paper]
Sourav Garg, Niko Suenderhauf, Michael Milford
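For intuition, here is a loose numpy sketch of a LoST-style descriptor under assumed shapes: dense convolutional features are aggregated separately per semantic class, and the class-wise summaries are concatenated. The class set, pooling, and normalization are illustrative, not the paper's exact recipe.

```python
import numpy as np

def lost_style_descriptor(feature_maps, seg_labels, classes=(0, 1, 2)):
    """feature_maps: (C, H, W) conv activations; seg_labels: (H, W) class ids."""
    C, H, W = feature_maps.shape
    parts = []
    for c in classes:
        mask = (seg_labels == c)
        if mask.any():
            part = feature_maps[:, mask].mean(axis=1)  # pool over class pixels
        else:
            part = np.zeros(C)                         # class absent in image
        parts.append(part / (np.linalg.norm(part) + 1e-8))
    return np.concatenate(parts)

feats = np.random.rand(64, 48, 64)                     # fake conv feature maps
labels = np.random.randint(0, 3, size=(48, 64))        # fake segmentation
d = lost_style_descriptor(feats, labels)               # compare with cosine distance
```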
Multi-Objective Analysis of Ridesharing in Automated Mobility-on-Demand
Self-driving technology is expected to enable the realization of large-scale mobility-on-demand systems that employ massive ridesharing. The technology is being celebrated as a potential cure for urban congestion and other negative externalities of individual automobile transportation. In this paper we quantify the potential of ridesharing with a fleet of autonomous vehicles by considering all possible trade-offs between the quality of service and the operation cost of the system that can be achieved by sharing rides. We formulate a multi-objective fleet routing problem and present a solution technique that can compute Pareto-optimal fleet operation plans achieving different trade-offs between the two objectives. Given a set of requests and a set of vehicles, our method can recover a trade-off curve that quantifies the potential of ridesharing with a given fleet. We provide a formal optimality proof and demonstrate that the proposed method is scalable, optimally computing such trade-off curves for instances with hundreds of vehicles and requests. Such an analytical tool helps with the systematic design of shared mobility systems; in particular, it can be used to make principled decisions about the required fleet size.
[Full Paper]
Michal Cap, Javier Alonso-Mora
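To make the trade-off-curve output concrete, the toy sketch below brute-forces a tiny instance: it enumerates request-to-vehicle assignments and keeps the Pareto-optimal (operation cost, total delay) pairs. The costs are invented, and the paper's method scales to hundreds of vehicles where enumeration obviously cannot.

```python
from itertools import product

# invented travel cost and induced delay if request r is served by vehicle v
op_cost = {(0, 0): 4, (0, 1): 6, (1, 0): 5, (1, 1): 3, (2, 0): 7, (2, 1): 4}
delay   = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 2, (2, 0): 1, (2, 1): 3}

points = []
for plan in product([0, 1], repeat=3):          # vehicle choice per request
    c = sum(op_cost[(r, v)] for r, v in enumerate(plan))
    d = sum(delay[(r, v)] for r, v in enumerate(plan))
    points.append((c, d, plan))

pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q[:2] != p[:2]
                     for q in points)]
for c, d, plan in sorted(pareto):               # the recovered trade-off curve
    print(f"cost={c} delay={d} assignment={plan}")
```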
Near-Optimal Budgeted Data Exchange for Distributed Loop Closure Detection
Inter-robot loop closure detection is a core problem in collaborative SLAM (CSLAM). Establishing inter-robot loop closures is a resource-demanding process, during which robots must consume a substantial amount of mission-critical resources (e.g., battery and bandwidth) to exchange sensory data. However, even with the most resource-efficient techniques, the resources available onboard may be insufficient for verifying every potential loop closure. This work addresses this critical challenge by proposing a resource-adaptive framework for distributed loop closure detection. We seek to maximize task-oriented objectives subject to a budget constraint on total data transmission. This problem is in general NP-hard. We approach this problem from different perspectives and leverage existing results on monotone submodular maximization to provide efficient approximation algorithms with performance guarantees. The proposed approach is extensively evaluated using the KITTI odometry benchmark dataset and synthetic Manhattan-like datasets.
[Full Paper]
Yulun Tian, Kasra Khosoussi, Matthew Giamou, Jonathan How, Jonathan Kelly
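A generic cost-benefit greedy routine for monotone submodular maximization under a budget conveys the flavor of the guarantees leveraged above. The candidate set and the coverage-style value function are stand-ins; the paper's objectives and algorithms differ in their details.

```python
def greedy_budgeted(candidates, value, cost, budget):
    """Greedily add the candidate with the best marginal-gain-per-cost ratio."""
    chosen, spent = [], 0.0
    remaining = set(candidates)
    while remaining:
        best = max(remaining,
                   key=lambda c: (value(chosen + [c]) - value(chosen)) / cost(c))
        if spent + cost(best) > budget or value(chosen + [best]) <= value(chosen):
            break
        chosen.append(best); spent += cost(best); remaining.remove(best)
    return chosen

# toy instance: each data item covers some potential loop closures,
# transmitting it costs bandwidth proportional to its size
covers = {"scan_a": {1, 2}, "scan_b": {2, 3, 4}, "scan_c": {5}}
val = lambda S: len(set().union(*[covers[s] for s in S])) if S else 0
picked = greedy_budgeted(covers, val, lambda s: len(covers[s]), budget=4)
print(picked)
```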
On the interaction between Autonomous Mobility-on-Demand systems and the power network: models and coordination algorithms
This paper studies the interaction between a fleet of electric, self-driving vehicles servicing on-demand transportation requests (referred to as Autonomous Mobility-on-Demand, or AMoD, system) and the electric power network. We propose a joint linear model that captures the coupling between the two systems stemming from the vehicles’ charging requirements. The model subsumes existing network flow models for AMoD systems and DC models for the power network, and it captures time-varying customer demand and power generation costs, road congestion, and power transmission and distribution constraints. We then leverage the model to jointly optimize the operation of both systems. We devise an algorithmic procedure to losslessly reduce the problem size by bundling customer requests, allowing it to be efficiently solved by off-the-shelf linear programming solvers. We then study the implementation of a hypothetical electric-powered AMoD system in Dallas-Fort Worth, and its impact on the Texas power network. We show that coordination between the AMoD system and the power network can reduce the overall energy expenditure compared to the case where no cars are present (despite the increased demand for electricity) and yield savings of $182M/year compared to an uncoordinated scenario. Finally, we provide a closed-loop receding-horizon implementation. Collectively, the results of this paper provide a first-of-a-kind characterization of the interaction between electric-powered AMoD systems and the power network, and shed additional light on the economic and societal value of AMoD.
[Full Paper]
Federico Rossi, Ramon Iglesias, Mahnoosh Alizadeh, Marco Pavone
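A minimal linear-programming sketch of the coupling idea, using scipy: schedule fleet charging across time slots against time-varying generation prices, under per-slot capacity limits. The paper's joint model is far richer (road congestion, DC power flow, customer demand); all numbers here are invented.

```python
import numpy as np
from scipy.optimize import linprog

prices = np.array([30.0, 18.0, 25.0])   # $/MWh in three time slots (invented)
need = 4.0                               # MWh the fleet must charge in total

res = linprog(c=prices,
              A_eq=[[1, 1, 1]], b_eq=[need],   # meet total charging demand
              bounds=[(0, 2.0)] * 3)           # per-slot grid/charger capacity
print(res.x)  # charging shifts to the cheap slots: [0., 2., 2.]
```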
One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning
Humans and animals are capable of learning a new behavior by observing others perform the skill just once. We consider the problem of allowing a robot to do the same -- learning from the raw video pixels of a human, even when there is substantial domain shift in the perspective, environment, and embodiment between the robot and the observed human. Prior approaches to this problem have hand-specified how human and robot actions correspond, and often relied on explicit human pose detection systems. In this work, we present an approach for one-shot learning from a video of a human by using human and robot demonstration data from a variety of previous tasks to build up prior knowledge through meta-learning. Then, by combining this prior knowledge with only a single video demonstration from a human, the robot can perform the task that the human demonstrated. We show experiments on both a PR2 arm and a Sawyer arm, demonstrating that after meta-learning, the robot can learn to place, push, and pick-and-place new objects using just one video of a human performing the manipulation.
[Full Paper]
Tianhe Yu, Chelsea Finn, Sudeep Dasari, Annie Xie, Tianhao Zhang, Pieter Abbeel, Sergey Levine
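The following PyTorch sketch conveys the meta-learning recipe in first-order form: adapt the policy on a human video using a learned adaptation loss, then update the meta-parameters so the adapted policy imitates the paired robot demonstration. All modules, dimensions, and losses are placeholders rather than the paper's objectives.

```python
import torch
import torch.nn as nn

policy = nn.Linear(10, 4)                # stand-in policy: features -> action
adapt_loss_net = nn.Linear(4, 1)         # learned loss used at adaptation time
meta_opt = torch.optim.Adam(list(policy.parameters()) +
                            list(adapt_loss_net.parameters()), lr=1e-3)

human_obs = torch.randn(8, 10)                        # frames of the human video
robot_obs, robot_act = torch.randn(8, 10), torch.randn(8, 4)  # paired robot demo

# inner step: adapt on the human video using the learned loss (no action labels)
inner = adapt_loss_net(policy(human_obs)).mean()
grads = torch.autograd.grad(inner, list(policy.parameters()), create_graph=True)
adapted = [p - 0.01 * g for p, g in zip(policy.parameters(), grads)]

# outer step: the adapted policy should match the robot demonstration
w, b = adapted
outer = ((robot_obs @ w.t() + b - robot_act) ** 2).mean()
meta_opt.zero_grad(); outer.backward(); meta_opt.step()
```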
Online User Assessment for Minimal Intervention During Task-Based Robotic Assistance
We propose a novel criterion for evaluating user input for human-robot interfaces for known tasks. We use the mode insertion gradient (MIG)---a tool from hybrid control theory---as a filtering criterion that instantaneously assesses the impact of user actions on a dynamic system over a time window into the future. As a result, the filter is permissive to many chosen strategies, minimally engaging, and skill-sensitive---qualities desired when evaluating human actions. Through a human study with 28 healthy volunteers, we show that the criterion exhibits a low, but significant, negative correlation (r=-0.24) between skill level, as estimated from task-specific measures in unassisted trials, and the rate of controller intervention during assistance. Moreover, a MIG-based filter can be utilized to create a shared control scheme for training or assistance. In the human study, we observe a substantial training effect when using a MIG-based filter to perform cart-pendulum inversion, particularly when comparing improvement via the RMS error measure. Using simulation of a controlled spring-loaded inverted pendulum (SLIP) as a test case, we extend our results to show that the MIG criterion could be used for assistance to guarantee either task completion or safety of a joint human-robot system, while maintaining its flexibility with respect to a user-chosen strategy.
[Full Paper]
Aleksandra Kalinowska, Kathleen Fitzsimons, Julius Dewald, Todd Murphey
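A schematic shared-control filter in the spirit of the criterion above: simulate the effect of the user's action over a short time window and pass it through only if a rollout-cost test (a crude stand-in for the mode insertion gradient) does not flag it. The dynamics and cost are toy choices, not the paper's hybrid-control machinery.

```python
import numpy as np

def rollout_cost(x, u, horizon=20, dt=0.05):
    """Toy pendulum-like cost of applying u now, then coasting over a window."""
    theta, omega = x
    for _ in range(horizon):
        omega += (np.sin(theta) + u) * dt   # crude pendulum step
        theta += omega * dt
        u = 0.0                             # the action acts only at the first step
    return theta ** 2 + 0.1 * omega ** 2

def filter_action(x, user_u, assist_u, tol=0.05):
    """Permissive filter: keep the user's input unless it is clearly worse."""
    if rollout_cost(x, user_u) <= rollout_cost(x, assist_u) + tol:
        return user_u
    return assist_u

x = np.array([0.4, 0.0])
print(filter_action(x, user_u=-0.3, assist_u=-0.5))
```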
Optimal Solution of the Generalized Dubins Interval Problem
The problem addressed in this paper is motivated by surveillance mission planning with curvature-constrained trajectories for Dubins vehicles, which can be formulated as the Dubins Traveling Salesman Problem with Neighborhoods (DTSPN). We aim to provide a tight lower bound on the DTSPN, especially for the cases where the sequence of visits to the given regions is available. We introduce the problem of finding the shortest Dubins path connecting two regions with prescribed intervals of possible departure and arrival heading angles of the vehicle. This new problem is called the Generalized Dubins Interval Problem (GDIP), and we address its optimal solution. Based on the solution of the GDIP, a tight lower bound on the above-mentioned DTSPN is provided, which is further utilized in a sampling-based solution of the DTSPN to determine a feasible solution that is close to the optimum.
[Full Paper]
Petr Váňa, Jan Faigl
Passive Static Equilibrium with Frictional Contacts and Application to Grasp Stability Analysis
This paper studies the problem of passive grasp stability under an external disturbance, that is, the ability of a grasp to resist a disturbance based on passive responses at the contacts. To obtain physically consistent results, such a model must account for friction phenomena at each contact, made more difficult by the fact that friction forces depend in a non-linear fashion on contact behavior (stick or slip). We introduce the first polynomial-time algorithm that can solve such complex equilibrium constraints for two-dimensional grasps, or guarantee that no solution exists. Our algorithm captures passive response behaviors at each contact, while accounting for constraints that govern friction forces, such as the maximum dissipation principle.
[Full Paper]
Maximilian Haas-Heger, Matei Ciocarlie, Christos Papadimitriou, Mihalis Yannakakis, Garud Iyengar
PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes
Estimating the 6D pose of known objects is important for robots to interact with objects in the real world. The problem is challenging due to the variety of objects as well as the complexity of the scene caused by clutter and occlusion between objects. In this work, we introduce a new Convolutional Neural Network (CNN) for 6D object pose estimation named PoseCNN. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. PoseCNN is able to handle symmetric objects and is also robust to occlusion between objects. In addition, we contribute a large scale video dataset for 6D object pose estimation named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. We conduct experiments on our YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN provides very good estimates using only color as input.
[Full Paper]
Yu Xiang, Tanner Schmidt, Venkatraman Narayanan, Dieter Fox
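One ingredient that lends itself to a short sketch is rotation regression with a quaternion distance loss; the variant below (1 - |q1·q2|) is a common choice that is invariant to the quaternion sign ambiguity. PoseCNN's actual losses additionally handle object symmetries, which is omitted here.

```python
import torch

def quat_loss(q_pred, q_gt):
    """Distance between unit quaternions, insensitive to the q vs. -q ambiguity."""
    q_pred = q_pred / q_pred.norm(dim=-1, keepdim=True)  # normalize raw output
    return 1.0 - (q_pred * q_gt).sum(dim=-1).abs().mean()

q_gt = torch.tensor([[0.0, 0.0, 0.0, 1.0]])              # ground-truth rotation
q_raw = torch.tensor([[0.05, -0.02, 0.01, 0.99]], requires_grad=True)
loss = quat_loss(q_raw, q_gt)
loss.backward()           # gradients flow back to the network's rotation head
print(loss.item())
```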
Predicting Human Trust in Robot Capabilities across Tasks
Trust plays a significant role in shaping our interactions with one another and with automation. In this work, we investigate how humans transfer or generalize trust in robot capabilities across tasks, even with limited observations. We first present results from a human-subjects study using a real-world Fetch robot performing household tasks, and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers. Based on our findings, we adopt a functional view of trust and develop two novel predictive models that capture trust evolution and transfer across tasks. Empirical results show our models---a neural network comprising Gated Recurrent Units as memory, and a Bayesian Gaussian process---to outperform existing models when predicting trust for previously unseen participants and tasks.
[Full Paper]
Harold Soh, Shu Pan, Chen Min, David Hsu
Probabilistically Safe Robot Planning with Confidence-Based Human Predictions
In order to safely operate around humans, robots can employ predictive models of human motion. Unfortunately, these models cannot capture the full complexity of human behavior and necessarily introduce simplifying assumptions. As a result, predictions may degrade whenever the observed human behavior departs from the assumed structure, which can have negative implications for safety. In this paper, we observe that how “rational” human actions appear under a particular model can be viewed as an indicator of that model’s ability to describe the human’s current motion. By reasoning about this model confidence in a real-time Bayesian framework, we show that the robot can very quickly modulate its predictions to become more uncertain when the model performs poorly. Building on recent work in provably-safe trajectory planning, we leverage these confidence-aware human motion predictions to generate assured autonomous robot motion. Our new analysis combines worst-case tracking error guarantees for the physical robot with probabilistic time-varying human predictions, yielding a quantitative, probabilistic safety certificate. We demonstrate our approach with a quadcopter navigating around a human.
[Full Paper]
Jaime F. Fisac, Andrea Bajcsy, Sylvia L. Herbert, David Fridovich-Keil, Steven Wang, Claire J. Tomlin, Anca D. Dragan
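A minimal Bayesian sketch of the model-confidence idea: maintain a posterior over a rationality parameter beta in a Boltzmann model of human actions; observing actions the model considers improbable shifts mass toward low beta, which would in turn inflate the prediction uncertainty. The grid values and utilities are invented.

```python
import numpy as np

betas = np.array([0.1, 1.0, 10.0])       # candidate model-confidence levels
prior = np.ones(3) / 3

def action_likelihood(u, beta, utilities):
    """Boltzmann-rational human model: P(u) proportional to exp(beta * U(u))."""
    p = np.exp(beta * utilities)
    return (p / p.sum())[u]

utilities = np.array([1.0, 0.2, -0.5])   # model's utility for 3 candidate actions
observed_u = 2                           # human picked the "irrational" action

post = prior * np.array([action_likelihood(observed_u, b, utilities)
                         for b in betas])
post /= post.sum()
print(post)                              # mass moves toward beta = 0.1 (low confidence)
```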
Push-Net: Deep Planar Pushing for Objects with Unknown Physical Properties
This paper introduces Push-Net, a deep neural network model, which can push novel objects of unknown physical properties for the purpose of re-positioning or re-orientation. Pushing is a challenging problem as its outcome depends on numerous unknown physical properties of the object and the environment. What enables us humans to effectively push objects with unknown physical properties is our ability to reason over the history of push interactions with the current and previous objects. This motivates us to design Push-Net to incorporate the history of push interactions using a recurrent neural network. In addition, we embed physics knowledge into Push-Net as constraints during training. This helps to select better actions to push objects more effectively. Push-Net was trained using only simulation data. We performed extensive simulation and real robotic experiments. Push-Net achieves over a 99% average success rate in simulation and over 98% in real robotic experiments. The results also show the superiority of Push-Net over the baselines in terms of robustness and efficiency.
[Full Paper]
Juekun Li, Wee Sun Lee, David Hsu
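Schematically, the history-dependent design can be pictured as an LSTM that consumes the sequence of (push action, observed object motion) pairs, so that a candidate next push can be scored conditioned on how this object has responded so far. The module below is a toy stand-in with invented dimensions, not the published architecture.

```python
import torch
import torch.nn as nn

class PushHistoryNet(nn.Module):
    def __init__(self, act_dim=4, motion_dim=3, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(act_dim + motion_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden + act_dim, 1)  # value of a candidate push

    def forward(self, history, candidate):
        _, (h, _) = self.rnn(history)                # summarize past interactions
        return self.score(torch.cat([h[-1], candidate], dim=-1))

net = PushHistoryNet()
history = torch.randn(1, 5, 7)        # 5 past (action, motion) pairs
candidates = torch.randn(8, 4)        # 8 candidate pushes to rank
scores = torch.stack([net(history, c[None]) for c in candidates])
best = candidates[scores.argmax()]    # push chosen given this object's history
```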
Reinforcement and Imitation Learning for Diverse Visuomotor Skills
We propose a general model-free deep reinforcement learning method and apply it to robotic manipulation tasks. Our approach leverages a small amount of demonstration data to assist a reinforcement learning agent. We train end-to-end visuomotor policies to learn a direct mapping from RGB camera inputs to joint velocities. We demonstrate that the same agent, trained with the same algorithm, can solve a wide variety of visuomotor tasks, where engineering a scripted controller would be laborious. Our experiments indicate that our reinforcement and imitation agent achieves significantly better performance than agents trained with reinforcement learning or imitation learning alone. We also illustrate that these policies, trained with large visual and dynamics variations, achieve preliminary successes in zero-shot sim2real transfer.
[Full Paper]
Yuke Zhu, Ziyu Wang, Josh Merel, Andrei Rusu, Tom Erez, Serkan Cabi, Saran Tunyasuvunakool, János Kramár, Raia Hadsell, Nando de Freitas, Nicolas Heess
RelaxedIK: Real-time Synthesis of Accurate and Feasible Robot Arm Motion
We present a real-time motion-synthesis method for robot manipulators, called RelaxedIK, that is able to not only accurately match end-effector pose goals, as done by traditional IK solvers, but also create smooth, feasible motions that avoid joint-space discontinuities, self-collisions, and kinematic singularities. To achieve these objectives on-the-fly, we recast the standard IK formulation as a weighted-sum non-linear optimization problem, such that motion goals in addition to end-effector pose matching can be encoded as terms in the sum. We present a normalization procedure such that our method is able to effectively make trade-offs to simultaneously reconcile many, potentially competing, objectives. Using these trade-offs, our formulation allows features to be relaxed when in conflict with other features deemed more important at a given time. We compare performance against a state-of-the-art IK solver and a real-time motion-planning approach in several geometric and real-world tasks on seven robot platforms ranging from 5-DOF to 8-DOF. We show that our method achieves motions that effectively follow position and orientation end-effector goals without sacrificing motion feasibility, resulting in more successful execution of tasks than the baseline approaches.
[Full Paper]
Daniel Rakita, Bilge Mutlu, Michael Gleicher
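A stripped-down sketch of a weighted-sum IK objective for a planar 3-link arm, minimized with scipy: each motion goal contributes a term, and the weights trade them off. The real system uses a normalization procedure and carefully shaped loss functions; the terms and weights below are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fk(q, link=1.0):
    """Planar 3R forward kinematics: end-effector position from joint angles."""
    angles = np.cumsum(q)
    return link * np.array([np.cos(angles).sum(), np.sin(angles).sum()])

goal, q_prev = np.array([1.5, 1.0]), np.zeros(3)

def objective(q):
    ee_term     = np.sum((fk(q) - goal) ** 2)                  # pose matching
    smooth_term = np.sum((q - q_prev) ** 2)                    # avoid joint jumps
    limit_term  = np.sum(np.maximum(np.abs(q) - 2.5, 0) ** 2)  # soft joint limits
    return 10.0 * ee_term + 1.0 * smooth_term + 5.0 * limit_term

q_star = minimize(objective, q_prev, method="BFGS").x
print(q_star, fk(q_star))
```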
Robust Obstacle Avoidance using Tube NMPC
This work considers the problem of avoiding obstacles for general nonlinear systems subject to disturbances. Obstacle avoidance is achieved by computing disturbance invariant sets along a nominal trajectory and ensuring these invariant sets do not intersect with obstacles. We develop a novel technique to compute approximate disturbance invariant sets for general nonlinear systems using a set of finite-dimensional optimizations. A bi-level NMPC optimization strategy alternates between optimizing over the nominal trajectory and finding the disturbance invariant sets. Simulation results show that the proposed algorithm is able to generate disturbance invariant sets for standard 3D aerial and planar ground vehicle models, and the NMPC algorithm successfully computes obstacle avoidance trajectories using the disturbance invariant sets.
[Full Paper]
Gowtham Garimella, Matthew Sheckells, Joseph Moore, Marin Kobilarov
Robust Sampling Based Model Predictive Control with Sparse Objective Information
We present an algorithmic framework for stochastic model predictive control that is able to optimize non-linear systems with cost functions that have sparse, discontinuous gradient information. The proposed framework combines the benefits of sampling-based model predictive control with linearization-based trajectory optimization methods. The resulting algorithm consists of a novel utilization of Tube-based model predictive control. We demonstrate robust algorithmic performance on a variety of simulated tasks, and on a real-world fast autonomous driving task.
[Full Paper]
Grady Williams, Brian Goldfain, Paul Drews, Kamil Saigol, James Rehg, Evangelos Theodorou
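For orientation, here is a bare-bones sampling-based MPC update (MPPI-flavored): sample perturbed control sequences, roll them out through the dynamics, and exponentially reweight the perturbations by cost. The paper's contribution layers tube-based tracking and linearization-based optimization on top of this idea, none of which is shown here.

```python
import numpy as np

def mppi_step(x0, u_nom, dynamics, cost, K=256, sigma=0.5, lam=1.0):
    """One sampling-based MPC update of the nominal control sequence."""
    T = len(u_nom)
    noise = np.random.randn(K, T) * sigma
    costs = np.zeros(K)
    for k in range(K):
        x = x0
        for t in range(T):
            x = dynamics(x, u_nom[t] + noise[k, t])
            costs[k] += cost(x)
    w = np.exp(-(costs - costs.min()) / lam)   # exponential cost weighting
    w /= w.sum()
    return u_nom + w @ noise                   # weighted perturbation update

dyn = lambda x, u: x + 0.1 * u                 # toy scalar integrator
cst = lambda x: (x - 1.0) ** 2                 # drive the state to 1
u = np.zeros(10)
for _ in range(20):
    u = mppi_step(0.0, u, dyn, cst)
print(u)
```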
Safe Motion Planning in Unknown Environments: Optimality Benchmarks and Tractable Policies
This paper addresses the problem of planning a safe (i.e., collision-free) trajectory from an initial state to a goal region when the obstacle space is a priori unknown and is incrementally revealed online, e.g., through line-of-sight perception. Despite its ubiquitous nature, this formulation of motion planning has received relatively little theoretical investigation, as opposed to the setup where the environment is assumed known. A fundamental challenge is that, unlike motion planning with known obstacles, it is not even clear what an optimal policy to strive for is. Our contribution is threefold. First, we present a notion of optimality for safe planning in unknown environments in the spirit of comparative (as opposed to competitive) analysis, with the goal of obtaining a benchmark that is, at least conceptually, attainable. Second, by leveraging this theoretical benchmark, we derive a pseudo-optimal class of policies that can seamlessly incorporate any amount of prior or learned information while still guaranteeing the robot never collides. Finally, we demonstrate the practicality of our algorithmic approach in numerical experiments using a range of environment types and dynamics, including a comparison with a state-of-the-art method. A key aspect of our framework is that it automatically and implicitly weighs exploration versus exploitation in a way that is optimal with respect to the information available.
[Full Paper]
Lucas Janson, Tommy Hu, Marco Pavone
Sampling-Based Approximation Algorithms for Reachability Analysis with Provable Guarantees
The successful deployment of many autonomous systems in part hinges on providing rigorous guarantees on their performance and safety through a formal verification method, such as reachability analysis. In this work, we present a simple-to-implement, sampling-based algorithm for reachability analysis that is provably optimal up to any desired approximation accuracy. Our method achieves computational efficiency by judiciously sampling a finite subset of the state space and generating an approximate reachable set by conducting reachability analysis on this finite set of states. We prove that the reachable set generated by our algorithm approximates the ground-truth reachable set for any user-specified approximation accuracy and analyze the computational complexity of our approximation scheme. As a corollary to our main method, we introduce an asymptotically-optimal, anytime algorithm for reachability analysis. We present simulation results that reaffirm the theoretical properties of our algorithm and demonstrate its effectiveness in real-world inspired scenarios.
[Full Paper]
Lucas Liebenwein, Cenk Baykal, Igor Gilitschenski, Sertac Karaman, Daniela Rus
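Conceptually, the scheme can be sketched as: sample the initial set, propagate each sample through the dynamics, and take the union of balls around the propagated samples as the approximate reachable set. The ball radius certifying a given accuracy comes from the paper's analysis; below it is just a fixed number.

```python
import numpy as np

def approx_reachable(samples, dynamics, steps, eps):
    """Propagate sampled states; the reachable set is the union of eps-balls."""
    pts = np.array(samples, dtype=float)
    for _ in range(steps):
        pts = np.array([dynamics(p) for p in pts])
    return pts, eps

dyn = lambda p: p + 0.1 * np.array([-p[1], p[0]])   # toy rotational vector field
init = [np.array([x, y]) for x in np.linspace(0.9, 1.1, 5)
                          for y in np.linspace(-0.1, 0.1, 5)]
centers, radius = approx_reachable(init, dyn, steps=10, eps=0.05)

def in_reachable(q):
    """Membership query against the ball-union approximation."""
    return np.min(np.linalg.norm(centers - q, axis=1)) <= radius

print(in_reachable(np.array([0.5, 0.9])))
```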
SegMap: 3D Segment Mapping using Data-Driven Descriptors
When performing localization and mapping, working at the level of structure can be advantageous in terms of robustness to environmental changes and differences in illumination. This paper presents SegMap: a map representation solution to the localization and mapping problem based on the extraction of segments in 3D point clouds. In addition to facilitating the computationally intensive task of processing 3D point clouds, working at the level of segments addresses the data compression requirements of real-time single- and multi-robot systems. While current methods extract descriptors for the single task of localization, SegMap leverages a data-driven descriptor in order to extract meaningful features that can also be used for reconstructing a dense map of the environment and for extracting semantic information. This is particularly interesting for navigation tasks and for providing visual feedback to end-users such as robot operators, for example in search and rescue scenarios. These capabilities are demonstrated in multiple urban driving and search and rescue experiments. Our method leads to an increase in area under the ROC curve of 28.3% over the current state of the art using eigenvalue-based features. We also obtain reconstruction capabilities very similar to those of a model trained for this specific task. The SegMap implementation will be made available open-source along with easy-to-run demonstrations.
[Full Paper]
Renaud Dubé, Andrei Cramariuc, Daniel Dugas, Juan Nieto, Roland Siegwart, Cesar Cadena
Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications
Oftentimes, natural language commands issued to robots not only specify a particular target configuration or goal state but also outline constraints on how the robot goes about its execution. That is, the path taken to achieving some goal state is given equal importance to the goal state itself. One example of this could be instructing a wheeled robot to ``go to the living room but avoid the kitchen,'' in order to avoid scuffing the floor. This class of behaviors poses a serious obstacle to existing language-understanding approaches for robotics that map to either action sequences or goal state representations. Due to the non-Markovian nature of the objective, approaches in the former category must map to potentially unbounded action sequences, whereas approaches in the latter category would require folding the entirety of a robot's trajectory into a (traditionally Markovian) state representation, resulting in an intractable decision-making problem. To resolve this challenge, we use a recently introduced, probabilistic variant of the classic Linear Temporal Logic (LTL) as a goal specification language for a Markov Decision Process (MDP). While demonstrating that standard neural sequence-to-sequence learning models can successfully ground language to this semantic representation, we also provide analysis that highlights generalization to novel, unseen logical forms as an open problem for this class of model. We evaluate our system within two simulated robot domains as well as on a physical robot, demonstrating accurate language grounding alongside a significant expansion in the space of interpretable robot behaviors.
[Full Paper]
Nakul Gopalan, Dilip Arumugam, Lawson Wong, Stefanie Tellex
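A tiny example makes the non-Markovian nature of such commands concrete: whether "avoid the kitchen until reaching the living room" is satisfied depends on the whole path, not the current state, so a grounded LTL-style formula must be checked over the trajectory. The room names are illustrative.

```python
def satisfies_avoid_until(path, avoid="kitchen", goal="living_room"):
    """Strong-until semantics: the goal must be reached before the avoided room."""
    for room in path:
        if room == goal:
            return True
        if room == avoid:
            return False
    return False

print(satisfies_avoid_until(["hall", "study", "living_room"]))    # True
print(satisfies_avoid_until(["hall", "kitchen", "living_room"]))  # False
```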
Shared Autonomy via Deep Reinforcement Learning
In shared autonomy, user input is combined with semi-autonomous control to achieve a common goal. The goal is often unknown ex-ante, so prior work enables agents to infer the goal from user input and assist with the task. Such methods tend to assume some combination of knowledge of the dynamics of the environment, the user's policy given their goal, and the set of possible goals the user might target, which limits their application to real-world scenarios. We propose a deep reinforcement learning framework for model-free shared autonomy that lifts these assumptions. The key idea is using human-in-the-loop reinforcement learning with neural network function approximation to learn an end-to-end mapping from environmental observation and user input to agent action, with task reward as the only form of supervision. Controlled studies with users (n = 16) and synthetic pilots playing a video game and flying a real quadrotor demonstrate the ability of our algorithm to assist users with real-time control tasks in which the agent cannot directly access the user's private information through observations, but receives a reward signal and user input that both depend on the user's intent. The assistive agent learns to assist the user without access to this private information, implicitly inferring it from the user's input. This allows the assisted user to complete the task more effectively than the user or an autonomous agent could on their own. This paper is a proof of concept that illustrates the potential for deep reinforcement learning to enable flexible and practical assistive systems.
[Full Paper]
Siddharth Reddy, Anca Dragan, Sergey Levine
Sim-to-Real: Learning Agile Locomotion For Quadruped Robots
Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open-loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model, and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations, and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.
[Full Paper]
Jie Tan, Tingnan Zhang, Erwin Coumans, Atil Iscen, Yunfei Bai, Danijar Hafner, Steven Bohez, Vincent Vanhoucke
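One of the reality-gap ingredients, randomizing the physical environment, is easy to sketch: each training episode draws fresh physics parameters for the simulator. The parameter names, ranges, and the commented-out env constructor below are hypothetical, not the authors' configuration.

```python
import numpy as np

def sample_physics():
    """Draw a random perturbation of the simulated physics for one episode."""
    return {
        "mass_scale":     np.random.uniform(0.8, 1.2),   # body mass multiplier
        "friction":       np.random.uniform(0.5, 1.25),  # foot-ground friction
        "motor_strength": np.random.uniform(0.8, 1.2),
        "latency_s":      np.random.uniform(0.0, 0.04),  # simulated control latency
        "push_force_N":   np.random.uniform(0.0, 20.0),  # external perturbation
    }

for episode in range(3):
    params = sample_physics()
    # env = make_quadruped_env(**params)   # hypothetical simulator constructor
    print(params)
```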
Simplifying Reward Design through Divide-and-Conquer
Designing a good reward function is essential to robot planning and reinforcement learning; however, it can be both challenging and frustrating. The reward needs to work across multiple different environments, and that often requires many iterations of tuning. We introduce a novel divide-and-conquer approach that enables the designer to specify a reward separately for each environment. By treating these separate reward functions as observations about the underlying true reward, we derive an approach to infer a common reward across all environments. We conduct user studies in an abstract grid world domain and a motion planning domain for a 7-DOF manipulator that measure user effort and solution quality. We show that our method is faster, easier to use, and produces a higher-quality solution than the typical method of designing a reward jointly across all environments. We additionally conduct a series of experiments that measure the sensitivity of these results to different properties of the reward design task, such as the number of environments, the number of feasible solutions per environment, and the fraction of the total features that vary within each environment. We find that independent reward design compares favorably with the standard, joint, reward design process but works best when the design problem can be divided into simpler subproblems.
[Full Paper]
Ellis Ratner, Dylan Hadfield-Menell, Anca Dragan
SurfelWarp: Efficient Non-Volumetric Single View Dynamic Reconstruction
We contribute a dense SLAM system that takes a live stream of depth images as input and reconstructs non-rigid deforming scenes in real time, without templates or prior models. In contrast to existing approaches, we do not maintain any volumetric data structures, such as truncated signed distance function (TSDF) fields or deformation fields, which are performance and memory intensive. Our system works with a flat point (surfel) based representation of geometry, which can be directly acquired from commodity depth sensors. Standard graphics pipelines and general-purpose GPU (GPGPU) computing are leveraged for all central operations: i.e., nearest neighbor maintenance, non-rigid deformation field estimation, and fusion of depth measurements. Our pipeline inherently avoids expensive volumetric operations such as marching cubes, volumetric fusion, and dense deformation field update, leading to significantly improved performance. Furthermore, the explicit and flexible surfel-based geometry representation enables efficient tackling of topology changes and tracking failures, which keeps our reconstructions consistent with updated depth observations. Our system allows robots to maintain a scene description with non-rigidly deformed objects, which potentially enables interaction with dynamic working environments.
[Full Paper]
Wei Gao, Russ Tedrake
The Critical Radius in Sampling-based Motion Planning
We develop a new analysis of sampling-based motion planning with uniform random sampling, which significantly improves upon the celebrated result of Karaman and Frazzoli (2011) and subsequent work. Particularly, we prove the existence of a critical connection radius proportional to $\Theta(n^{-1/d})$ for $n$ samples and $d$ dimensions: Below this value the planner is guaranteed to fail (similarly shown by the aforementioned work, ibid.). More importantly, for larger radius values the planner is asymptotically (near-)optimal (AO). Furthermore, our analysis yields an explicit lower bound of $1-O(n^{-1})$ on the probability of success. A practical implication of our work is that AO is achieved when each sample is connected to only $\Theta(1)$ neighbors. This is in stark contrast to previous work which requires $\Theta(\log n)$ connections, that are induced by a radius of order $\left(\frac{\log n}{n}\right)^{1/d}$. Our analysis is not restricted to PRM and applies to a variety of "PRM-based" planners, including RRG, FMT$^*$ and BTT. Continuum percolation plays an important role in our proofs.
[Full Paper]
Kiril Solovey, Michal Kleinbort
Toward Specification-Guided Active Mars Exploration for Cooperative Robot Teams
As a step towards achieving autonomy in space exploration missions, we consider a cooperative robotics system consisting of a copter and a rover. The goal of the copter is to explore an unknown environment so as to maximize knowledge about a science mission expressed in linear temporal logic that is to be executed by the rover. We model environmental uncertainty as a belief space Markov decision process and formulate the problem as a two-step stochastic dynamic program that we solve in a way that leverages the decomposed nature of the overall system. We demonstrate in simulations that the robot team makes intelligent decisions in the face of uncertainty.
[Full Paper]
Petter Nilsson, Sofie Haesaert, Rohan Thakker, Kyohei Otsu, Cristian-Ioan Vasile, Ali-akbar Agha-mohammadi, Richard M. Murray, and Aaron D. Ames
Trajectory Optimization On Manifolds with Applications to SO(3) and R3×S2
Manifolds are used in almost all robotics applications, even if they are not explicitly modeled. We propose a differential geometric approach for optimizing trajectories on a Riemannian manifold with obstacles. The optimization problem depends on a metric and a collision function specific to a manifold. We then propose our Safe Corridor on Manifolds (SCM) method for computationally optimizing trajectories for robotics applications via a constrained optimization problem. Our method does not need equality constraints, which eliminates the need to project back to a feasible manifold during optimization. We then demonstrate how this algorithm works on an example problem on SO(3) and on a perception-aware planning example for visual-inertially guided robots navigating in 3 dimensions. Formulating field-of-view constraints naturally results in modeling with the manifold R3×S2, which cannot be modeled as a Lie group.
[Full Paper]
Michael Watterson, Sikang Liu, Ke Sun, Trey Smith, Vijay Kumar
View Selection with Geometric Uncertainty Modeling
Estimating positions of world points from features observed in images is a key problem in 3D reconstruction, image mosaicking, simultaneous localization and mapping, and structure from motion. We consider a special instance in which there is a dominant ground plane $\mathcal{G}$ viewed from a parallel viewing plane $\mathcal{S}$ above it. Such instances commonly arise, for example, in aerial photography. Consider a world point $g \in \mathcal{G}$ and its worst-case reconstruction uncertainty $\varepsilon(g,\mathcal{S})$ obtained by merging \emph{all} possible views of $g$ chosen from $\mathcal{S}$. We first show that one can pick two views $s_p$ and $s_q$ such that the uncertainty $\varepsilon(g,\{s_p,s_q\})$ obtained using only these two views is almost as good as (i.e., within a small constant factor of) $\varepsilon(g,\mathcal{S})$. Next, we extend the result to the entire ground plane $\mathcal{G}$ and show that one can pick a small subset $\mathcal{S}' \subseteq \mathcal{S}$ (which grows only linearly with the area of $\mathcal{G}$) and still obtain a constant-factor approximation, for every point $g \in \mathcal{G}$, to the minimum worst-case estimate obtained by merging all views in $\mathcal{S}$. Finally, we present a multi-resolution view selection method which extends our techniques to non-planar scenes. We show that the method can produce rich and accurate dense reconstructions with a small number of views. Our results provide a view selection mechanism with provable performance guarantees which can drastically increase the speed of scene reconstruction algorithms. In addition to theoretical results, we demonstrate their effectiveness in an application where aerial imagery is used for monitoring farms and orchards.
[Full Paper]
Cheng Peng, Volkan Isler