This research seeks to enable human-robot collaborative exploration of a priori unknown environments using implicit coordination.
This research project develops mapping and planning methods for multirotors to enable memory-efficient exploration while generating a high-fidelity map of the environment.
This research project seeks to improve assistance to humans operating multirotors through narrow gaps and tunnels.
This research program develops a method for cave surveying in complete darkness with an autonomous aerial vehicle equipped with a depth camera for mapping, a downward-facing camera for state estimation, and forward- and downward-facing lights.
This research develops an efficient method for fitting meshlet primitives to RGB-D data that achieves high geometric fidelity with minimal overlap, such that the spatial density of primitives is significantly reduced compared to surfels.
This research seeks to design control strategies that enable quadrotors to track aggressive trajectories precisely and accurately in the presence of external disturbances, unmodeled dynamics, and degraded state estimation.
In this work, we approximate the ground beneath the vehicle as locally planar and combine this assumption with onboard inertial attitude measurements and a single-beam laser rangefinder to produce fast, robust odometry estimates.
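The locally planar ground assumption admits a simple geometric reduction: a body-fixed, downward-pointing rangefinder reading can be projected onto the world vertical using the attitude from the inertial sensors. The sketch below is illustrative only (function names and the beam-alignment assumption are ours, not the paper's):

```python
import math

def height_above_ground(range_m, roll_rad, pitch_rad):
    """Project a body-fixed downward rangefinder reading onto the world
    vertical, assuming the ground below is locally planar and the beam
    is aligned with the body z-axis (an illustrative simplification).
    """
    # For a z-aligned beam, the tilt factor is cos(roll) * cos(pitch)
    cos_tilt = math.cos(roll_rad) * math.cos(pitch_rad)
    return range_m * cos_tilt

# Example: a 2.0 m range reading with 10 deg roll and 5 deg pitch
# yields a vertical height slightly below 2.0 m.
h = height_above_ground(2.0, math.radians(10), math.radians(5))
```

Differencing successive height estimates, together with the attitude history, is what enables fast odometry without a full mapping pipeline.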
This research develops a distributed planning approach for multi-robot information gathering and application to robotic exploration.
This research presents a multirotor architecture capable of aggressive autonomous flight and collision-free teleoperation in unstructured, GPS-denied environments.
This research presents a method of deriving occupancy at varying resolution by sampling from a distribution and raytracing to the camera position.
This research develops an efficient and distributed algorithm for planning for multi-robot sensor coverage.
In this work, we propose representing the world as a mixture of Gaussian distributions, which yields a compressed, succinct representation of the environment. We use the geometric properties of this representation to enable efficient collision checking of the robot's surroundings.
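One way such a mixture supports collision checking is by testing a query point against each Gaussian component's Mahalanobis distance rather than against raw points. The sketch below is our own illustration of this idea, not the paper's implementation; the thresholds and function names are assumptions:

```python
import numpy as np

def in_collision(point, means, covs, weights,
                 dist_thresh=3.0, weight_thresh=0.05):
    """Illustrative collision test against a Gaussian mixture map.

    Flags the query point as potentially occupied if it lies within
    `dist_thresh` Mahalanobis distance of any sufficiently weighted
    component. Thresholds are illustrative, not from the paper.
    """
    for mu, cov, w in zip(means, covs, weights):
        if w < weight_thresh:
            continue  # skip negligible components
        d = point - mu
        # Squared Mahalanobis distance: d^T * cov^{-1} * d
        m2 = d @ np.linalg.solve(cov, d)
        if m2 < dist_thresh ** 2:
            return True
    return False

# A tight component at the origin: the origin is "occupied",
# a far-away point is free.
means = [np.zeros(3)]
covs = [0.01 * np.eye(3)]
weights = [1.0]
```

Because the mixture has far fewer components than the point cloud has points, each query touches only a handful of closed-form distance evaluations.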
This research develops a method to determine position and orientation from successive depth or LiDAR sensor observations. The method represents the sensor observations as approximate continuous belief distributions and demonstrates superior performance compared to the state of the art.
In this work, we present a novel adaptive teleoperation approach based on motion primitives that is well suited to long-duration operation of mobile systems, such as exploration with mobile vehicles or walking for humanoid systems.
This research develops a full system for controlling multi-robot teams online, meaning that plans do not have to be designed prior to operation.
In this research, we develop a method to produce compressed representations of the environment that enable more efficient evaluation of information gain.
A robot or a team of robots operating in large known environments requires accurate knowledge of its location in order to execute complex tasks. Size, weight, and power limits constrain the onboard computational capacity of aerial robots. This work presents a Monte Carlo-based real-time localization framework capable of running on such robots, enabled by a compressed representation of the environment point cloud.
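The Monte Carlo localization cycle alternates prediction through a motion model, weighting by measurement likelihood, and resampling. The skeleton below sketches one such cycle under our own simplified interfaces (the motion and likelihood functions are placeholders; in the framework above the likelihood would query the compressed map rather than a raw point cloud):

```python
import random

def mcl_step(particles, control, measurement, motion_fn, likelihood_fn):
    """One predict-weight-resample cycle of Monte Carlo localization.

    `motion_fn(p, u)` propagates a particle under the control input with
    sampled noise; `likelihood_fn(p, z)` scores a hypothesis against the
    measurement. Interfaces are illustrative assumptions.
    """
    # Predict: sample each particle through the (noisy) motion model
    predicted = [motion_fn(p, control) for p in particles]
    # Weight: evaluate the measurement likelihood of each hypothesis
    weights = [likelihood_fn(p, measurement) for p in predicted]
    total = sum(weights)
    if total == 0.0:
        return predicted  # degenerate case: keep the prediction
    weights = [w / total for w in weights]
    # Resample with replacement, proportionally to weight
    return random.choices(predicted, weights=weights, k=len(particles))
```

On a compute-limited aerial robot, the expensive part is the likelihood evaluation, which is why compressing the map representation directly reduces the per-update cost.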
This research describes a distributed planning approach for multi-robot information gathering and application to robotic exploration.
This research develops a controller that efficiently learns vehicle dynamics while ensuring that the vehicle satisfies state and input constraints in the presence of state uncertainty.
This research develops an adaptive teleoperation approach with incremental intent modeling.
This research presents a method by which a team of aerial robots can lift an unknown object by learning about the mass distribution of the object while it is still on the ground.
This work presents an experiment-driven aerodynamic disturbance modeling technique that leverages experiences from past flights to construct a predictive model of the exogenous forces acting on an aerial robot.
This research develops a method to leverage conditionally dependent sensor observations for multi-modal exploration.