This research develops techniques for rapid navigation in forests, caves, and other cluttered, unstructured environments.
This research project leverages reinforcement learning to enable decentralized multirobot active search over large scales.
This research seeks to enable human-robot collaborative exploration of a priori unknown environments using implicit coordination.
This research project develops mapping and planning methods for multirotors to enable memory-efficient exploration while generating a high-fidelity map of the environment.
This research project seeks to improve assistance to humans operating multirotors through narrow gaps and tunnels.
This research program develops a method for cave surveying in complete darkness with an autonomous aerial vehicle equipped with a depth camera for mapping, a downward-facing camera for state estimation, and forward- and downward-facing lights.
This research presents a method for deriving occupancy at varying resolutions by sampling from a distribution and raytracing from the samples to the camera position.
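A minimal sketch of this idea, assuming the distribution is a Gaussian mixture over surface points and occupancy is recovered on a voxel grid whose resolution is set by `voxel_size`; the mixture parameters, sensor pose, and grid size below are illustrative placeholders, not values from the project.

```python
import numpy as np

def sample_gmm(means, covs, weights, n_samples, rng):
    """Draw points from a Gaussian mixture modeling the mapped surface."""
    comp = rng.choice(len(weights), size=n_samples, p=weights)
    return np.array([rng.multivariate_normal(means[c], covs[c]) for c in comp])

def raytrace_occupancy(samples, sensor_origin, voxel_size):
    """Mark voxels along each sample-to-camera ray as free; sample endpoints as occupied."""
    free, occupied = set(), set()
    for p in samples:
        occupied.add(tuple(np.floor(p / voxel_size).astype(int)))
        direction = sensor_origin - p
        length = np.linalg.norm(direction)
        steps = int(np.ceil(length / (0.5 * voxel_size)))
        for t in np.linspace(0.0, 1.0, steps, endpoint=False)[1:]:
            cell = tuple(np.floor((p + t * direction) / voxel_size).astype(int))
            free.add(cell)
    free -= occupied  # never overwrite occupied endpoints with free space
    return occupied, free

# Toy usage: two-component mixture observed from the origin, reconstructed at 0.2 m.
rng = np.random.default_rng(0)
means = np.array([[2.0, 0.0, 1.0], [3.0, 1.0, 1.5]])
covs = np.array([np.eye(3) * 0.01, np.eye(3) * 0.02])
weights = np.array([0.6, 0.4])
pts = sample_gmm(means, covs, weights, 500, rng)
occupied, free = raytrace_occupancy(pts, np.zeros(3), voxel_size=0.2)
```

Changing `voxel_size` (or the number of samples drawn) trades reconstruction fidelity against memory and compute, which is what allows occupancy to be derived at varying resolution from the same underlying distribution.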
This research develops a method to estimate position and orientation from successive depth or LiDAR sensor observations by representing each observation as an approximate continuous belief distribution. Results demonstrate superior performance compared to the state of the art.
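A minimal sketch of distribution-based scan registration, assuming the first scan is modeled as a Gaussian mixture and the relative pose is found by maximizing the likelihood of the second scan under that mixture; the optimizer, rotation-vector parameterization, and component count are illustrative choices, not the project's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation
from sklearn.mixture import GaussianMixture

def register_scans(scan_a, scan_b, n_components=8):
    """Estimate the rigid transform taking scan_b into scan_a's frame."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(scan_a)  # continuous belief distribution over the first observation

    def negative_log_likelihood(x):
        rotation = Rotation.from_rotvec(x[:3])
        transformed = rotation.apply(scan_b) + x[3:]
        return -gmm.score(transformed)  # mean log-likelihood under the mixture

    result = minimize(negative_log_likelihood, np.zeros(6), method="Nelder-Mead")
    return Rotation.from_rotvec(result.x[:3]), result.x[3:]

# Toy usage: recover a known small rotation and translation between two scans.
rng = np.random.default_rng(1)
surface = rng.uniform(-1.0, 1.0, size=(400, 3))
true_rotation = Rotation.from_rotvec([0.0, 0.0, 0.05])
scan_a = surface + rng.normal(scale=0.01, size=surface.shape)
scan_b = true_rotation.inv().apply(surface - [0.1, 0.0, 0.0])
estimated_rotation, estimated_translation = register_scans(scan_a, scan_b)
```

Because both scans are summarized by smooth distributions rather than raw points, the alignment objective stays well defined even without exact point-to-point correspondences.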
This research develops a method to leverage conditionally dependent sensor observations for multi-modal exploration.
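A minimal sketch of why conditional dependence matters when fusing two modalities, assuming a binary occupancy cell updated in log-odds form; the likelihood ratios and the simple discounting stand-in for shared information are illustrative assumptions, not the project's model.

```python
import numpy as np

def fuse_log_odds(prior, likelihood_ratios, dependence=0.0):
    """Combine per-sensor likelihood ratios in log-odds form.

    dependence = 0 treats the sensors as conditionally independent; values in
    (0, 1] discount later sensors, a crude stand-in for the information they
    share with the first sensor.
    """
    log_odds = np.log(prior / (1 - prior))
    for i, ratio in enumerate(likelihood_ratios):
        weight = 1.0 if i == 0 else 1.0 - dependence
        log_odds += weight * np.log(ratio)
    return 1.0 / (1.0 + np.exp(-log_odds))

prior = 0.5
ratios = [4.0, 4.0]  # depth and a second modality both favor "occupied"
print(fuse_log_odds(prior, ratios, dependence=0.0))  # naive fusion: over-confident
print(fuse_log_odds(prior, ratios, dependence=0.5))  # dependence-aware: tempered
```

Treating correlated observations as independent double-counts evidence and inflates expected information gain; modeling the dependence yields better-calibrated maps and exploration decisions.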