Early Career Spotlights
Maya Cakmak
University of Washington
Wed. June 27, 14:00
Title: Robot Programming for All
Abstract: Robots that can assist humans in everyday tasks have the potential to improve
people’s quality of life and bring independence to persons with disabilities. A
key challenge in realizing such robots is programming them to meet the unique
and changing needs of users and to robustly function in their unique
environments. Most research in robotics targets this challenge by attempting to
develop universal or adaptive robotic capabilities. This approach has had
limited success because it is extremely difficult to anticipate all possible
scenarios and use-cases for general-purpose robots or collect massive amounts of
data that represent each scenario and use-case. Instead, my research aims to
develop robots that can be programmed in-context and by end-users after they are
deployed, tailoring them to the specific environment and user preferences. To
that end, my students and I have been developing new techniques and tools that
allow intuitive and rapid programming of robots to do useful tasks. In this talk
I will introduce some of these techniques and tools, demonstrate their
capabilities, and discuss some of the challenges in making them work in the
hands of potential users and deploying them in the real world.
Biography: Maya Cakmak is an Assistant Professor in the Computer Science &
Engineering Department at the University of Washington, where she directs the
Human-Centered Robotics Lab. She received her PhD in Robotics from the Georgia
Institute of Technology in 2012, after which she spent a year as a post-doctoral
research fellow at Willow Garage, one of the most influential robotics
companies. Her research interests are in human-robot interaction, end-user
programming, and assistive robotics. Her work aims to develop robots that can be
programmed and controlled by a diverse group of users with unique needs and
preferences to do useful tasks. Maya's work has been published at major robotics
and AI conferences and in journals, demonstrated live at various venues, and
featured in numerous media outlets. Tools that she and her students
developed are currently being used by robotics companies like Savioke and Fetch
Robotics. She received an NSF CAREER award in 2016 and a Sloan Research
Fellowship in 2018.
Stefanie Tellex
Brown University
Wed. June 27, 16:00
Title: Learning Models of Language, Action and Perception for
Human-Robot Collaboration
Abstract: Robots can act as a force multiplier for people, whether a
robot assisting an astronaut with a repair on the International Space Station, a
UAV taking flight over our cities, or an autonomous vehicle driving through our
streets. To achieve complex tasks, it is essential for robots to move beyond
merely interacting with people and toward collaboration, so that one person can
easily and flexibly work with many autonomous robots. The aim of my research
program is to create autonomous robots that collaborate with people to meet
their needs by learning decision-theoretic models for communication, action, and
perception. Communication for collaboration requires models of language that map
between sentences and aspects of the external world. My work enables a robot to
learn compositional models for word meanings that allow it to explicitly
reason and communicate about its own uncertainty, increasing the speed and
accuracy of human-robot communication. Action for collaboration requires models
that match how people think and talk, because people communicate about all
aspects of a robot's behavior, from low-level motion preferences (e.g., "Please
fly up a few feet") to high-level requests (e.g., "Please inspect the
building"). I am creating new methods for learning how to plan in very large,
uncertain state-action spaces by using hierarchical abstraction. Perception for
collaboration requires the robot to detect, localize, and manipulate the objects
in its environment that are most important to its human collaborator. I am
creating new methods for autonomously acquiring perceptual models in situ so the
robot can perceive the objects most relevant to the human's goals. My unified
decision-theoretic framework supports data-driven training and robust,
feedback-driven human-robot collaboration.
Biography: Stefanie Tellex is the Joukowsky Family Assistant Professor
of Computer Science and Assistant Professor of Engineering at Brown University.
Her group, the Humans To Robots Lab, creates robots that seamlessly collaborate
with people to meet their needs using language, gesture, and probabilistic
inference, aiming to empower every person with a collaborative robot. She
completed her Ph.D. at the MIT Media Lab in 2010, where she developed models for
the meanings of spatial prepositions and motion verbs. Her postdoctoral work at
MIT CSAIL focused on creating robots that understand natural language. She has
published at SIGIR, HRI, RSS, AAAI, IROS, ICAPS, and ICMI, winning Best Student
Paper at SIGIR and ICMI, Best Paper at RSS, and an award from the CCC Blue Sky
Ideas Initiative. Her awards include being named one of IEEE Spectrum's AI's 10
to Watch in 2013, the Richard B. Salomon Faculty Research Award at Brown
University, a DARPA Young Faculty Award in 2015, a NASA Early Career Award in
2016, a 2016 Sloan Research Fellowship, and an NSF CAREER Award in 2017. Her
work has been featured in the press on National Public Radio, BBC, MIT
Technology Review, Wired and Wired UK, as well as the New Yorker. She was named
one of Wired UK's Women Who Changed Science in 2015, and her work was listed
among MIT Technology Review's Ten Breakthrough Technologies in 2016.
Sergey Levine
University of California, Berkeley
Thurs. June 28, 14:30
Title: Robots that learn and improve through experience
Abstract: Advances in machine learning have made it possible to build algorithms that can
make complex and accurate inferences for open-world perception problems, such as
recognizing objects in images or recognizing words in human speech. These
advances have been enabled by improvements in models and algorithms, such as
deep neural networks, advances in the amount of available computation and,
crucially, the availability of large amounts of manually-labeled data. Robotics
presents a major challenge for these technologies, but also a major opportunity:
robots can interact autonomously with the world, which can allow them to learn
behavioral skills and a data-driven understanding of physical phenomena with
minimal human supervision. This can in principle provide us with a scalable
mechanism by which robots can acquire large repertoires of generalizable skills,
bringing us closer to general-purpose robotic systems that can fulfill a wide
range of tasks and goals. In this talk, I will discuss how autonomous robotic
learning can give rise to behavioral skills that generalize effectively without
detailed human supervision, as well as the major challenges in our current
approaches to this problem and perspectives on future directions.
Biography: Sergey Levine received a BS and MS in Computer Science from Stanford
University in 2009 and a Ph.D. in Computer Science from Stanford in 2014. He
joined the faculty of the Department of Electrical
Engineering and Computer Sciences at UC Berkeley in fall 2016. His work
focuses on machine learning for decision making and control, with an emphasis
on deep learning and reinforcement learning algorithms. Applications of his
work include autonomous robots and vehicles, as well as computer vision and
graphics. His research includes developing algorithms for end-to-end training
of deep neural network policies that combine perception and control, scalable
algorithms for inverse reinforcement learning, deep reinforcement learning
algorithms, and more. His work has been featured in many popular press
outlets, including the New York Times, the BBC, MIT Technology Review, and
Bloomberg Business.