Brown Robotics Talks consists of BigAI talks and lab talks (CIT 115). This website was created in spring 2024 to track both talk series. All confirmed talks will be posted here, along with recordings of past talks (you’ll need a Brown account to access the recordings).
Schedule for Fall 2024
Date | Talk | Speaker |
---|---|---|
10/11 | Learning Interpretable Models for Autonomous Assessment of Taskable AI Systems [recording][abstract] | Pulkit Verma |
10/18 | What Foundation Models Can and Cannot Do for Bringing Helpful Robotic Assistants into Our Lives [recording][abstract] | Roberto Martin-Martin |
10/25 | Know your (model's) limits: Expanding robot capabilities by characterizing model deviation [recording] | Alex LaGrassa |
10/30 | Empowering Novice Users of Robots through Accessible Information and Pre-trained Policies [recording] | Isaac Sheidlower |
11/01 | Environment Curriculum Generation via Large Language Models [recording] | Jason Ma |
11/08 | Robot Planning and Manipulation: Symbols, Geometry, and Feasibility [recording] | Neil Dantam |
11/15 | Structured Learning for Planning and World Modeling in Complex Environments [recording] | Linfeng Zhao |
11/22 | TBD | Yiding Jiang |
12/06 | Planning and Exploration for Autonomous Systems with Aquatic Applications | Nare Karapetyan |
Schedule for Spring 2024
Date | Talk | Speaker |
---|---|---|
01/26 | Toward Full-Stack Reliable Robot Learning for Autonomy and Interaction [abstract][recording] | Glen Chou |
02/02 | Deep Reinforcement Learning for Multi-Agent Interaction [abstract][recording] | Stefano V. Albrecht |
02/09 | Object-level Planning: Bridging the Gap between Human Knowledge and Task and Motion Planning [abstract][recording] | David Paulius |
03/01 | Towards Interactive Task and Motion Imitation [abstract][recording] | Felix Yanwei Wang |
03/08 | Towards Composable Scene Representations in Robotics and Vision [abstract][recording] | Ondrej Biza |
04/19 | Building General-Purpose Robots with Integrated Learning and Planning [abstract][recording] | Jiayuan Mao |
04/26 | Experiment Planning with Function Approximation [abstract] | Aldo Pacchiano |
Speakers
Pulkit Verma is a Postdoctoral Research Associate at the Interactive Robotics Group at MIT, where he works with Prof. Julie Shah. His research focuses on the safe and reliable behavior of taskable AI agents. He investigates the minimal set of requirements in an AI system that would enable a user to assess and understand the limits of its safe operability. He received his Ph.D. in Computer Science from Arizona State University, where he worked with Prof. Siddharth Srivastava. Prior to that, he completed his M.Tech. in Computer Science and Engineering at the Indian Institute of Technology Guwahati. He was awarded the Graduate College Fellowship at ASU in 2023 and received the Best Demo Award at the International Conference on Autonomous Agents and Multiagent Systems (AAMAS) in 2022.
Roberto Martin-Martin is an Assistant Professor of Computer Science at the University of Texas at Austin. His research connects robotics, computer vision, and machine learning. He studies and develops novel AI algorithms that enable robots to perform tasks in uncontrolled human environments such as homes and offices. In that endeavor, he creates novel decision-making solutions based on reinforcement learning, imitation learning, planning, and control. He also explores topics in robot perception, such as pose estimation and tracking, video prediction, and parsing. Martin-Martin received his Ph.D. from the Berlin Institute of Technology (TUB) before a postdoctoral position at the Stanford Vision and Learning Lab under the supervision of Fei-Fei Li and Silvio Savarese. His work has received the RSS Best Systems Paper Award, the RSS Pioneer award, and the ICRA Best Paper Award, won the Amazon Picking Challenge, and has been a finalist for the ICRA, RSS, and IROS Best Paper awards. He is chair of the IEEE Technical Committee on Mobile Manipulation and co-founder of QueerInRobotics.
Glen Chou is a postdoctoral associate at MIT CSAIL, advised by Prof. Russ Tedrake. His research focuses on end-to-end safety and reliability guarantees for learning-enabled, human-centered robots. Previously, Glen received his PhD in Electrical and Computer Engineering from the University of Michigan in 2022, and dual B.S. degrees in Electrical Engineering and Computer Science and in Mechanical Engineering from UC Berkeley in 2017. He is a recipient of the National Defense Science and Engineering Graduate (NDSEG) Fellowship and the NSF Graduate Research Fellowship, and is a Robotics: Science and Systems Pioneer.
Dr. Stefano V. Albrecht is Associate Professor of Artificial Intelligence in the School of Informatics, University of Edinburgh, where he leads the Autonomous Agents Research Group. Dr. Albrecht is a Royal Society Industry Fellow working with Five AI/Bosch to develop AI technologies for autonomous vehicles, and a Royal Academy of Engineering Industrial Fellow working with KION/Dematic to develop reinforcement learning solutions for multi-robot warehouse systems. His research on reinforcement learning and multi-agent interaction has been published in leading conferences and journals in AI/ML/robotics, including NeurIPS, ICML, ICLR, IJCAI, AAAI, UAI, AAMAS, AIJ, JAIR, JMLR, TMLR, ICRA, IROS, and T-RO. Previously, Dr. Albrecht was a postdoctoral fellow at the University of Texas at Austin. He obtained his PhD and MSc degrees in Artificial Intelligence from the University of Edinburgh and his BSc degree in Computer Science from the Technical University of Darmstadt. He is co-author of the book “Multi-Agent Reinforcement Learning: Foundations and Modern Approaches” (MIT Press).
David Paulius is a postdoctoral researcher in the Intelligent Robot Lab at Brown University, working jointly with Professors George Konidaris and Stefanie Tellex. He received his PhD in Computer Science & Engineering from the University of South Florida, where he was advised by Professor Yu Sun and began his journey in robotics and AI. He received his BSc in Computer Science from the University of the Virgin Islands in St. Thomas, USVI. He was recognized as a Robotics: Science and Systems (RSS) Pioneer in 2022. David is interested in enabling robots to solve complex goal-directed tasks in human-centered environments by exploiting semantic knowledge representations to facilitate seamless planning and execution for long-term autonomy and generalization across robots, tasks, and domains.
Felix Yanwei Wang is a fifth-year PhD candidate in MIT EECS, working with Prof. Julie Shah, and is a Work of the Future Fellow in Generative AI at MIT. He has spent time working with Prof. Dieter Fox at the Nvidia Robotics Lab. Before his PhD, he obtained his master’s degree in robotics at Northwestern University, working with Prof. Todd Murphey and Prof. Mitra Hartmann. His research goal is to design learning-from-demonstration (LfD) methods that are inherently interactive, i.e., amenable to real-time human feedback while remaining close to the demonstration manifold in some measure, so that humans can easily modify a pre-trained policy without needing to reteach the robot for new tasks.
Ondrej Biza is a fifth-year PhD candidate at Northeastern University and an intern at the Boston Dynamics AI Institute. His research focuses on learning structured representations in robotics and computer vision, including learning to discover and represent objects and learning and transferring robotic manipulation policies. Previously, Ondrej was a Student Researcher at Google Research on the Brain team in Amsterdam and an undergraduate researcher at the Czech Technical University in Prague.
Jiayuan Mao is a Ph.D. student at MIT, advised by Professors Josh Tenenbaum and Leslie Kaelbling. Her research aims to build machines that can continually learn concepts (e.g., properties, relations, rules, and skills) from their experiences and apply them for reasoning and planning in the physical world. Her research topics include visual reasoning, robotic manipulation, scene and activity understanding, and language acquisition.
Aldo Pacchiano is an Assistant Professor at the Boston University Center for Computing and Data Sciences and a Fellow at the Eric and Wendy Schmidt Center of the Broad Institute of MIT and Harvard. He obtained his PhD under the supervision of Profs. Michael Jordan and Peter Bartlett at UC Berkeley and was a Postdoctoral Researcher at Microsoft Research, NYC. His research lies in the areas of reinforcement learning, online learning, bandits, and algorithmic fairness. He is particularly interested in furthering our statistical understanding of learning phenomena in adaptive environments and in using these theoretical insights and techniques to design efficient and safe algorithms for scientific, engineering, and large-scale societal applications.