Building representations to enable robots to complete complex and abstract tasks

Liam Paull - University of Montreal

Jan. 31, 2025, 2:30 p.m. - 3:30 p.m.

MAASS 217

Hosted by: Hsiu-Chin Lin


The choice of how we represent the world has been a central question in robotics perception since the field's inception. On one hand, we want representations that are rich and expressive in terms of the geometric and semantic information they contain. On the other hand, we need to be able to query these representations efficiently for them to be useful for downstream planning tasks. In this seminar, I will describe two research threads from our lab (the Montreal Robotics and Embodied AI Lab) that make particular choices about how to balance this tradeoff. First, I will present a line of work (Lifelong Topological Visual Navigation and One-4-All) that leverages a topological representation, which offers several benefits. Subsequently, I will discuss recent work (ConceptFusion and ConceptGraphs) on integrating the common-sense knowledge encoded in large language and vision-language models into actionable robotics representations. There will also be plenty of time for discussion.

Liam Paull is an associate professor at Université de Montréal and the founder and co-lead of the Montreal Robotics and Embodied AI Lab (REAL). His lab focuses on robotics problems including building representations of the world (such as for simultaneous localization and mapping), modeling uncertainty, and building better workflows to teach robotic agents new tasks (such as through simulation or demonstration). Prior to this, Liam was a research scientist and postdoc at MIT CSAIL. He obtained his PhD from the University of New Brunswick in 2013, where he worked on robust and adaptive planning for underwater vehicles. He is a co-founder and director of the Duckietown Foundation, which is dedicated to making engaging robotics learning experiences accessible to everyone.