Although ML and AI have advanced considerably, robots still struggle to perform practical household tasks, such as making tea in any kitchen. Recent work has shown that combining learning and planning, by learning structured world models from data, is a promising path toward enabling this capability. In this talk, I will discuss our recent efforts to extend learning-for-planning approaches to the task and motion planning (TAMP) setting. I will present methods that require only a handful of demonstrations to invent abstractions, in the form of symbolic predicates and operators, together with neural samplers, to enable TAMP. I will show how these learned components support not only efficient decision-making via planning but also zero-shot generalization to more challenging tasks. Finally, I will highlight the limitations of current methods and identify the critical areas of future work needed to scale this approach to physical robots tackling real-world tasks.