Learning from demonstration (LfD) methods have shown impressive success in solving long-horizon manipulation tasks. However, few of them are designed to be interactive, since physical interactions from humans can exacerbate the problem of covariate shift. Consequently, these policies are often executed open-loop and must be restarted when they err. To learn imitation policies that support real-time physical human-robot interaction, we draw inspiration from task and motion planning and temporal logic planning to formulate task and motion imitation: continuous motion imitation that satisfies the discrete task constraints implicit in the demonstrations. We introduce algorithms that use either a linear temporal logic specification or priors from large language models to robustify few-shot imitation, handling the out-of-distribution scenarios that are typical during human-robot interaction.