A fundamental obstacle to developing autonomous, goal-directed robots lies in grounding high-level knowledge (from sources such as language models, recipe books, and internet videos) in perception, action, and reasoning. Approaches like task and motion planning (TAMP) promise to reduce the complexity of planning for complex manipulation tasks by integrating higher-level symbolic planning (task planning) with lower-level trajectory planning (motion planning). However, task-level representations must encode a substantial amount of robot-specific information, mixing logical object-level requirements (e.g., a bottle must be open before it can be poured) with robot constraints (e.g., a gripper must be empty before it can pick up an object). I propose an additional level of planning that naturally sits above the current TAMP pipeline, which I call object-level planning (OLP). OLP exploits rich, object-level knowledge to bootstrap task-level planning by generating informative plan sketches. I will show how object-level plan sketches can bootstrap TAMP processes by generating the PDDL domain and problem definitions used for manipulation planning and execution.
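
To make the mixing of requirements concrete, consider a minimal, hypothetical PDDL sketch of pick and pour actions; the predicate and action names here are illustrative assumptions rather than part of any specific domain, but they show how preconditions interleave object-level requirements with robot-specific bookkeeping.

    (:action pick
      :parameters (?obj - object)
      :precondition (and (clear ?obj)        ; object-level requirement
                         (handempty))        ; robot constraint
      :effect (and (holding ?obj)
                   (not (handempty))))

    (:action pour
      :parameters (?bottle - container ?cup - container)
      :precondition (and (open ?bottle)      ; object-level requirement
                         (holding ?bottle))  ; robot constraint
      :effect (and (filled ?cup)))

In this illustrative view, an object-level plan sketch could supply the object-level predicates (e.g., clear, open, filled), leaving the robot-specific predicates (e.g., handempty, holding) to be introduced when the sketch is compiled into a concrete PDDL domain and problem for TAMP.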