Robots must behave safely and reliably if we are to confidently deploy them in the real world around humans. To complete tasks, robots must manage a complex, interconnected autonomy stack of perception, planning, and control modules. While machine learning has unlocked the potential for holistic, full-stack control in the real world, learned methods can be catastrophically unreliable. In contrast, model-based safety-critical control provides rigorous guarantees, but struggles to scale to real systems, where common assumptions about the stack, e.g., perfect task specification and perception, break down. In this talk, I will argue that we need not choose between real-world utility and safety: by taking a full-stack approach to safety-critical control that leverages learned components where they can be trusted, we can build practical yet rigorous algorithms that make real robots more reliable. I will first discuss how to make task specification easier and safer by learning hard constraints from human task demonstrations, and how we can plan safely with these learned specifications despite uncertainty. Then, given a task specification, I will discuss how we can reliably leverage learned dynamics and perception models for planning and control by estimating where these models are accurate, enabling probabilistic guarantees for full-stack vision-based control. Finally, I will offer perspectives on open challenges and future opportunities, including robust perception-based hybrid control algorithms for reliable robotic manipulation and human-robot collaboration.