Obviously, some testing of code needs to happen before it's released. That can range from "it compiles, ship it!" to the extensive rigor NASA software goes through. Some of this seems over the top, even to the strictest of developers:
Take the upgrade of the software to permit the shuttle to navigate with Global Positioning Satellites, a change that involves just 1.5% of the program, or 6,366 lines of code. The specs for that one change run 2,500 pages, a volume thicker than a phone book. The specs for the current program fill 30 volumes and run 40,000 pages.
Yikes. And I thought the fact that I had to deal with code that was released without testing was bad. Can you imagine writing over a third of a page of specs for every line of code? Obviously, this isn't the standard. In fact, NASA's shuttle team is one of only four teams in the world that live up to the Software Engineering Institute's Level 5 standards.
We'd all love to write bug-free code. The problem is, we don't all have the time or patience to write code like the NASA team does. They do exemplify some best practices for us (source control, bug tracking), but their overboard attitude toward bugs just doesn't work for a small feature on a deadline.
Anthony Ferrara brings up an interesting point. While your software may not have actual lives riding on it, some of it is pretty important. The key is to divide up your features and test them according to their importance. Some things are mission-critical and others are just icing. Sort those out and test accordingly.