Grad school just started back for me today, so this is going to be a short post. I'm hoping I'll still be able to manage three (high-quality) posts a week once things stabilize.
I ran into an issue today where one of the systems I maintain at my day job had been rendered completely unusable by a series of code changes committed over the last month. The system in question is mostly a web forms app, so it's a bit difficult to unit test and therefore has very low test coverage. Because of that, it falls to the developer making changes to test things thoroughly by hand in most cases. Today, it became evident that this unwritten rule of testing was no longer being followed AT ALL.

In one case, the code modifications sort of worked. It's conceivable that the developer making the changes tested them, but not thoroughly enough to notice that he'd completely broken the vast majority of the existing functionality for a particular operation. In another case, the code modifications didn't work. Period. If you tried to view the page, you were greeted with a NullReferenceException. It didn't matter how you navigated to the page or what the environment was; there was no possible way the code was ever going to work. That means the developer didn't even bother to load the modified page in his browser.
The result of these gaps in testing is a lot of pain and frustration for me personally, and wasted money for the company. I had to roll back an entire month of new features and changes to get to a stable build. That means our testers won't have as much time to test new features, and it means that someone now has to go back and repair all the things that were broken. And it means that a 15-minute deployment took me three infuriating hours instead.
The moral of this story is: if you make a code change, no matter how seemingly minor, be DAMN sure you test it. If you absolutely, positively can't write a unit test for it, be sure you do very, very thorough manual testing. Don't just fire-and-commit and assume it all works, because it probably doesn't. As far as I'm concerned, if you can't prove that it works, it doesn't work, end of story. If you can't test it, don't commit it. Ask a more senior developer for advice instead.
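For what it's worth, even a page that's too tangled to unit test properly can usually get a bare-bones smoke test: just request the page and make sure the server doesn't blow up. Something along these lines would have caught yesterday's NullReferenceException the moment the change was made. This is only a sketch, not how we actually test this system: I'm assuming NUnit and plain old HttpWebRequest, and the URL and class name are made up.

using System.Net;
using NUnit.Framework;

[TestFixture]
public class OrderEntryPageSmokeTest
{
    // Hypothetical dev-environment URL; point this at whatever page you just changed.
    private const string PageUrl = "http://localhost/MyApp/OrderEntry.aspx";

    [Test]
    public void PageLoadsWithoutServerError()
    {
        var request = (HttpWebRequest)WebRequest.Create(PageUrl);

        // An unhandled exception in the code-behind comes back as a 500,
        // and GetResponse() throws a WebException for non-success codes,
        // so a broken page fails this test immediately.
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
        }
    }
}

A 200 response doesn't prove the page actually works, but it does prove the page at least loads, which is apparently a bar we're currently failing to clear.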
What if it is the "senior" developer who is checking in the failing code?
I mean, this guy has been in the industry for a LONG time. He _should_ know better. For him, it's more about getting it done quickly rather than right. Maybe, if there is a problem, it will show up in QA. And then he can blame it on the testing methods, the database, random electrons, an act of God, etc… anything but the code!
Wow, this is one of the stupidest blog posts I've ever read, ever, about anything, ever. I've laid out the facts for anyone interested in how a real developer spends his* time:
http://robtechdiff.blogspot.com/2008/08/testing-waste-of-time-or-huge-waste-of.html
*That’s right, no "his/her" crap. We all know women can’t code.
@Evil Rob:
Man, you are so right! My problems yesterday weren't because of a lack of testing; they were because I *tested* the code after I tried to roll it out. If I had just pushed it to production without testing it, I wouldn't have found any bugs. I'm sold. *Deletes all test fixtures*
@Bames Jond:
If a "senior" dev is checking in code that he hasn’t tested, I would argue that he isn’t a developer at all. Programmer != developer. Being a developer means being responsible for it all, including testing. But yeah, I don’t have any useful advice for you. I’ve known people like that, and I’ve yet to find a useful way to deal with them.