There is a very good reason the book stresses writing unit tests: they help. I work on a product called VTS and have to deal with poorly written code that was never intended to be unit tested. The code was written 10 years ago by electrical engineers and system administrators who had no idea what they were doing. Most of the code is nearly incomprehensible, and the design is terrible. Now you might think it would be a good idea to rewrite the code. That doesn't work in the real world. The team has to maintain the code while implementing new features; there isn't time to make the code better, and it would cost far too much money. Worse, rewriting parts of it would destroy functionality, break the product, and introduce new bugs.
So what I do is try to create tests that cover every part of a subprogram: hit every executable statement, take every possible branch, and exercise every possible return. Like most products, ours calls outside functions and requires specific data and files. It is not as though the one call to the outside runs some nice little function that prints a few statements or computes some primes. No, it runs a 3000-line program that requires other programs to run first, specific variables to be set correctly as a result of them, and specific objects to be continuously built before and during execution. A large collection of global variables reaching through nearly 16 other programs makes it no fun to test effectively. Only after all of that can it run the utility I'm testing.
This creates a problem. I don't care what the code is doing outside of the utility, and I don't want to wait 15 minutes to generate usable data before I can run a test. So the decision was to stub out or redefine every call to the outside, falsify it, and return fake data. This is the only real way for me to test anything: if I don't, I'm writing functional tests, and I only want to test the logic of the utility.
So, inside the unit test file, I #include the file I want to test and #define away all the outside methods and main(). Now I can test. I create false data (data I wrote specifically to exercise the utility, which can be good, bad, or just plain wrong) and feed it through, aiming to hit every statement and get back the expected return code. The important thing to do first is hit all the returns with good data, confirming the code works as specified. Then I want to destroy it: make it break, make it fail, make it not work. Only once I do that can I find the bugs, tighten the preconditions, and make it more stable.
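Here is a minimal sketch of that trick; utility.c and fetch_status() are made-up names standing in for the real file under test and its external call:

    /* test_utility.c - sketch of the include-and-redefine trick.
       The names utility.c and fetch_status are hypothetical. */

    /* Stub prototype, declared up front so calls inside utility.c compile. */
    int stub_fetch_status(int id);

    /* Redirect the external call and main() BEFORE pulling in the source,
       so every call site inside it hits the stub instead. */
    #define fetch_status stub_fetch_status
    #define main utility_main

    #include "utility.c"   /* the file under test, included directly */

    /* The stub returns canned data instantly, instead of running the real
       3000-line program with its 15 minutes of setup. */
    int stub_fetch_status(int id)
    {
        return id >= 0 ? 1 : -1;   /* fabricated result for the tests */
    }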
I use CUnit to drive the tests: a suite for each method I test, a test case for each expected return, and then the same again for the tests meant to fail. The easiest methods to test are the ones that return a boolean, because you don't have to match the result against a specific expected value in the asserts. You just use CU_ASSERT() when it should be true and CU_ASSERT_FALSE() when it should be false. The return is either true or false, so there's no real thinking about what comes back, only about how to break it.
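A stripped-down version of that setup looks like this; is_valid_record() is a hypothetical boolean function standing in for whatever is under test:

    #include <CUnit/Basic.h>

    /* Hypothetical boolean function under test. */
    int is_valid_record(const char *rec);

    /* One test case for the expected-true return... */
    static void test_good_record(void)
    {
        CU_ASSERT(is_valid_record("GOOD_RECORD"));
    }

    /* ...and one trying to make it fail. */
    static void test_bad_record(void)
    {
        CU_ASSERT_FALSE(is_valid_record(""));
        CU_ASSERT_FALSE(is_valid_record(NULL));
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        /* One suite per method, one test per expected return. */
        CU_pSuite suite = CU_add_suite("is_valid_record", NULL, NULL);
        CU_add_test(suite, "good record passes", test_good_record);
        CU_add_test(suite, "bad record fails", test_bad_record);

        CU_basic_run_tests();
        CU_cleanup_registry();
        return CU_get_error();
    }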
You obviously want to test the functions to ensure they work, but you also want to use unit testing to make them fail and expose bugs you would never think about. Give your ints the whole range from min to max. Make your ifs fail. Control how the program runs so you can break it and make it more robust. It is difficult to test every condition; you will probably write tests the original programmers never intended the code to face, and trigger bugs that would never occur with sane data. But if you are working on code that is low enough in the stack, this must be done, because the programmers working on the high-level stuff can easily and unknowingly pass in bad data all day long.
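For the int ranges, limits.h supplies the extremes; a quick sketch, with clamp_speed() as a made-up stand-in for a real function:

    #include <limits.h>
    #include <CUnit/Basic.h>

    /* Hypothetical function under test: expected to return a
       non-negative speed no matter what it is given. */
    int clamp_speed(int requested);

    /* Probe normal values and the extremes, hunting for overflow
       and branches that were never meant to run this way. */
    static void test_speed_boundaries(void)
    {
        CU_ASSERT(clamp_speed(0)       >= 0);
        CU_ASSERT(clamp_speed(100)     >= 0);
        CU_ASSERT(clamp_speed(-1)      >= 0);   /* make the if fail */
        CU_ASSERT(clamp_speed(INT_MIN) >= 0);   /* min from limits.h */
        CU_ASSERT(clamp_speed(INT_MAX) >= 0);   /* max from limits.h */
    }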
Next week: a more in-depth discussion of unit testing.