The 7 Habits talks of considering unit tests after completing every quarter of your task, and having a test done by the end of the task. But when you write a test, how should it be written? Should it be a full-fledged test that can be run and verified with an automated testing framework, or should it be an informal test? While the answer may vary with the size of the development group and the complexity of the application, I like to think of 3 stages in writing unit tests - stages that progressively lead to complete tests.
Stage 1: Crash Test - The first stage is to write a test that simply verifies that things don't crash, i.e., no unexpected exceptions are thrown and no abnormal conditions occur. Here's a simple test that invokes a service on another server and gets back a value object. Successful running of this test merely ensures that nothing is fundamentally broken.
@Test
public void testReport() throws M13Exception
{
    Report report = (Report) svs.getReport("foo");
}
Stage 2: Eyeball Test - The next stage is to examine the results of invoking the functionality being tested. In the example test below, the contents of the Report object returned from the server are printed to a log output. What this test lets you do is "eyeball" the printed results to get some sense of whether the functionality is working right or not. This stage is an extension of the Crash Test stage.
@Test
public void testReport() throws M13Exception
{
    Report report = (Report) svs.getReport("foo");
    logger.info("Report: " + report.toXML());
}
Stage 3: Automated Result-Comparison Test - This is the final stage in writing a test and extends the Eyeball Test stage. In this stage, you write code that compares the results of invoking the functionality being tested with pre-defined, expected results. In the example below, the XML representation of the Report object obtained from the server is compared against an XML data file; the test passes if the generated XML matches the contents of the data file and fails otherwise. Once this stage is done, the test is complete.
@Test
public void testReport() throws M13Exception, IOException
{
    Report report = (Report) svs.getReport("foo");
    String fooXML = report.toXML();
    logger.info("Report: " + fooXML);
    // read the pre-defined, expected results from a data file (path is illustrative)
    File file = new File("testdata/report-foo.xml");
    String reportXML = FileUtils.readFileToString(file, "UTF-8");
    assertEquals(reportXML, fooXML);
}
One of the common mistakes people make is to attempt to jump to the third stage right from the beginning. While this may work for simpler tasks, for more complex tasks and software it results in constantly having to change the data set being compared against, since the definition of that data is likely to keep changing until the later stages of development.
So the best practice may be to ensure you have Crash Tests and Eyeball Tests in place after each task (making sure you at least think about testing at each quarter of the task). If the data sets are stable enough, you can also write Automated Result-Comparison Tests at this point. If not, come back to this stage later in the development cycle.
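If you do write comparison tests while the data is still in flux, one way to soften the churn is to let the test regenerate its own expected-results file on demand. Here's a minimal sketch of that idea; the update.expected system property and the data file path are my own inventions, not something from the tests above:
@Test
public void testReport() throws M13Exception, IOException
{
    Report report = (Report) svs.getReport("foo");
    String fooXML = report.toXML();
    File file = new File("testdata/report-foo.xml");
    // "record mode": run with -Dupdate.expected=true to refresh the data file
    // whenever the definition of the data changes
    if (Boolean.getBoolean("update.expected"))
    {
        FileUtils.writeStringToFile(file, fooXML, "UTF-8");
    }
    String reportXML = FileUtils.readFileToString(file, "UTF-8");
    assertEquals(reportXML, fooXML);
}
This keeps a single test serving both the Eyeball and Result-Comparison stages: you eyeball the output once, record it, and from then on the comparison is automated.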
Note that at all stages you employ a formal testing framework such as JUnit. So right from the start you have a formalized test that can be run in an automated fashion, even if you are only at the Crash Test stage. This way you are simply expanding and extending the same tests at each stage.
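For completeness, here's how one of these snippets might sit inside a full JUnit test class. The ReportService type, the way svs gets created, and the log4j logger are all assumptions on my part - substitute whatever setup your service actually needs:
import static org.junit.Assert.assertEquals;

import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileUtils;
import org.apache.log4j.Logger;
import org.junit.Before;
import org.junit.Test;

public class ReportServiceTest
{
    private static final Logger logger = Logger.getLogger(ReportServiceTest.class);

    private ReportService svs;

    @Before
    public void setUp()
    {
        // hypothetical setup; create or locate the service the tests will exercise
        svs = new ReportService();
    }

    @Test
    public void testReport() throws M13Exception, IOException
    {
        Report report = (Report) svs.getReport("foo");
        String fooXML = report.toXML();
        logger.info("Report: " + fooXML);
        File file = new File("testdata/report-foo.xml");
        String reportXML = FileUtils.readFileToString(file, "UTF-8");
        assertEquals(reportXML, fooXML);
    }
}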
Happy testing! ... and give those QA guys a break!