ITK/Release 4/Modularization/ Add tests


Latest revision as of 15:01, 16 April 2016

The safe assumption behind testing is that "if it is not tested, it is broken".

== Non-regression tests ==

Tests are added the same way as before, except for the newer syntax of the "add_test" command:

<syntaxhighlight lang="cmake">

add_executable(itkBinaryThresholdFilterTest itkBinaryThresholdFilterTest.cxx)
target_link_libraries(itkBinaryThresholdFilterTest ${ITK-BinaryThreshold_LIBRARIES})
     
# adopt the modern CMake syntax :[NAME, COMMAND]
itk_add_test(NAME itkBinaryThresholdFilterTest  
        COMMAND itkBinaryThresholdFilterTest input.png output.png)

</syntaxhighlight>


== Regression tests ==

=== Approach A ===

Using the test driver macro: the CMake macro "CreateTestDriver", defined in

 Modules/Core/TestKernel/CreateTestDriver.cmake

is designed for grouped regression tests in ITK.

All the tests in one module share one test driver.

<syntaxhighlight lang="cmake">

 set(ITKFooFilterTests
    itkFooFilterTest1.cxx
    itkFooFilterTest2.cxx)
 CreateTestDriver(ITKFooFilter  "${ITKFooFilter-Test_LIBRARIES}"  "${ITKFooFilterTests}")
 itk_add_test(NAME  itkFooFilterTest1
          COMMAND  ITKFooFilterTestDriver  itkFooFilterTest1)

 itk_add_test(NAME  itkFooFilterTest2
          COMMAND  ITKFooFilterTestDriver  itkFooFilterTest2)


</syntaxhighlight>


You can find Approach A used in most of the ITK modules, for example in

   ITK/Modules/IO/BMP/test/CMakeLists.txt

=== Approach B ===

Stand-alone regression tests

There is also a test driver executable ("itkTestDriver") designed for stand-alone regression tests. For instance, all the tests under Examples are stand-alone programs, each with its own main function; in such cases Approach A is not appropriate.


<syntaxhighlight lang="cmake">

  add_executable(itkFooTest itkFooTest.cxx)
  target_link_libraries( itkFooTest ${ITK_LIBRARIES})
  itk_add_test(NAME  itkFooTest
           COMMAND  itkTestDriver
           --compare  outputBaseline.png output.png
                      $<TARGET_FILE:itkFooTest> input.png output.png)

</syntaxhighlight>


The new syntax

    $<TARGET_FILE:xxx>

in the add_test command tells CMake to locate the executable automatically, so there is no need to specify the path to the executable here.

You can find Approach B used in the Examples, for example in

 ITK/Examples/Filtering/test/CMakeLists.txt


== Add testing data to ITK ==

  Please refer to: [http://www.itk.org/Wiki/index.php?title=ITK/Git/Develop/Data&oldid=41775 Add Data]

== Writing tests ==

=== Unit tests ===

Writing a unit test helps ensure that the class being exercised

* works as expected,
* works consistently over time,
* works consistently across different platforms.

It also helps detect potential issues and check how a given class is affected by changes introduced in the code.

A direct consequence of writing and executing a test is that the [http://www.vtk.org/Wiki/Special:WhatLinksHere/ITK/Code_Coverage_Practices code coverage] is improved. When writing a unit test, you should consider the following principles:

* Make sure that all methods of the class being tested are exercised.
* Make sure that if the class is templated, a number of different instances with different template arguments are created so that the class is thoroughly tested.
* Make sure your test returns EXIT_SUCCESS when it succeeds and EXIT_FAILURE when it fails.
** When a test checks multiple features that may individually succeed or fail, make sure that the whole test runs and return the aggregated code at the end. This helps locate more issues in a single run.

<source lang="cpp">
// appropriate include files
int main( int, char *[] )
{
  int testPassStatus = EXIT_SUCCESS;
  //
  // ...
  //
  if( itk::Math::FloatDifferenceULP( floatRepresentationfx1.asFloat, floatRepresentationfx2.asFloat ) != -1 )
    {
    std::cout << "Unexpected float distance." << std::endl;
    testPassStatus = EXIT_FAILURE;
    }
  if( itk::Math::FloatAlmostEqual( floatRepresentationfx1.asFloat, floatRepresentationfx2.asFloat ) )
    {
    std::cout << "floatRepresentationfx1 is almost equal to floatRepresentationfx2" << std::endl;
    }
  else
    {
    std::cout << "floatRepresentationfx1 is NOT almost equal to floatRepresentationfx2" << std::endl;
    testPassStatus = EXIT_FAILURE;
    }
  return testPassStatus;
}
</source>

instead of

<source lang="cpp">
// appropriate include files
int main( int, char *[] )
{
  int testPassStatus = EXIT_SUCCESS;
  //
  // ...
  //
  if( itk::Math::FloatDifferenceULP( floatRepresentationfx1.asFloat, floatRepresentationfx2.asFloat ) != -1 )
    {
    std::cout << "Unexpected float distance." << std::endl;
    return EXIT_FAILURE;
    }
  if( itk::Math::FloatAlmostEqual( floatRepresentationfx1.asFloat, floatRepresentationfx2.asFloat ) )
    {
    std::cout << "floatRepresentationfx1 is almost equal to floatRepresentationfx2" << std::endl;
    }
  else
    {
    std::cout << "floatRepresentationfx1 is NOT almost equal to floatRepresentationfx2" << std::endl;
    return EXIT_FAILURE;
    }
  return testPassStatus;
}
</source>
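itk::Math::FloatDifferenceULP and itk::Math::FloatAlmostEqual compare floats through their integer (ULP) representations. The following is a simplified, ITK-free sketch of that idea (it is NOT the actual ITK implementation; the mapping and the example values are illustrative only):

<source lang="cpp">
#include <cmath>
#include <cstdint>
#include <cstring>
#include <iostream>

// Simplified stand-in for itk::Math::FloatDifferenceULP: reinterpret each
// float's bits, map the sign-magnitude representation onto a monotonically
// increasing integer scale, and subtract. Adjacent representable floats
// then differ by exactly 1.
int64_t FloatDifferenceULP( float x1, float x2 )
{
  uint32_t b1, b2;
  std::memcpy( &b1, &x1, sizeof( b1 ) );
  std::memcpy( &b2, &x2, sizeof( b2 ) );
  const int64_t i1 = ( b1 & 0x80000000u ) ? -static_cast<int64_t>( b1 & 0x7fffffffu )
                                          : static_cast<int64_t>( b1 );
  const int64_t i2 = ( b2 & 0x80000000u ) ? -static_cast<int64_t>( b2 & 0x7fffffffu )
                                          : static_cast<int64_t>( b2 );
  return i1 - i2;
}

int main( int, char *[] )
{
  const float x = 1.0f;
  const float y = std::nextafter( x, 2.0f );  // the next representable float above x
  // y is one ULP above x, so the signed distance is -1, matching the
  // "!= -1" check used in the test above.
  std::cout << FloatDifferenceULP( x, y ) << std::endl;  // prints -1
  return 0;
}
</source>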

* When a class can throw an exception, write a condition (in a try/catch block) in the test that reproduces the exception condition.
* When appropriate, exercise a given method a number of times (with different arguments) so that the method is thoroughly tested.
* When required, print messages to the standard output (or error output) so that any errors that occur can be checked by a human reader.
* Whenever possible, test against a baseline (ground truth): if an object is not tested against a baseline, we will not know whether it is working as expected, even if its code coverage is 100%.
* Use the testing macros defined in itkTestingMacros.h. In particular, make sure that the macro EXERCISE_BASIC_OBJECT_METHODS is called right after the creation of the object whose class is being tested.
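The try/catch principle can be sketched in plain C++ without an ITK dependency. CheckRadius below is a hypothetical stand-in for a method of the class under test; a real ITK test would deliberately trigger and catch itk::ExceptionObject instead:

<source lang="cpp">
#include <cstdlib>
#include <iostream>
#include <stdexcept>

// CheckRadius is a hypothetical stand-in for a class method that validates
// its input; a real ITK class would throw itk::ExceptionObject rather than
// std::invalid_argument.
void CheckRadius( int radius )
{
  if( radius < 0 )
    {
    throw std::invalid_argument( "radius must be non-negative" );
    }
}

int main( int, char *[] )
{
  int testPassStatus = EXIT_SUCCESS;

  // Deliberately trigger the error condition and verify that the expected
  // exception is actually thrown; reaching the line after the call means
  // the test failed.
  try
    {
    CheckRadius( -1 );
    std::cout << "Expected exception was not thrown." << std::endl;
    testPassStatus = EXIT_FAILURE;
    }
  catch( const std::invalid_argument & excp )
    {
    std::cout << "Caught expected exception: " << excp.what() << std::endl;
    }

  return testPassStatus;
}
</source>

Note that, following the aggregation principle above, the failure is recorded in testPassStatus and returned at the end rather than returned immediately.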