Master Thesis

Type of project:
Master Thesis, March 2009 - September 2009

Author:
Serge Gebhardt

Title:
Satisfying Test Preconditions through Guided Object Selection

Supervising Assistant:
Yi Wei

Description

A random testing strategy for object-oriented software constructs test cases by performing two tasks: 1) randomly select a method under test (MUT); 2) randomly select or construct objects to feed to the chosen method as target or arguments. Usually, all objects created for or returned by a MUT are stored in an object pool so they can be reused in future test cases. For OO software equipped with contracts, however, a random strategy has difficulty selecting objects that satisfy the precondition of the MUT. As a result, some methods are never tested, because all generated test cases fail to satisfy their preconditions.
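
To make the two tasks concrete, here is a minimal sketch of the basic loop in Python (AutoTest itself is written in Eiffel; ObjectPool, random_test_step, and the (name, arity) method representation are illustrative assumptions, not AutoTest's API):

    import random

    class ObjectPool:
        """Objects created for or returned by tested methods, kept for reuse."""
        def __init__(self):
            self.objects = []

        def add(self, obj):
            if obj is not None:
                self.objects.append(obj)

        def pick(self):
            return random.choice(self.objects) if self.objects else None

    def random_test_step(methods, pool):
        """One test case: methods is a list of (name, arity) pairs."""
        # Task 1: randomly select a method under test (MUT).
        name, arity = random.choice(methods)
        # Task 2: randomly select a target and argument objects from the pool.
        target = pool.pick()
        args = [pool.pick() for _ in range(arity)]
        try:
            result = getattr(target, name)(*args)
            pool.add(result)  # returned objects are reused in later test cases
        except Exception:
            pass              # precondition violation or fault; logged in practice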

An evaluation of the object pool showed that it often already contains object combinations that satisfy a MUT's precondition, but the traditional strategy fails to select them. We therefore keep track of these object combinations during the testing process and select them directly when testing the corresponding MUTs. We call this the guided object selection strategy.

We implemented the idea in our testing tool AutoTest for Eiffel. We introduced a predicate pool that keeps track of which object combinations satisfy the precondition clauses of a given method: all preconditions appearing in the classes under test are collected into the pool. After each test case run, these predicates are evaluated against the objects used in that test case, and object combinations satisfying a predicate are recorded in the pool and associated with that predicate. When a method is later selected for testing, objects satisfying its precondition predicates, as recorded in the predicate pool, can be selected directly. This is guided object selection.
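
A minimal sketch of the predicate pool, again in Python and again illustrative only (Predicate, PredicatePool, record, and select_for are hypothetical names; the real implementation is part of AutoTest for Eiffel):

    import itertools
    import random

    class Predicate:
        """One precondition clause: an arity plus a boolean evaluation function."""
        def __init__(self, arity, holds):
            self.arity = arity
            self.holds = holds

    class PredicatePool:
        def __init__(self, predicates):
            # Map each precondition predicate to the object combinations
            # observed to satisfy it.
            self.satisfying = {p: [] for p in predicates}

        def record(self, objects):
            # After each test case run, evaluate every predicate against the
            # objects used in that test case; remember satisfying combinations.
            for pred, combos in self.satisfying.items():
                for combo in itertools.permutations(objects, pred.arity):
                    try:
                        if pred.holds(*combo):
                            combos.append(combo)
                    except Exception:
                        pass  # predicate not evaluable on this combination

        def select_for(self, pred):
            # Guided selection: return a combination known to satisfy pred,
            # or None so the caller can fall back to random selection.
            combos = self.satisfying.get(pred, [])
            return random.choice(combos) if combos else None

When the pool has no recorded combination for a predicate, the caller falls back to random selection, so guided selection extends rather than replaces the original loop.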

Experimentally, the resulting strategy succeeds in testing 56% of the routines that the pure random strategy missed; it tests hard routines 3.6 times more often; although it misses some of the faults detected by the original strategy, it finds 9.5% more faults overall; and it causes no noticeable overhead.

Report