AutoTest: Contract-based random testing tool

Overview

Assertions, as prescribed by the Design by Contract(TM) software development methodology, embed the specification of a system in its source code. They therefore provide an automated oracle for the testing process. Based on this insight, we are developing a tool for fully automatic testing of object-oriented code.
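As a minimal illustration (a textbook-style sketch of ours, not code from AutoTest or EiffelBase), consider an Eiffel class whose contracts state what any correct implementation must achieve. With contract monitoring enabled, every execution checks these assertions, so a violated postcondition or invariant reveals a fault without any hand-written oracle:

    class ACCOUNT

    create
        make

    feature -- Initialization

        make
                -- Create an empty account.
            do
                balance := 0
            ensure
                balance_zero: balance = 0
            end

    feature -- Access

        balance: INTEGER
                -- Current balance.

    feature -- Element change

        deposit (amount: INTEGER)
                -- Add `amount' to the balance.
            require
                amount_positive: amount > 0
            do
                balance := balance + amount
            ensure
                balance_increased: balance = old balance + amount
            end

    invariant
        balance_non_negative: balance >= 0

    end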

Properly testing medium to large-scale applications requires enormous effort. Practice shows that testing resources are often under-allocated and, furthermore, that testing tools are inadequate. Hence the idea of automatic testing. If developers could write code in the morning, press a button before leaving for lunch, and find on their return a report saying "Your method m1 of class C1 contains a bug. Tests passed for all other methods of your class.", it would be safe to assume that released software products would be far less buggy. Testing would only require the "time and effort" of a machine and, ideally, would itself be much less error-prone.

Basic concepts

The goal of our work is to fully automate the entire testing process. The difficulty lies in automating two tasks: generating test inputs and verifying test results (the oracle problem).

For both of these tasks, the knowledge of a human tester is usually required. However, when the specification of a system is part of its source code, an automated oracle becomes freely available. This idea is the basis of Design by Contract(TM), and our approach accordingly takes full advantage of the contracts already present in Eiffel code. Using contracts as a readily available automated oracle does, however, raise some problems.
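To make the oracle role concrete, here is a sketch of what a generated test case amounts to, reusing the illustrative ACCOUNT class above (the feature name test_deposit is ours):

    test_deposit
            -- Sketch of a generated test case: a sequence of creation
            -- and routine calls.  Contract monitoring is the oracle: a
            -- precondition failing at the call site means the generated
            -- input was invalid and the case is discarded; a postcondition
            -- or invariant failing inside `deposit' reveals a fault.
        local
            a: ACCOUNT
        do
            create a.make
            a.deposit (42)
        end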

Automatic generation of input data also presents difficulties.

Several further issues arise when the testing process becomes fully automatic.

What we have achieved so far

2005 ~ 2007: Basic framework development

The main contribution of our work is a fully automatic testing process, implemented in a tool called AutoTest. A large body of related work exists on this topic, but most approaches automate only certain steps, not the entire process. Many approaches that claim to be completely automatic turn out to require manual intervention, or leave so many cases uncovered that they become unusable for practical purposes. Furthermore, one of the main advantages of our work is that we can evaluate our approach on full-fledged, industrial-scale applications; we do not need to create toy examples (which might also be biased), as is the case for much of the related work that addresses Java code equipped with JML contracts.

We have used our tool to test production-quality libraries. Here are a few examples of classes (taken from the EiffelBase 5.6 library) that we tested, together with the number of distinct bugs we found in each; the counts include bugs in supplier classes that prevent the tested classes from working correctly:

2008: Adaptive Random Testing

This technique uses the notion of object distance: AutoTest tries to generate new test inputs that are far away from the inputs it has already used, on the premise that more diverse inputs are more likely to exercise new behavior.
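The following is a minimal sketch of the selection step, shrunk to integers with absolute difference standing in for AutoTest's object distance (which also accounts for type and field differences); the feature name farthest_candidate is ours, for illustration only:

    farthest_candidate (candidates, used: ARRAYED_LIST [INTEGER]): INTEGER
            -- The candidate whose minimum distance to the already used
            -- inputs is largest; with no history, the first candidate wins.
        require
            has_candidates: not candidates.is_empty
        local
            best_score, score, d: INTEGER
        do
            best_score := -1
            across candidates as c loop
                -- Minimum distance from this candidate to all used inputs.
                score := {INTEGER}.max_value
                across used as u loop
                    d := (c.item - u.item).abs
                    if d < score then
                        score := d
                    end
                end
                -- Keep the candidate farthest from everything used so far.
                if score > best_score then
                    best_score := score
                    Result := c.item
                end
            end
        end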

2008.12: Integration into EiffelStudio

We integrated AutoTest into EiffelStudio, which has been open-source since 2006. A couple of improvements came along with the integration: test generation speed was doubled, and more generic classes can now be handled.

2009.1: Branch Coverage Is Not a Good Test Stopping Criterion

We conducted long testing experiments and found that branch coverage is not a good stopping criterion: over 50% of the faults were found during periods in which branch coverage hardly increased.

2009.7: Precondition Satisfaction

We developed a technique, Precondition Satisfaction, that helps AutoTest generate test inputs satisfying the precondition of the feature under test. With this technique, AutoTest is able to test 56% of the features that it could not test before, with negligible overhead.
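To illustrate the problem this technique addresses (with a hypothetical feature of ours, not AutoTest's API): unguided random generation satisfies a demanding precondition only by chance, so most generated calls to such a feature are discarded as invalid inputs.

    pop_two (s: ARRAYED_STACK [INTEGER])
            -- Remove the two topmost elements of `s'.
            -- Random call sequences produce a stack with `count >= 2'
            -- only by chance; precondition-aware generation builds a
            -- satisfying `s' deliberately, e.g. by preceding the call
            -- with two `put' operations.
        require
            deep_enough: s.count >= 2
        do
            s.remove
            s.remove
        ensure
            two_fewer: s.count = old s.count - 2
        end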

2010.1: Test Case Serialization/Deserialization and Object-State Retrieval

We introduced test case serialization/deserialization in AutoTest. AutoTest can now generate a self-contained test case together with all the input objects it needs. Such test cases are very easy to debug, and they also come with object-state information, which is used for object-state exploration.

2010.12: Agent Object Generation

With Gabriel Zacharias Walch, we made AutoTest capable of testing features with agent arguments. Features such as for_all and do_all can now be tested easily.
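As a small sketch of the kind of call that can now be exercised (is_positive is a hypothetical helper introduced here for illustration):

    test_for_all
            -- A call taking an agent argument: `for_all' applies the
            -- passed predicate to every element and reports whether
            -- all elements satisfy it.
        local
            l: ARRAYED_LIST [INTEGER]
        do
            create l.make (3)
            l.extend (1)
            l.extend (2)
            l.extend (3)
            check all_positive: l.for_all (agent is_positive) end
        end

    is_positive (n: INTEGER): BOOLEAN
            -- Is `n' strictly positive?
        do
            Result := n > 0
        end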

2011.4: Stateful testing

We devised stateful testing, a new automated testing strategy that generates test cases improving over an existing test suite. Stateful testing generates test cases that violate the dynamically inferred contracts (invariants) characterizing the existing suite; such cases are likely to detect new errors, and they improve the accuracy of the inferred contracts by discovering those that are unsound. Experiments presented in the paper, on 13 data structure classes totalling over 28,000 lines of code, demonstrate the efficacy of stateful testing in improving over the results of long sessions of random testing: stateful testing found 70% new errors and improved the accuracy of automatically inferred contracts to over 99%, with just a 6.5% time overhead.
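As a concrete, entirely hypothetical sketch of the idea: suppose dynamic inference over an existing random suite for LINKED_LIST only ever observed states with at most ten elements and therefore proposed count <= 10 as an invariant. Stateful testing would then generate a test deliberately driving an object past that boundary:

    break_inferred_bound (l: LINKED_LIST [INTEGER])
            -- Drive `l' into a state violating the hypothetically
            -- inferred property `count <= 10'.  Exercising the class in
            -- this richer state either exposes a genuine fault or shows
            -- the inferred "invariant" to be unsound, so that it can be
            -- discarded.
        local
            i: INTEGER
        do
            from i := 1 until i > 11 loop
                l.extend (i)
                i := i + 1
            end
        ensure
            beyond_observed_states: l.count > 10
        end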

Download experimental data (642MB)

Student projects

We are glad to collaborate with you in the testing area. We also offer master's thesis, bachelor's thesis, and Software Engineering Laboratory projects; the projects are described here.

Download

Since the end of 2008, AutoTest has been integrated into the Eiffel Verification Environment (EVE) project, a research branch of EiffelStudio; the source code is available here.

Project members

Former members

Funding

The following organizations have contributed to funding the AutoTest project:

Publications