AutoTest: Contract-based random testing tool
Overview
Assertions, as described in the Design by Contract(TM) software development methodology, contain the specification of the system. Thus, they provide an automated oracle for the testing process. Based on this insight, we are developing a tool for fully automatic testing of object-oriented code.
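For instance, in the minimal sketch below (a hypothetical ACCOUNT class, introduced here only for illustration), the contracts themselves deliver the verdict: any call that satisfies the precondition but then violates the postcondition or the class invariant exposes a bug, with no hand-written oracle needed.

    class
        ACCOUNT

    feature -- Access

        balance: INTEGER
                -- Current balance.

    feature -- Element change

        deposit (amount: INTEGER)
                -- Add `amount' to the balance.
            require
                amount_positive: amount > 0
            do
                balance := balance + amount
            ensure
                balance_increased: balance = old balance + amount
            end

    invariant
        balance_non_negative: balance >= 0

    end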
Properly testing medium to large-scale applications requires enormous effort. Practice shows that resources are often under-allocated and that testing tools are inadequate; hence the idea of automatic testing. If developers could write their code in the morning, press a button before leaving for lunch, and return to a report saying "Your method m1 of class C1 contains a bug. Tests passed for all other methods of your class.", released software products would arguably be far less buggy. Testing would require only the time and effort of a machine and, ideally, would itself be much less error-prone.
Basic concepts
The goal of our work is to fully automate the entire testing process. The difficulty lies in automating:
- Generation of input values
- The oracle
For both of these tasks, the knowledge of a human tester is usually required. However, when the specification of a system is part of its source code, an automated oracle becomes freely available. This idea is the basis of Design by Contract(TM) and thus our approach takes full advantage of existing contracts in Eiffel code. However, using contracts as an automated, already available oracle raises some problems:
- Contracts very rarely (if ever) contain the full specification of the software under test. To give only one example, the postcondition of a routine should state both what the routine changes and what it leaves unchanged: a print routine should state that it adds text to the screen, but also that it does not increment a counter, connect to a database, and so on. For systems operating in open environments, expressing this is impossible with the mechanisms that Design by Contract(TM), as implemented in Eiffel, currently offers. This situation is known as the "frame problem"; the sketch after this list illustrates it.
- Contracts may be not only incomplete but also incorrect: they might not represent the real specification of the software, either because developers made mistakes when translating the natural-language specification into programming-language constructs, or because the requirements document itself is incorrect or incomplete.
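As a hypothetical sketch of the frame problem (assuming attributes output: STRING and counter: INTEGER), the postcondition below states what the routine changes but says nothing about what it must leave alone:

    print_line (s: STRING)
            -- Append `s' followed by a newline to `output'.
        do
            output.append (s)
            output.append_character ('%N')
            counter := counter + 1
                -- Unintended side effect; no contract rules it out.
        ensure
            text_appended: output.count = old output.count + s.count + 1
            -- Nothing is said about `counter', open files, database
            -- connections, and so on: this is the frame problem.
        end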
Automatic generation of input data also presents some difficulties:
- The precondition of the routine under test must be fulfilled; otherwise the test case is invalid. Strong preconditions might never be satisfied (within a restricted time frame) by a random input-value generator, as the sketch after this list illustrates.
- Generated values, in addition to satisfying the precondition of the tested routine, must fulfill some test adequacy criteria (such as code or specification coverage).
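For instance, in the hypothetical routine below (with is_sorted an auxiliary query introduced for illustration), a purely random generator has almost no chance of producing a sorted array, so nearly every generated test case would be discarded as invalid:

    binary_search (a: ARRAY [INTEGER]; key: INTEGER): INTEGER
            -- Index of `key' in `a', or 0 if absent.
        require
            a_sorted: is_sorted (a)
        do
            -- (Search body omitted; the point is the precondition.)
        end

    is_sorted (a: ARRAY [INTEGER]): BOOLEAN
            -- Is `a' sorted in ascending order?
        do
            Result := True
            across (a.lower + 1) |..| a.upper as i loop
                if a [i.item - 1] > a [i.item] then
                    Result := False
                end
            end
        end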
There are several further issues to consider when the testing process becomes fully automatic:
- What degree of control can the user have over it? How can we parameterize it so that users can adapt it to their needs?
- Automatic integration of test results
- Fault isolation
- Finding the minimal failure-reproducing examples
What we have achieved so far
2005 ~ 2007: Basic framework development
The main contribution of our work is a fully automatic testing process implemented in a tool called AutoTest. A large body of related work exists on this topic, but most approaches automate only certain steps, not the entire process. Many approaches that claim to be completely automatic turn out to require manual intervention or to leave so many cases uncovered that they become unusable for practical purposes. Furthermore, one of the main advantages of our work is that we can test our approach on full-fledged, industrial-scale applications; we do not need to create toy examples (which might also be biased), as is the case for much of the related work addressing Java code equipped with JML contracts.
We have used our tool to test production-quality libraries. The following examples show classes from the EiffelBase 5.6 library that we tested and the number of distinct bugs found in each, including bugs in supplier classes that prevent the tested classes from working correctly:
- STRING: 12
- BOUNDED_STACK: 11
- HASH_TABLE: 23
- ARRAYED_SET: 27
- LINKED_LIST: 25
2008: Adaptive Random Testing
This technique uses the notion of object distance: AutoTest tries to generate new test inputs that are far away from the inputs it has already used.
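The following is a much-simplified sketch of the selection rule (the actual ARTOO distance combines type distance, elementary-value distance, and recursive distance over fields; object_distance below is only a crude stand-in): among a set of candidate inputs, choose the one whose minimum distance to the already-used inputs is largest.

    furthest_candidate (candidates, used: LIST [ANY]): ANY
            -- Candidate whose minimum distance to the inputs in `used'
            -- is largest: the core adaptive random testing rule.
        require
            has_candidates: not candidates.is_empty
        local
            best, score, d: REAL_64
        do
            best := -1.0
            across candidates as c loop
                -- Minimum distance from this candidate to any used input.
                score := 1.0e308
                across used as u loop
                    d := object_distance (c.item, u.item)
                    if d < score then
                        score := d
                    end
                end
                if score > best then
                    best := score
                    Result := c.item
                end
            end
        end

    object_distance (a, b: ANY): REAL_64
            -- Crude stand-in for the ARTOO object distance: 0 for
            -- equal objects, penalties for type and content differences.
        do
            if a /= b then
                if not a.same_type (b) then
                    Result := Result + 1.0
                end
                if not a.is_deep_equal (b) then
                    Result := Result + 0.5
                end
            end
        end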
2008.12: Integration into EiffelStudio
We integrated AutoTest into EiffelStudio, which has been open-source since 2006. The integration also brought several improvements: test generation speed doubled, and more generic classes can now be handled.
2009.1: Branch Coverage Is Not a Good Test Stopping Criterion
We conducted long testing experiments and found that branch coverage is not a good stopping criterion: over 50% of the faults are found in a period when branch coverage hardly increases.
2009.7: Precondition Satisfaction
We developed a technique, Precondition Satisfaction, to help AutoTest generate test inputs that satisfy the precondition of the feature under test. With this technique, AutoTest is able to test 56% of the features it could not test before, with negligible overhead.
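The guided object selection described in the ICST'10 paper listed below is considerably more elaborate, but a minimal sketch of the underlying idea (with the precondition represented as an agent) looks like this: instead of generating inputs blindly, draw from the object pool an object already known to satisfy the precondition, and fall back to random generation only when none does.

    satisfying_input (pool: LIST [ANY]; pre: PREDICATE [ANY]): detachable ANY
            -- An object from `pool' that satisfies precondition `pre',
            -- or Void if the pool contains none.
        do
            across pool as o loop
                if Result = Void and then pre.item ([o.item]) then
                    Result := o.item
                end
            end
        end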
2010.1: Test Case Serialization/Deserialization and Object-State Retrieval
We introduced test case serialization/deserialization in AutoTest. AutoTest can now extract a self-contained test case together with all the input objects it needs. Such test cases are easy to debug, and they also come with object-state information, which is used for object-state exploration.
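For illustration, an extracted test case has roughly the following shape (class, feature, and input values here are hypothetical; in EiffelStudio, extracted tests inherit from the testing framework class EQA_TEST_SET):

    class
        TEST_STRING_APPEND

    inherit
        EQA_TEST_SET

    feature -- Test routines

        test_append
                -- Replay a generated call to {STRING}.append.
            local
                s: STRING
            do
                -- Recreate the input objects from the serialized state.
                create s.make_from_string ("abc")
                -- Replay the call; the contracts act as the oracle,
                -- plus an explicit assertion on the observed result.
                s.append ("def")
                assert ("appended", s.same_string ("abcdef"))
            end

    end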
2010.12: Agent Object Generation
With Gabriel Zacharias Walch, we made AutoTest capable of testing features with agent arguments. Features such as for_all and do_all can now be tested easily.
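For example, AutoTest can now exercise calls like the following, where the arguments are inline agents (a hypothetical usage sketch):

    test_agent_arguments
            -- Exercise features taking agent arguments.
        local
            l: LINKED_LIST [INTEGER]
        do
            create l.make
            l.extend (1)
            l.extend (2)
            l.extend (3)
            -- `do_all' takes a procedure agent, `for_all' a predicate agent.
            l.do_all (agent (i: INTEGER) do print (i.out + "%N") end)
            check
                all_positive: l.for_all (agent (i: INTEGER): BOOLEAN do Result := i > 0 end)
            end
        end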
2011.4: Stateful testing
We devised stateful testing, a new automated testing strategy that generates test cases improving over an existing test suite. Stateful testing generates test cases that violate the dynamically inferred contracts (invariants) characterizing the existing test suite; such test cases are likely to detect new errors, and they improve the accuracy of the inferred contracts by discovering those that are unsound. Experiments on 13 data structure classes totalling over 28,000 lines of code demonstrate the efficacy of stateful testing in improving over the results of long sessions of random testing: stateful testing found 70% new errors and improved the accuracy of automatically inferred contracts to over 99%, with just a 6.5% time overhead.
Student projects
We are glad to collaborate with you in the testing area. We also offer master's thesis, bachelor's thesis, and Software Engineering Laboratory projects. The projects are described here.
Download
Since the end of 2008, AutoTest has been integrated into the Eiffel Verification Environment (EVE) project, a research branch of EiffelStudio; the source code is available here.
Project members
Former members
Funding
The following organizations have contributed to funding the AutoTest project:
- Gebert Rüf Stiftung: ManCom project
- SNF (Swiss National Science Foundation): Scientific Assessment of Testing Strategies (SATS) project
- SNF (Swiss National Science Foundation): Large Scale Automatic Testing (LSAT) project
Publications
- Wei, Y., Gebhardt, S., Meyer, B., Oriol, M., "Satisfying Test Preconditions through Guided Object Selection", Third International Conference on Software Testing, Verification and Validation (ICST'10), April 2010
- Meyer, B., Fiva, A., Ciupa, I., Leitner, A., Wei, Y., Stapf, E., "Programs That Test Themselves", IEEE Software, 2010
- Ciupa, I., Leitner, A., Oriol, M., Meyer, B., "ARTOO: Adaptive Random Testing for Object-Oriented Software", Proceedings of ICSE'08: International Conference on Software Engineering 2008, (Leipzig, Germany), May 2008
- Ciupa, I., Pretschner, A., Leitner, A., Oriol, M., Meyer, B., "On the Predictability of Random Tests for Object-Oriented Software", Proceedings of ICST'08: IEEE International Conference on Software Testing, Verification and Validation 2008, (Lillehammer, Norway), April 2008
- Ciupa, I., Leitner, A., Oriol, M., Meyer, B., "Experimental Assessment of Random Testing for Object-Oriented Software", Proceedings of ISSTA'07: International Symposium on Software Testing and Analysis 2007, (London, UK), July 2007
- Leitner, A., Eugster, P., Oriol, M., Ciupa, I., "Reflecting on an Existing Programming Language", Proceedings of TOOLS Europe 2007 - Objects, Models, Components, Patterns, (Zurich, Switzerland), June 2007
- Meyer, B., Ciupa, I., Leitner, A., Liu, L., "Automatic Testing of Object-Oriented Software", in Proceedings of SOFSEM 2007: Current Trends in Theory and Practice of Computer Science, (Harrachov, Czech Republic), January 2007 (also published as ETH Technical Report 538, 11/2006)
- Leitner, A., Ciupa, I., Meyer, B., Howard, M., "Reconciling Manual and Automated Testing: the AutoTest Experience", in Proceedings of the 40th Hawaii International Conference on System Sciences, (Waikoloa, Big Island, Hawaii), January 2007
- Ciupa, I., Leitner, A., "Automatic Testing Based on Design by Contract", in Proceedings of Net.ObjectDays 2005: 6th Annual International Conference on Object-Oriented and Internet-based Technologies, Concepts, and Applications for a Networked World, (Erfurt, Germany), September 2005.
- Paper presented in the Developer Session of the International Workshop on Software Quality (SOQUA 2005)
- Arnout, K., Rousselot, X., Meyer, B., "Test Wizard: Automatic test case generation based on Design by Contract(TM)", draft paper describing a tool from which AutoTest inherits some of its features.