Distributed testing sessions for AutoTest
Master's Thesis at the University of Lorraine (France), March 2014 — September 2014
Author: Victorien Elvinger
Supervisors: Chris Poskitt, Alexey Kolesnichenko, and Max (Yu) Pei
AutoTest is an automatic testing framework for the Eiffel programming language. It exploits the presence of contracts — executable preconditions, postconditions, and invariants — to automate the entire testing process, using them both as built-in filters for test inputs and as oracles for test execution. The framework has evolved over a number of years and PhD theses, has been integrated with several test data generation strategies, and its efficacy has been demonstrated by automatically unearthing previously undetected bugs in production-level libraries such as EiffelBase.
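The contract-based workflow described above can be sketched as follows. The example is a hypothetical Python stand-in, not AutoTest's actual Eiffel machinery: a routine's precondition filters out invalid randomly generated inputs, and its postcondition serves as the oracle that flags failing executions. All names (`sqrt_floor`, `autotest_session`, `ContractViolation`) are illustrative.

```python
import random

class ContractViolation(Exception):
    """Raised when a postcondition (the test oracle) fails."""

def sqrt_floor(n):
    # Precondition (built-in input filter): n must be non-negative.
    assert n >= 0, "precondition: n >= 0"
    r = int(n ** 0.5)
    # Postcondition (oracle): r is the integer square root of n.
    if not (r * r <= n < (r + 1) * (r + 1)):
        raise ContractViolation(f"postcondition failed for n={n}")
    return r

def autotest_session(routine, candidates, trials=100):
    """Randomly exercise a routine, skipping inputs its precondition rejects."""
    executed = failures = 0
    for _ in range(trials):
        n = random.choice(candidates)
        try:
            routine(n)             # precondition checked on entry
        except AssertionError:
            continue               # invalid input: filtered out, try another
        except ContractViolation:
            failures += 1          # oracle detected a bug
        executed += 1
    return executed, failures
```

A run such as `autotest_session(sqrt_floor, range(-10, 1000))` executes only the non-negative inputs and reports zero failures, since the routine satisfies its contract on that range.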
Despite the merits of the system, one factor limiting its effective use is the amount of time required to test an Eiffel program (AutoTest generates only around 20 test inputs per minute) — a factor which, if addressed, would move the system towards realising a long-term aim: automatically providing appropriate feedback while the programmer is programming.
A first step towards addressing this performance limitation is to explore the large-scale distribution facilitated by cloud computing — in particular, the possibility of multiple testing sessions executing in parallel across multiple computational resources. For this we would learn from and build upon related work by other researchers who have exploited the cloud for software testing problems.