Publications by Carlo A. Furia

Books and edited proceedings

Research papers

Position papers, invited presentations, reviews, and tutorials

Theses

Technical reports


Details

Mohammad Rezaalipour and Carlo A. Furia.
An Empirical Study of Fault Localization in Python Programs.
Empirical Software Engineering, to appear.
Springer, 2024.

Abstract
Despite its massive popularity as a programming language, especially in novel domains like data science programs, there is comparatively little research about fault localization that targets Python. Even though it is plausible that several findings about programming languages like C/C++ and Java — the most common choices for fault localization research — carry over to other languages, whether the dynamic nature of Python and how the language is used in practice affect the capabilities of classic fault localization approaches remain open questions to investigate.
This paper is the first multi-family large-scale empirical study of fault localization on real-world Python programs and faults. Using Zou et al.'s recent large-scale empirical study of fault localization in Java as the basis of our study, we investigated the effectiveness (i.e., localization accuracy), efficiency (i.e., runtime performance), and other features (e.g., different entity granularities) of seven well-known fault-localization techniques in four families (spectrum-based, mutation-based, predicate switching, and stack-trace based) on 135 faults from 13 open-source Python projects from the BugsInPy curated collection.
The results replicate for Python several results known about Java, and shed light on whether Python's peculiarities affect the capabilities of fault localization. The replication package that accompanies this paper includes detailed data about our experiments, as well as the tool FauxPy that we implemented to conduct the study.
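For readers who want a concrete picture of the spectrum-based family evaluated in the study, the following small Python sketch (with made-up coverage data; this is not FauxPy's implementation) shows how a formula such as Ochiai ranks program lines by suspiciousness from per-test coverage:

import math

# Made-up coverage spectra: for each test, the set of covered lines and whether it passed.
spectra = {
    "test_ok_1": ({1, 2, 3}, True),
    "test_ok_2": ({1, 2, 4}, True),
    "test_fail": ({1, 2, 5}, False),
}

def ochiai_ranking(spectra):
    """Rank covered lines by the Ochiai suspiciousness formula."""
    lines = set().union(*(coverage for coverage, _ in spectra.values()))
    total_failed = sum(1 for _, passed in spectra.values() if not passed)
    scores = {}
    for line in lines:
        ef = sum(1 for cov, passed in spectra.values() if line in cov and not passed)
        ep = sum(1 for cov, passed in spectra.values() if line in cov and passed)
        denominator = math.sqrt(total_failed * (ef + ep))
        scores[line] = ef / denominator if denominator else 0.0
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(ochiai_ranking(spectra))  # line 5, covered only by the failing test, ranks first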

@Article{RF-EMSE24-FauxPy,
  author = {Mohammad Rezaalipour and Carlo A. Furia},
  title = {An Empirical Study of Fault Localization in {P}ython Programs},
  journal = {Empirical Software Engineering},
  year = {2024},
  OPTvolume = {XXX},
  OPTpages = {XXX--XXX},
  OPTmonth = {},
  publisher = {Springer},
  note = {Accepted in March 2024.}
}

Riccardo Felici, Laura Pozzi, and Carlo A. Furia.
HyperPUT: Generating Synthetic Faulty Programs to Challenge Bug-Finding Tools.
Empirical Software Engineering, to appear.
Springer, 2023.

Abstract
As research in automatically detecting bugs grows and produces new techniques, having suitable collections of programs with known bugs becomes crucial to reliably and meaningfully compare the effectiveness of these techniques. Most of the existing approaches rely on benchmarks collecting manually curated real-world bugs, or synthetic bugs seeded into real-world programs. Using real-world programs entails that extending the existing benchmarks or creating new ones remains a complex time-consuming task.
In this paper, we propose a complementary approach that automatically generates programs with seeded bugs. Our technique, called HyperPut, builds C programs from a "seed" bug by incrementally applying program transformations (introducing programming constructs such as conditionals, loops, etc.) until a program of the desired size is generated. In our experimental evaluation, we demonstrate how HyperPut can generate buggy programs that can challenge in different ways the capabilities of modern bug-finding tools, and some of whose characteristics are comparable to those of bugs in existing benchmarks. These results suggest that HyperPut can be a useful tool to support further research in bug-finding techniques — in particular their empirical evaluation.
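As a rough illustration of the incremental-transformation idea (HyperPUT's actual transformations, seed bugs, and parameters differ), this Python sketch wraps a seed bug in randomly chosen C constructs, so that reaching the bug requires a bug-finding tool to reason through the added conditionals and loops:

import random

SEED_BUG = 'abort();  /* the "seed" bug that generated programs should reach */'

def wrap_in_conditional(body, depth):
    # Reaching the bug now also requires satisfying this branch condition.
    return f"if (argc > {depth}) {{\n{body}\n}}"

def wrap_in_loop(body, depth):
    return f"for (int i{depth} = 0; i{depth} < argc; i{depth}++) {{\n{body}\n}}"

def generate(n_transformations, rng=random.Random(0)):
    body = SEED_BUG
    for depth in range(n_transformations):
        transformation = rng.choice([wrap_in_conditional, wrap_in_loop])
        body = transformation(body, depth)
    return ("#include <stdlib.h>\n"
            "int main(int argc, char **argv) {\n" + body + "\nreturn 0;\n}\n")

print(generate(3))  # emits the text of a small C program containing the seeded bug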

@Article{FPF-EMSE23-HyperPut,
  author = {Riccardo Felici and Laura Pozzi and Carlo A. Furia},
  title = {{HyperPUT}: Generating Synthetic Faulty Programs to Challenge Bug-Finding Tools},
  journal = {Empirical Software Engineering},
  year = {2023},
  OPTvolume = {XXX},
  OPTpages = {XXX--XXX},
  OPTmonth = {},
  publisher = {Springer},
  note = {Accepted in November 2023.}
}

Diego Marcilio and Carlo A. Furia.
Lightweight Precise Automatic Extraction of Exception Preconditions in Java Methods.
Empirical Software Engineering, to appear.
Springer, 2023.

Abstract
The condition under which a method throws an exception — its exception precondition — is a crucial element of the method's documentation that clients should know in order to use it properly. Unfortunately, exceptional behavior is often poorly documented, and sensitive to changes in a project's implementation details that can be onerous to keep synchronized with the documentation.
We present Wit, an automated technique that extracts the exception preconditions of Java methods and constructors. Wit uses static analysis to analyze the paths in a method's implementation that lead to throwing an exception. Wit's analysis is precise, in that it only reports exception preconditions that are correct and correspond to feasible exceptional behavior. It is also lightweight: it only needs the source code of the class (or classes) to be analyzed — without building or running the whole project. To this end, its design uses heuristics that give up some completeness (Wit cannot infer all exception preconditions) in exchange for precision and ease of applicability.
We ran Wit on the JDK and 46 Java projects, where it discovered 30'487 exception preconditions in 24'461 methods, taking less than two seconds per analyzed public method on average. A manual analysis of a significant sample of these exception preconditions confirmed that Wit is 100% precise, and demonstrated that it can accurately and automatically document the exceptional behavior of Java methods.
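Wit itself statically analyzes Java source code; as a rough analogue of the underlying idea, the following Python sketch extracts the conditions guarding top-level raise statements of a function (the example function and its preconditions are made up):

import ast
import textwrap

def exception_preconditions(source):
    """Collect (condition, exception) pairs for top-level `if condition: raise ...` guards."""
    function = ast.parse(textwrap.dedent(source)).body[0]
    guards = []
    for node in function.body:
        if (isinstance(node, ast.If) and len(node.body) == 1
                and isinstance(node.body[0], ast.Raise)):
            guards.append((ast.unparse(node.test), ast.unparse(node.body[0].exc)))
    return guards

example = """
def withdraw(balance, amount):
    if amount < 0:
        raise ValueError("negative amount")
    if amount > balance:
        raise RuntimeError("insufficient funds")
    return balance - amount
"""
print(exception_preconditions(example))
# [('amount < 0', "ValueError('negative amount')"), ('amount > balance', "RuntimeError('insufficient funds')")]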

  • The paper (updated, improved, and extended text) and its DOI.
  • This paper extends the ICSME paper with a similar title.
@Article{MF-EMSE23-Wit,
  author = {Diego Marcilio and Carlo A. Furia},
  title = {Lightweight Precise Automatic Extraction of Exception Preconditions in {J}ava Methods},
  journal = {Empirical Software Engineering},
  year = {2023},
  OPTvolume = {XXX},
  OPTpages = {XXX--XXX},
  OPTmonth = {},
  publisher = {Springer},
  note = {Accepted in September 2023.}
}

Marco Paganoni and Carlo A. Furia.
Reasoning About Exceptional Behavior At the Level of Java Bytecode.
In Proceedings of the 18th International Conference on integrated Formal Methods (iFM).
Lecture Notes in Computer Science, 14300:113–133, Springer, November 2023.

Abstract
A program's exceptional behavior can substantially complicate its control flow, and hence the task of accurately reasoning about the program's correctness. On the other hand, formally verifying realistic programs is likely to involve exceptions — a ubiquitous feature in modern programming languages.
In this paper, we present a novel approach to verify the exceptional behavior of Java programs, which extends our previous work on ByteBack. ByteBack works on a program's bytecode, while providing means to specify the intended behavior at the source-code level; this approach sets ByteBack apart from most state-of-the-art verifiers that target source code. To explicitly model a program's exceptional behavior in a way that is amenable to formal reasoning, we introduce Vimp: a high-level bytecode representation that extends the Soot framework's Grimp with verification-oriented features, thus serving as an intermediate layer between bytecode and the Boogie intermediate verification language. Working on bytecode through this intermediate layer brings flexibility and adaptability to new language versions and variants: as our experiments demonstrate, ByteBack can verify programs involving exceptional behavior in all versions of Java, as well as in Scala and Kotlin (two other popular JVM languages).

  • The paper in PDF format (updated and improved text) and its DOI.
  • This paper won the best paper award at iFM 2023.
@InProceedings{PF-IFM23-BBExceptions,
  author = {Marco Paganoni and Carlo A. Furia},
  title = {Reasoning About Exceptional Behavior At the Level of {J}ava Bytecode},
  booktitle = {Proceedings of the 18th International Conference on integrated Formal Methods (iFM)},
  year = {2023},
  editor = {Paula Herber and Anton Wijs and Marcello M. Bonsangue},
  series = {Lecture Notes in Computer Science},
  pages = {113--133},
  volume = {14300},
  month = {November},
  publisher = {Springer},
  acceptancerate = {33\%}
}

Diego Marcilio and Carlo A. Furia.
Towards Code Improvements Suggestions from Client Exception Analysis.
In Proceedings of the 39th IEEE International Conference on Software Maintenance and Evolution (ICSME — NIER track).
pp. 363–368, IEEE Computer Society, October 2023.

Abstract
Modern software development heavily relies on reusing third-party libraries; this makes developers more productive, but may also lead to misuses or other kinds of design issues. In this paper, we focus on the exceptional behavior of library methods, and propose to detect client code that may trigger such exceptional behavior. As we demonstrate on several examples of open-source projects, exceptional behavior in clients often naturally suggests improvements to the documentation, tests, runtime checks, and annotations of the clients.
In order to automatically detect client calls that may trigger exceptional behavior in library methods, we show how to repurpose existing techniques to extract a method's exception precondition — the condition under which the method throws an exception. To demonstrate the feasibility of our approach, we applied it to 1,523 open-source Java projects, where it found 4,115 cases of calls to library methods that may result in an exception. We manually analyzed 100 of these cases, confirming that the approach is capable of uncovering several interesting opportunities for code improvements.
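The paper targets Java libraries and clients; the following Python sketch only illustrates the repurposing idea, pairing (made-up) extracted exception preconditions with client call sites that do not guard against them:

# Hypothetical exception preconditions extracted from library methods.
LIBRARY_PRECONDITIONS = {
    "Integer.parseInt": "argument is not a parsable integer -> NumberFormatException",
    "List.get": "index < 0 or index >= size() -> IndexOutOfBoundsException",
}

# Hypothetical client call sites found by scanning client code.
CLIENT_CALLS = [
    ("Config.java:42", "Integer.parseInt", "user-provided string, unchecked"),
    ("Report.java:18", "List.get", "index computed from input, unchecked"),
]

def suggest_improvements(calls, preconditions):
    for location, method, context in calls:
        if method in preconditions:
            yield (location,
                   f"{method} may throw ({preconditions[method]}); consider documenting, "
                   f"testing, or guarding this call ({context})")

for location, suggestion in suggest_improvements(CLIENT_CALLS, LIBRARY_PRECONDITIONS):
    print(location, "-", suggestion)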

  • The paper (updated, improved, and extended text) and its DOI.
@InProceedings{MF-ICSME23-nier-pothrows,
  author = {Diego Marcilio and Carlo A. Furia},
  title = {Towards Code Improvements Suggestions from Client Exception Analysis},
  booktitle = {Proceedings of the 39th IEEE International Conference 
               on Software Maintenance and Evolution (ICSME)},
  pages = {363--368},
  year = {2023},
  month = {October},
  publisher = {IEEE Computer Society},
  acceptancerate = {39\%},
  note = {New ideas and emerging results track}
}

Mohammad Rezaalipour and Carlo A. Furia.
aNNoTest: An Annotation-based Test Generation Tool for Neural Network Programs.
In Proceedings of the 39th IEEE International Conference on Software Maintenance and Evolution (ICSME — tool demo track).
pp. 574–579, IEEE Computer Society, October 2023.

Abstract
Even though neural network (NN) programs are often written in Python, using general-purpose test-generation tools for Python to test them is likely to be ineffective, as these tools do not support the particular input constraints that NN programs often require. To address this challenge, we present aNNoTest: an automated unit-test generation tool for NN programs written in Python. aNNoTest offers a simple annotation language that is suitable to concisely express the usual input constraints of NN programs; it then uses these annotations to precisely generate valid inputs that are capable of revealing bugs. This short paper describes how aNNoTest works in practice, and reports some experiments that demonstrate its effectiveness as a bug-finding tool for NN programs. aNNoTest is available as open source.

  • The paper (updated, improved, and extended text) and its DOI.
  • The tool aNNoTest described in the paper is available (specifically, release v0.1).
@InProceedings{RF-ICSME23-tool-annotest,
  author = {Mohammad Rezaalipour and Carlo A. Furia},
  title = {{aNNoTest}: An Annotation-based Test Generation Tool for Neural Network Programs},
  booktitle = {Proceedings of the 39th IEEE International Conference 
               on Software Maintenance and Evolution (ICSME)},
  pages = {574--579},
  year = {2023},
  month = {October},
  publisher = {IEEE Computer Society},
  acceptancerate = {61\%},
  note = {Tool demo track}
}

Carlo A. Furia, Richard Torkar, and Robert Feldt.
Towards Causal Analysis of Empirical Software Engineering Data:
The Impact of Programming Languages on Coding Competitions.
ACM Transactions on Software Engineering and Methodology, 33(1):13:1–35.
ACM, November 2023.

Abstract
There is abundant observational data in the software engineering domain, whereas running large-scale controlled experiments is often practically impossible. Thus, most empirical studies can only report statistical correlations — instead of potentially more insightful and robust causal relations.
To support analyzing purely observational data for causal relations, and to assess any differences between purely predictive and causal models of the same data, this paper discusses some novel techniques based on structural causal models (such as directed acyclic graphs of causal Bayesian networks). Using these techniques, one can rigorously express, and partially validate, causal hypotheses; and then use the causal information to guide the construction of a statistical model that captures genuine causal relations — such that correlation does imply causation.
We apply these ideas to analyzing public data about programmer performance in Code Jam, a large world-wide coding contest organized by Google every year. Specifically, we look at the impact of different programming languages on a participant's performance in the contest. While the overall effect associated with programming languages is weak compared to other variables — regardless of whether we consider correlational or causal links — we found considerable differences between a purely associational and a causal analysis of the very same data.
The takeaway message is that even an imperfect causal analysis of observational data can help answer the salient research questions more precisely and more robustly than with just purely predictive techniques — where genuine causal effects may be confounded.
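To make the contrast between associational and causal analysis concrete, here is a small self-contained simulation (not the paper's model or data) in which a confounder, programmer experience, inflates the apparent effect of language choice, while a crude backdoor adjustment recovers something close to the true effect:

import math
import random
from statistics import mean

rng = random.Random(1)

# Toy structural causal model: experience -> language choice, experience -> performance,
# and a small direct effect of language on performance (0.1).
population = []
for _ in range(10_000):
    experience = rng.gauss(0, 1)
    uses_cpp = rng.random() < 1 / (1 + math.exp(-experience))
    language = "C++" if uses_cpp else "Python"
    performance = 2.0 * experience + (0.1 if uses_cpp else 0.0) + rng.gauss(0, 1)
    population.append((experience, language, performance))

# Purely associational contrast: confounded by experience, so it looks much larger than 0.1.
naive = (mean(p for _, l, p in population if l == "C++")
         - mean(p for _, l, p in population if l == "Python"))

# Crude backdoor adjustment: compare languages only within strata of similar experience.
strata = {}
for experience, language, performance in population:
    strata.setdefault((round(experience), language), []).append(performance)
adjusted = mean(mean(strata[(b, "C++")]) - mean(strata[(b, "Python")])
                for b in range(-2, 3)
                if (b, "C++") in strata and (b, "Python") in strata)

print(f"naive difference:    {naive:.2f}")     # inflated by the confounder
print(f"adjusted difference: {adjusted:.2f}")  # much closer to the true 0.1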

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • Also available as an arXiv pre-print.
@Article{FTF-TOSEM23-causality-CodeJam,
  author =  {Carlo A. Furia and Richard Torkar and Robert Feldt},
  title =   {Towards Causal Analysis of Empirical Software Engineering Data: The Impact of Programming Languages on Coding Competitions},
  journal = {ACM Transactions on Software Engineering and Methodology},
  volume =  {33},
  number =  {1},
  pages =   {13:1--35},
  month =   {November},
  year =    {2023},
  note = {Accepted in July 2023.}
}

Mohammad Rezaalipour and Carlo A. Furia.
An Annotation-based Approach for Finding Bugs in Neural Network Programs.
Journal of Systems and Software, 201:111669.
Elsevier, July 2023.

Abstract
As neural networks are increasingly included as core components of safety-critical systems, developing effective testing techniques specialized for them becomes crucial. The bulk of the research has focused on testing neural-network models; but these models are defined by writing programs, and there is growing evidence that these neural-network programs often have bugs too.
This paper presents aNNoTest: an approach to generating test inputs for neural-network programs. A fundamental challenge is that the dynamically-typed languages (e.g., Python) commonly used to program neural networks cannot express detailed constraints about valid function inputs (e.g., matrices with certain dimensions). Without knowing these constraints, automated test-case generation is prone to producing invalid inputs, which trigger spurious failures and are useless for identifying real bugs. To address this problem, we introduce a simple annotation language tailored for concisely expressing valid function inputs in neural-network programs. aNNoTest takes as input an annotated program, and uses property-based testing to generate random inputs that satisfy the validity constraints. In the paper, we also outline guidelines that simplify writing aNNoTest annotations.
We evaluated aNNoTest on 19 neural-network programs from Islam et al's survey, which we manually annotated following our guidelines — producing 6 annotations per tested function on average. aNNoTest automatically generated test inputs that revealed 94 bugs, including 63 bugs that the survey reported for these projects. These results suggest that aNNoTest can be a valuable approach to finding widespread bugs in real-world neural-network programs.
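aNNoTest has its own annotation language and generator; purely as an illustration of the general idea, the following sketch encodes a hypothetical input constraint for a toy layer as an ordinary property-based test with the hypothesis library, so that only valid batches are generated:

from hypothesis import given, strategies as st

INPUT_SIZE = 4  # hypothetical constraint: each sample must have exactly 4 features

def dense_forward(batch, weights):
    """Toy function under test: a dense layer computing one weighted sum per sample."""
    return [sum(w * x for w, x in zip(weights, sample)) for sample in batch]

# Generator respecting the constraint: non-empty batches of fixed-length float vectors.
valid_batches = st.lists(
    st.lists(st.floats(-1e3, 1e3), min_size=INPUT_SIZE, max_size=INPUT_SIZE),
    min_size=1, max_size=8)

@given(batch=valid_batches)
def test_dense_forward_shape(batch):
    weights = [0.5] * INPUT_SIZE
    assert len(dense_forward(batch, weights)) == len(batch)

test_dense_forward_shape()  # runs many generated, constraint-respecting inputs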

@Article{RF-JSS23-aNNoTest,
  author = {Mohammad Rezaalipour and Carlo A. Furia},
  title = {An Annotation-based Approach for Finding Bugs in Neural Network Programs},
  journal = {Journal of Systems and Software},
  year = {2023},
  volume = {201},
  pages = {111669},
  month = {July},
  publisher = {Elsevier},
  note = {Online since March 2023.}
}

Marco Paganoni and Carlo A. Furia.
Verifying Functional Correctness Properties At the Level of Java Bytecode.
In Proceedings of the 25th International Symposium on Formal Methods (FM).
Lecture Notes in Computer Science, 14000:343–363, Springer, March 2023.

Abstract
The breakneck evolution of modern programming languages aggravates the development of deductive verification tools, which struggle to timely and fully support all new language features. To address this challenge, we present ByteBack: a verification technique that works on Java bytecode. Compared to high-level languages, intermediate representations such as bytecode offer a much more limited and stable set of features; hence, they may help decouple the verification process from changes in the source-level language.
ByteBack offers a library to specify functional correctness properties at the level of the source code, so that the bytecode is only used as an intermediate representation that the end user does not need to work with. Then, ByteBack reconstructs some of the information about types and expressions that is erased during compilation into bytecode but is necessary to correctly perform verification. Our experiments with an implementation of ByteBack demonstrate that it can successfully verify bytecode compiled from different versions of Java, including several modern language features that even state-of-the-art Java verifiers (such as KeY and OpenJML) do not directly support — thus showing how ByteBack's approach can help verification technology keep up with language evolution.

  • The paper in PDF format (updated and improved text) and its DOI.
  • The tool ByteBack described in the paper is available in the replication package.
@InProceedings{PF-FM23-ByteBack,
  author = {Marco Paganoni and Carlo A. Furia},
  title = {Verifying Functional Correctness Properties At the Level of {J}ava Bytecode},
  booktitle = {Proceedings of the 25th International Symposium on Formal Methods (FM)},
  year = {2023},
  editor = {Marsha Chechik and Joost-Pieter Katoen and Martin Leucker},
  series = {Lecture Notes in Computer Science},
  pages = {343--363},
  volume = {14000},
  month = {March},
  publisher = {Springer},
  acceptancerate = {29\%}
}

Bhargav Nagaraja Bhatt and Carlo A. Furia.
Automated Repair of Resource Leaks in Android Applications.
Journal of Systems and Software, 192:111417.
Elsevier, October 2022.

Abstract
Resource leaks — a program does not release resources it previously acquired — are a common kind of bug in Android applications. Even with the help of existing techniques to automatically detect leaks, writing a leak-free program remains tricky. One of the reasons is Android's event-driven programming model, which complicates the understanding of an application's overall control flow.
In this paper, we present PlumbDroid: a technique to automatically detect and fix resource leaks in Android applications. PlumbDroid builds a succinct abstraction of an app's control flow, and uses it to find execution traces that may leak a resource. The information built during detection also enables automatically building a fix — consisting of release operations performed at appropriate locations — that removes the leak and does not otherwise affect the application's usage of the resource.
An empirical evaluation on resource leaks from the DroidLeaks curated collection demonstrates that PlumbDroid's approach is scalable, precise, and produces correct fixes for a variety of resource leak bugs: PlumbDroid automatically found and repaired 50 leaks that affect 9 widely used resources of the Android system, including all those collected by DroidLeaks for those resources; on average, it took just 2 minutes to detect and repair a leak. PlumbDroid also compares favorably to Relda2/RelFix — the only other fully automated approach to repair Android resource leaks — since it can detect leaks with higher precision and producing smaller fixes. These results indicate that PlumbDroid can provide valuable support to enhance the quality of Android applications in practice.
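PlumbDroid works on Android apps with a far more precise abstraction; the following toy Python sketch only conveys the detection idea, enumerating the paths of a small (made-up) control-flow graph and flagging those that acquire a resource without releasing it:

# Made-up control-flow abstraction of an app's event handlers: node -> successors.
CFG = {
    "onStart": ["acquireCamera"],
    "acquireCamera": ["takePicture", "onError"],
    "takePicture": ["releaseCamera"],
    "onError": ["onStop"],          # the error path skips the release
    "releaseCamera": ["onStop"],
    "onStop": [],
}

def leaky_paths(cfg, entry="onStart", acquire="acquireCamera", release="releaseCamera"):
    """Enumerate acyclic paths from the entry that acquire without releasing."""
    stack = [[entry]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if not cfg[node] and acquire in path and release not in path:
            yield path
        for successor in cfg[node]:
            stack.append(path + [successor])

for path in leaky_paths(CFG):
    print("possible leak; a fix would release the resource before", path[-1],
          "on path", " -> ".join(path))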

@Article{BF-JSS22-PlumbDroid,
  author = {Bhargav Nagaraja Bhatt and Carlo A. Furia},
  title = {Automated Repair of Resource Leaks in {A}ndroid Applications},
  journal = {Journal of Systems and Software},
  year = {2022},
  volume = {192},
  pages = {111417},
  month = {October},
  publisher = {Elsevier},
  note = {Online since June 2022.}
}

Diego Marcilio and Carlo A. Furia.
What Is Thrown? Lightweight Precise Automatic Extraction of Exception Preconditions in Java Methods.
In Proceedings of the 38th IEEE International Conference on Software Maintenance and Evolution (ICSME).
pp. 340–351, IEEE Computer Society, October 2022.

Abstract
The condition under which a method throws an exception — its exception precondition — is a crucial element of the method's documentation that clients should know in order to use it properly. Unfortunately, exceptional behavior is often poorly documented, and sensitive to changes in a project's implementation details that can be onerous to keep synchronized with the documentation.
We present Wit, an automated technique that extracts the exception preconditions of Java methods. Wit uses static analysis to analyze the paths in a method's implementation that lead to throwing an exception. Wit's analysis is precise, in that it only reports exception preconditions that are correct and correspond to feasible exceptional behavior. It is also lightweight: it only needs the source code of the class (or classes) to be analyzed — without building or running the whole project. To this end, its design uses heuristics that give up some completeness (Wit cannot infer all exception preconditions) in exchange for precision and ease of applicability.
We ran Wit on 46 Java projects, where it discovered 11875 exception preconditions in 10234 methods, taking just 1 second per method on average. A manual analysis of a significant sample of these exception preconditions confirmed that Wit is 100% precise, and demonstrated that it can accurately and automatically document the exceptional behavior of Java methods.

  • The paper (updated, improved, and extended text) and its DOI.
  • This paper won the distinguished paper award at ICSME 2022.
@InProceedings{MF-ICSME22-Wit,
  author = {Diego Marcilio and Carlo A. Furia},
  title = {What Is Thrown? {L}ightweight Precise 
           Automatic Extraction of Exception Preconditions in {J}ava Methods},
  booktitle = {Proceedings of the 38th IEEE International Conference 
               on Software Maintenance and Evolution (ICSME)},
  pages = {340--351},
  year = {2022},
  month = {October},
  publisher = {IEEE Computer Society},
  acceptancerate = {23\%},
  note = {Distinguished paper award}
}

Liushan Chen, Yu Pei, Tian Zhang, Minxue Pan, Qixin Wang, and Carlo A. Furia.
Program Repair with Repeated Learning.
IEEE Transactions on Software Engineering, 49(2):831–848.
IEEE Computer Society, February 2023.

Abstract
A key challenge in generate-and-validate automated program repair is directing the search for fixes so that it can efficiently find those that are more likely to be correct. To this end, several techniques use machine learning to capture the features of programmer-written fixes. In existing approaches, fitting the model typically takes place before fix generation and is independent of it: the fix generation process uses the learned model as one of its inputs. However, the intermediate outcomes of an ongoing fix generation process often provide valuable information about which candidate fixes were "better"; this information could profitably be used to retrain the model, so that each new iteration of the fixing process would also learn from the outcome of previous ones.
In this paper, we propose the Liana technique for automated program repair, which is based on this idea of repeatedly learning the features of generated fixes. To this end, Liana uses a fine-grained model that combines information about fix characteristics, their relations to the fixing context, and the results of test execution. The model is initially trained offline, and then repeatedly updated online as the fix generation process unfolds; at any step, the most up-to-date model is used to guide the search for fixes — prioritizing those that are more likely to include the right ingredients. In an experimental evaluation on 732 real-world Java bugs from 3 popular benchmarks, Liana built correct fixes for 134 faults (83 ranked as first in its output) — improving over several other generate-and-validate program repair tools according to various measures.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
@Article{CPPZWF-TSE22-Liana,
  author =  {Liushan Chen and Yu Pei and Minxue Pan and Tian Zhang and Qixin Wang and Carlo A. Furia},
  title =   {Program Repair with Repeated Learning},
  journal = {IEEE Transactions on Software Engineering},
  volume =  {49},
  number =  {2},
  pages =   {831--848},
  month =   {February},
  year =    {2023},
  note = {Online since April 2022.}
}

Martin Odermatt, Diego Marcilio, and Carlo A. Furia.
Static Analysis Warnings and Automatic Fixing: A Replication for C# Projects.
In Proceedings of the 29th International Conference on Software Analysis, Evolution and Reengineering (SANER) — Reproducibility Studies and Negative Results track (RENE).
pp. 805–816, IEEE Computer Society, March 2022.

Abstract
Static analyzers have become increasingly popular both as developer tools and as subjects of empirical studies. Whereas static analysis tools exist for disparate programming languages, the bulk of the empirical research has focused on the popular Java programming language.
In this paper, we investigate to what extent some known results about using static analyzers for Java change when considering C# — another popular object-oriented language. To this end, we combine two replications of previous Java studies. First, we study which static analysis tools are most widely used among C# developers, and which warnings are more commonly reported by these tools on open-source C# projects. Second, we develop and empirically evaluate EagleRepair: a technique to automatically fix code in response to static analysis warnings; this is a replication of our previous work for Java.
Our replication indicates, among other things, that static code analysis is fairly popular among C# developers too; ReSharper is the most widely used static analyzer for C#; several static analysis rules are commonly violated in both Java and C# projects; automatically generating fixes to static code analysis warnings with good precision is feasible in C#. The EagleRepair tool developed for this research is available as open source.

  • The paper (updated, improved, and extended text) and its DOI.
  • This work extends Martin Odermatt's master thesis.
  • The tool EagleRepair described in the paper and the data of the experimental evaluation are available.
@InProceedings{OMF-SANER22,
  author = {Martin Odermatt and Diego Marcilio and Carlo A. Furia},
  title = {Static Analysis Warnings and Automatic Fixing: {A} Replication for {C\#} Projects},
  booktitle = {Proceedings of the 29th International Conference
               on Software Analysis, Evolution and Reengineering (SANER)},
  pages = {805--816},
  year = {2022},
  month = {March},
  publisher = {IEEE},
  note = {RENE (Reproducibility Studies and Negative Results) track},
  acceptancerate = {43\%}
}

Carlo A. Furia, Richard Torkar, and Robert Feldt.
Applying Bayesian Analysis Guidelines to Empirical Software Engineering Data:
The Case of Programming Languages and Code Quality.
ACM Transactions on Software Engineering and Methodology, 31(3):40:1–40:38.
ACM, July 2022.

Abstract
Statistical analysis is the tool of choice to turn data into information, and then information into empirical knowledge. The process that goes from data to knowledge is, however, long, uncertain, and riddled with pitfalls. To be valid, it should be supported by detailed, rigorous guidelines, which help ferret out issues with the data or model, and lead to qualified results that strike a reasonable balance between generality and practical relevance. Such guidelines are being developed by statisticians to support the latest techniques for Bayesian data analysis. In this article, we frame these guidelines in a way that is apt to empirical research in software engineering.
To demonstrate the guidelines in practice, we apply them to reanalyze a GitHub dataset about code quality in different programming languages. The dataset's original analysis (Ray et al., 2014) and a critical reanalysis (Berger et al., 2019) have attracted considerable attention — in no small part because they target a topic (the impact of different programming languages) on which strong opinions abound. The goals of our reanalysis are largely orthogonal to this previous work, as we are concerned with demonstrating, on data in an interesting domain, how to build a principled Bayesian data analysis and to showcase its benefits. In the process, we will also shed light on some critical aspects of the analyzed data and of the relationship between programming languages and code quality — such as the impact of project-specific characteristics other than the used programming language.
The high-level conclusions of our exercise will be that Bayesian statistical techniques can be applied to analyze software engineering data in a way that is principled, flexible, and leads to convincing results that inform the state of the art while highlighting the boundaries of its validity. The guidelines can support building solid statistical analyses and connecting their results, and hence help buttress continued progress in empirical software engineering research.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • Also available as an arXiv pre-print.
@Article{FTF-TOSEM21-Bayes-guidelines,
  author =  {Carlo A. Furia and Richard Torkar and Robert Feldt},
  title =   {Applying {B}ayesian Analysis Guidelines to Empirical Software Engineering Data: The Case of Programming Languages and Code Quality},
  journal = {ACM Transactions on Software Engineering and Methodology},
  volume =  {31},
  number =  {3},
  pages =   {40:1--40:38},
  month =   {July},
  year =    {2022},
  note = {Accepted in October 2021.}
}

Claire Dross, Carlo A. Furia, Marieke Huisman, Rosemary Monahan, and Peter Müller.
VerifyThis 2019: A Program Verification Competition.
International Journal on Software Tools for Technology Transfer, 23:883–893.
Springer, May 2021.

Abstract
VerifyThis is a series of program verification competitions that emphasize the human aspect: participants tackle the verification of detailed behavioral properties — something that lies beyond the capabilities of fully automatic verification, and requires instead human expertise to suitably encode programs, specifications, and invariants. This paper describes the 8th edition of VerifyThis, which took place at ETAPS 2019 in Prague. Thirteen teams entered the competition, which consisted of three verification challenges and spanned two days of work. This report analyzes how the participating teams fared on these challenges, reflects on what makes a verification challenge more or less suitable for the typical VerifyThis participants, and outlines the difficulties of comparing the work of teams using wildly different verification approaches in a competition focused on the human aspect.

  • The paper in PDF format is available at this DOI.
  • A longer version of the paper is available as preprint arXiv:2008.13610.
@Article{DFHMM-STTT21-VerifyThis2019,
  author = {Claire Dross and Carlo A. Furia and Marieke Huisman and Rosemary Monahan and Peter M{\"u}ller},
  title = {{VerifyThis} 2019: A Program Verification Competition},
  journal = {International Journal on Software Tools for Technology Transfer},
  year = {2021},
  volume = {23},
  pages = {883--893},
  month = {May},
  publisher = {Springer},
  note = {Special issue with invited papers from ETAPS 2019's TOOLympics.}
}

Diego Marcilio and Carlo A. Furia.
How Java Programmers Test Exceptional Behavior.
In Proceedings of the 18th Mining Software Repositories Conference (MSR).
pp. 207–218, IEEE, May 2021.

Abstract
Exceptions often signal faulty or undesired behavior; hence, high-quality test suites should also target exceptional behavior. This paper is a large-scale study of exceptional tests — which exercise exceptional behavior — in 1157 open-source Java projects hosted on GitHub. We analyzed JUnit exceptional tests to understand what kinds of exceptions are more frequently tested, what coding patterns are used, and how features of a project, such as its size and number of contributors, correlate to the characteristics of its exceptional tests. We found that exceptional tests are only 13% of all tests, but tend to be larger than other tests on average; unchecked exceptions are tested twice as frequently as checked ones; 42% of all exceptional tests use try/catch blocks and usually are larger than those using other idioms; and bigger projects with more contributors tend to have more exceptional tests written using different styles. The paper also zeroes in on several detailed examples involving some of the largest analyzed projects, which refine the empirical results with qualitative evidence. The study's findings, and the capabilities of the tool we developed to analyze exceptional tests, suggest several implications for the practice of software development and for follow-up empirical studies.
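The study concerns JUnit, but the stylistic choice it measures (try/catch blocks versus dedicated idioms) has a direct analogue in Python test suites, sketched below with a made-up function under test:

import pytest

def parse_age(text):
    value = int(text)               # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

def test_negative_age_try_except():     # analogue of the try/catch-with-fail idiom
    try:
        parse_age("-3")
        assert False, "expected ValueError"
    except ValueError as exc:
        assert "negative" in str(exc)

def test_negative_age_raises():         # analogue of assertThrows / @Test(expected=...)
    with pytest.raises(ValueError, match="negative"):
        parse_age("-3")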

  • The paper (updated, improved, and extended text) and its DOI.
  • A replication package — including the tool JUnitScrambler described in the paper, the analyzed dataset, and all scripts — is available.
@InProceedings{MF-MSR21,
  author = {Diego Marcilio and Carlo A. Furia},
  title = {How {J}ava Programmers Test Exceptional Behavior},
  booktitle = {Proceedings of the 18th Mining Software Repositories Conference (MSR)},
  pages = {207--218},
  year = {2021},
  month = {May},
  publisher = {IEEE},
  acceptancerate = {34\%}
}

Richard Torkar, Carlo A. Furia, Robert Feldt, Francisco Gomes de Oliveira Neto, Lucas Gren, Per Lenberg, and Neil A. Ernst.
A Method to Assess and Argue for Practical Significance in Software Engineering.
IEEE Transactions on Software Engineering, 48(6):2053–2065.
IEEE Computer Society, June 2022.

Abstract
A key goal of empirical research in software engineering is to assess practical significance, which answers the question of whether the observed effects of some compared treatments show a relevant difference in practice in realistic scenarios. Even though plenty of standard techniques exist to assess statistical significance, connecting it to practical significance is not straightforward or routinely done; indeed, only a few empirical studies in software engineering assess practical significance in a principled and systematic way.
In this paper, we argue that Bayesian data analysis provides suitable tools to assess practical significance rigorously. We demonstrate our claims in a case study comparing different test techniques. The case study's data was previously analyzed (Afzal et al., 2015) using standard techniques focusing on statistical significance. Here, we build a multilevel model of the same data, which we fit and validate using Bayesian techniques. Our method is to apply cumulative prospect theory on top of the statistical model to quantitatively connect our statistical analysis output to a practically meaningful context. This is then the basis both for assessing and arguing for practical significance.
Our study demonstrates that Bayesian analysis provides a technically rigorous yet practical framework for empirical software engineering. A substantial side effect is that any uncertainty in the underlying data will be propagated through the statistical model, and its effects on practical significance are made clear. Thus, in combination with cumulative prospect theory, Bayesian analysis supports seamlessly assessing practical significance in an empirical software engineering context, thus potentially clarifying and extending the relevance of research for practitioners.

@Article{TFFGGLE-TSE20-Bayes-practical,
  author =  {Richard Torkar and Carlo A. Furia and Robert Feldt and
             Francisco {Gomes de Oliveira Neto} and Lucas Gren and Per Lenberg and
             Neil A. Ernst},
  title =   {A Method to Assess and Argue for Practical Significance
             in Software Engineering},
  journal = {IEEE Transactions on Software Engineering},
  volume =  {48},
  number =  {6},
  pages =   {2053--2065},
  month =   {June},
  year =    {2022},
  note = {Online since January 2021.}
}

Richard Torkar, Robert Feldt, and Carlo A. Furia.
Bayesian Data Analysis in Empirical Software Engineering — The Case of Missing Data.
In Contemporary Empirical Methods in Software Engineering, Chapter 11, pp. 289–324.
Springer, August 2020.

Abstract
Bayesian data analysis (BDA) is today used by a multitude of research disciplines. These disciplines use BDA as a way to embrace uncertainty by using multilevel models and making use of all available information at hand. In this chapter, we first introduce the reader to BDA and then provide an example from empirical software engineering, where we also deal with a common issue in our field, i.e., missing data. The example we make use of presents the steps done when conducting state of the art statistical analysis. First, we need to understand the problem we want to solve. Second, we conduct causal analysis. Third, we analyze non-identifiability. Fourth, we conduct missing data analysis. Finally, we do a sensitivity analysis of priors. All this before we design our statistical model. Once we have a model, we present several diagnostics one can use to conduct sanity checks. We hope that through these examples, the reader will see the advantages of using BDA. This way, we hope Bayesian statistics will become more prevalent in our field, thus partly avoiding the reproducibility crisis we have seen in other disciplines.

  • The paper in PDF format (as arXiv pre-print) and its DOI.
@InCollection{TFF-MissingData20,
  author = {Richard Torkar and Robert Feldt and Carlo A. Furia},
  title = {{B}ayesian data analysis in empirical software engineering---{T}he case of missing data},
  booktitle = {Contemporary Empirical Methods in Software Engineering},
  publisher = {Springer},
  year = {2020},
  editor = {Michael Felderer and Guilherme Horta Travassos},
  chapter = {11},
  pages = {289--324},
  month = {August}
}

Diego Marcilio, Carlo A. Furia, Rodrigo Bonifácio, and Gustavo Pinto.
SpongeBugs: Automatically Generating Fix Suggestions in Response to Static Code Analysis Warnings.
Journal of Systems and Software, 168:110671.
Elsevier, October 2020.

Abstract
Static code analysis tools such as FindBugs and SonarQube are widely used on open-source and industrial projects to detect a variety of issues that may negatively affect the quality of software. Despite these tools' popularity and high level of automation, several empirical studies report that developers normally fix only a small fraction (typically, less than 10%) of the reported issues — so-called "warnings". If these analysis tools could also automatically provide suggestions on how to fix the issues that trigger some of the warnings, their feedback would become more actionable and more directly useful to developers.
In this work, we investigate whether it is feasible to automatically generate fix suggestions for common warnings issued by static code analysis tools, and to what extent developers are willing to accept such suggestions into the codebases they are maintaining. To this end, we implemented SpongeBugs, a Java program transformation technique that fixes 11 distinct rules checked by two well-known static code analysis tools (SonarQube and SpotBugs). Fix suggestions are generated automatically based on templates, which are instantiated in a way that removes the source of the warnings; templates for some rules are even capable of producing multi-line patches. Based on the suggestions provided by SpongeBugs, we submitted 38 pull requests, including 946 fixes generated automatically by our technique for various open-source Java projects, including Eclipse UI — a core component of the Eclipse IDE — and both SonarQube and SpotBugs. Project maintainers accepted 87% of our fix suggestions (97% of them without any modifications). We further evaluated the applicability of our technique on software written by students and on a curated collection of bugs. All results indicate that our approach to generating fix suggestions is feasible, flexible, and can help increase the applicability of static code analysis tools.
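SpongeBugs rewrites Java ASTs according to SonarQube and SpotBugs rules; the following deliberately simple Python sketch (with made-up rules and regex-based matching instead of AST rewriting) only illustrates what template-instantiated fix suggestions look like:

import re

# (rule id, pattern that triggers the warning, replacement template) -- illustrative only.
RULES = [
    ("comparison-with-none", re.compile(r"(\w+)\s*==\s*None"), r"\1 is None"),
    ("type-check-by-name", re.compile(r"type\((\w+)\)\s*==\s*(\w+)"), r"isinstance(\1, \2)"),
]

def suggest_fixes(source):
    for line_number, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern, template in RULES:
            if pattern.search(line):
                yield line_number, rule_id, pattern.sub(template, line)

code = "if type(x) == int and y == None:\n    pass\n"
for line_number, rule_id, fixed_line in suggest_fixes(code):
    print(f"line {line_number} [{rule_id}]: {fixed_line.strip()}")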

  • The paper (updated, improved, and extended text) and its DOI.
  • This paper extends the SCAM paper with a similar title.
@Article{MFBP-JSS20,
  author = {Diego Marcilio and Carlo A. Furia and Rodrigo Bonif{\'a}cio and Gustavo Pinto},
  title = {{S}ponge{B}ugs: Automatically Generating Fix Suggestions in Response to Static Code Analysis Warnings},
  journal = {Journal of Systems and Software},
  year = {2020},
  volume = {168},
  pages = {110671},
  month = {October},
  publisher = {Elsevier},
  note = {Online since June 2020.}
}

Tongtong Xu, Liushan Chen, Yu Pei, Tian Zhang, Minxue Pan, and Carlo A. Furia.
Restore: Retrospective Fault Localization Enhancing Automated Program Repair.
IEEE Transactions on Software Engineering, 48(1):309–326.
IEEE Computer Society, January 2022.

Abstract
Fault localization is a crucial step of automated program repair, because accurately identifying program locations that are most closely implicated with a fault greatly affects the effectiveness of the patching process. An ideal fault localization technique would provide precise information while requiring moderate computational resources — to best support an efficient search for correct fixes. In contrast, most automated program repair tools use standard fault localization techniques — which are not tightly integrated with the overall program repair process, and hence deliver only subpar efficiency.
In this paper, we present retrospective fault localization: a novel fault localization technique geared to the requirements of automated program repair. A key idea of retrospective fault localization is to reuse the outcome of failed patch validation to support mutation-based dynamic analysis — providing accurate fault localization information without incurring onerous computational costs.
We implemented retrospective fault localization in a tool called Restore — based on the Jaid Java program repair system. Experiments involving faults from the Defects4J standard benchmark indicate that retrospective fault localization can boost automated program repair: Restore efficiently explores a large fix space, delivering state-of-the-art effectiveness (41 Defects4J bugs correctly fixed, 8 of which no other automated repair tool for Java can fix) while simultaneously boosting performance (speedup over 3 compared to Jaid). Retrospective fault localization is applicable to any automated program repair techniques that rely on fault localization and dynamic validation of patches.
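As a highly simplified sketch of the retrospective idea (not Restore's actual algorithm), the snippet below treats each failed candidate patch as a mutant and considers a location more suspicious when its patches change the outcome of an originally failing test; all locations and outcomes are made up:

from collections import Counter

ORIGINAL_OUTCOMES = {"test_a": "fail", "test_b": "pass"}

# (patched location, test outcomes observed while validating a failed patch)
FAILED_PATCH_VALIDATIONS = [
    ("Account.java:87", {"test_a": "pass", "test_b": "fail"}),
    ("Account.java:87", {"test_a": "pass", "test_b": "pass"}),
    ("Util.java:12", {"test_a": "fail", "test_b": "pass"}),
]

def retrospective_suspiciousness(validations, original):
    failing_tests = [t for t, outcome in original.items() if outcome == "fail"]
    score = Counter()
    for location, outcomes in validations:
        # A failed patch that flips an originally failing test behaves like a mutant
        # interacting with the fault, so its location becomes more suspicious.
        if any(outcomes[t] != original[t] for t in failing_tests):
            score[location] += 1
    return score.most_common()

print(retrospective_suspiciousness(FAILED_PATCH_VALIDATIONS, ORIGINAL_OUTCOMES))
# [('Account.java:87', 2)]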

@Article{XCPZPF-TSE20-Restore,
  author =  {Tongtong Xu and Liushan Chen and Yu Pei and Tian Zhang and Minxue Pan and Carlo A. Furia},
  title =   {\textsc{Restore}: Retrospective Fault Localization Enhancing Automated Program Repair},
  journal = {IEEE Transactions on Software Engineering},
  volume =  {48},
  number =  {1},
  pages =   {309--326},
  month =   {January},
  year =    {2022},
  note = {Online since April 2020.}
}

Liushan Chen, Yu Pei, and Carlo A. Furia.
Contract-Based Program Repair without The Contracts: An Extended Study.
IEEE Transactions on Software Engineering, 47(12):2841–2857.
IEEE Computer Society, December 2021.

Abstract
Most techniques for automated program repair (APR) use tests to drive the repair process; this makes them prone to generating spurious repairs that overfit the available tests unless additional information about expected program behavior is available. Our previous work on Jaid, an APR technique for Java programs, showed that constructing detailed state abstractions — similar to those employed by techniques for programs with contracts — from plain Java code without any special annotations provides valuable additional information, and hence helps mitigate the overfitting problem.
This paper extends the work on Jaid with a comprehensive experimental evaluation involving 693 bugs in three different benchmark suites. The evaluation shows, among other things, that: 1) Jaid is effective: it produced correct fixes for over 15% of all bugs, with a precision of nearly 60%; 2) Jaid is reasonably efficient: on average, it took less than 30 minutes to output a correct fix; 3) Jaid is competitive with the state of the art, as it fixed more bugs than any other technique, and 11 bugs that no other tool can fix; 4) Jaid is robust: its heuristics are complementary and their effectiveness does not depend on the fine-tuning of parameters. The experimental results also indicate the main trade-offs involved in designing an APR technique based on tests, as well as possible directions for further progress in this line of work.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • This paper extends the ASE paper with a similar title.
@Article{CPF-TSE20-Jaid,
  author =  {Liushan Chen and Yu Pei and Carlo A. Furia},
  title =   {Contract-Based Program Repair without The Contracts: An Extended Study},
  journal = {IEEE Transactions on Software Engineering},
  volume =  {47},
  number =  {12},
  pages =   {2841--2857},
  month =   {December},
  year =    {2021},
  note = {Online since January 2020.}
}

Carlo A. Furia, Robert Feldt, and Richard Torkar.
Bayesian Data Analysis in Empirical Software Engineering Research.
IEEE Transactions on Software Engineering, 47(9):1786–1810.
IEEE Computer Society, September 2021.

Abstract
Statistics comes in two main flavors: frequentist and Bayesian. For historical and technical reasons, frequentist statistics have traditionally dominated empirical data analysis, and certainly remain prevalent in empirical software engineering. This situation is unfortunate because frequentist statistics suffer from a number of shortcomings — such as lack of flexibility and results that are unintuitive and hard to interpret — that curtail their effectiveness when dealing with the heterogeneous data that is increasingly available for empirical analysis of software engineering practice.
In this paper, we pinpoint these shortcomings, and present Bayesian data analysis techniques that provide tangible benefits — as they can provide clearer results that are simultaneously robust and nuanced. After a short, high-level introduction to the basic tools of Bayesian statistics, we present the reanalysis of two empirical studies on the effectiveness of automatically generated tests and the performance of programming languages. By contrasting the original frequentist analyses with our new Bayesian analyses, we demonstrate the concrete advantages of the latter. To conclude we advocate a more prominent role for Bayesian statistical techniques in empirical software engineering research and practice.
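The paper's reanalyses use much richer multilevel models; as a deliberately tiny illustration of the Bayesian style it advocates, the snippet below computes, from made-up evaluation counts and a uniform Beta(1, 1) prior, the posterior probability that one bug-finding tool detects more bugs than another:

import random

rng = random.Random(0)

def posterior_samples(detected, trials, n=20_000):
    # Beta-Binomial conjugacy: the posterior detection rate is Beta(1 + detected, 1 + trials - detected).
    return [rng.betavariate(1 + detected, 1 + trials - detected) for _ in range(n)]

tool_a = posterior_samples(detected=34, trials=60)   # hypothetical evaluation results
tool_b = posterior_samples(detected=25, trials=60)

probability_a_better = sum(a > b for a, b in zip(tool_a, tool_b)) / len(tool_a)
print(f"P(tool A's detection rate exceeds tool B's) ~= {probability_a_better:.2f}")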

@Article{FFT-TSE19-Bayes2,
  author =  {Carlo A. Furia and Robert Feldt and Richard Torkar},
  title =   {{B}ayesian Data Analysis in Empirical Software Engineering Research},
  journal = {IEEE Transactions on Software Engineering},
  volume =  {47},
  number =  {9},
  pages =   {1786--1810},
  month =   {September},
  year =    {2021},
  note = {Publication date: August 2019.}
}

Diego Marcilio, Carlo A. Furia, Rodrigo Bonifácio, and Gustavo Pinto.
Automatically Generating Fix Suggestions in Response to Static Code Analysis Warnings.
In Proceedings of the 19th International Working Conference on Source Code Analysis and Manipulation (SCAM).
pp. 34–44, IEEE Computer Society, September 2019.

Abstract
Static code analysis tools such as FindBugs and SonarQube are widely used on open-source and industrial projects to detect a variety of issues that may negatively affect the quality of software. Despite these tools' popularity and high level of automation, several empirical studies report that developers normally fix only a small fraction (typically, less than 10%) of the reported issues — so-called "warnings". If these analysis tools could also automatically provide suggestions on how to fix the issues that trigger some of the warnings, their feedback would become more actionable and more directly useful to developers.
In this work, we investigate whether it is feasible to automatically generate fix suggestions for common warnings issued by static code analysis tools, and to what extent developers are willing to accept such suggestions into the codebases they're maintaining. To this end, we implemented a Java program transformation technique that fixes 11 distinct rules checked by two well-known static code analysis tools (SonarQube and SpotBugs). Fix suggestions are generated automatically based on templates, which are instantiated in a way that removes the source of the warnings; templates for some rules are even capable of producing multi-line patches. We submitted 38 pull requests, including 920 fixes generated automatically by our technique for various open-source Java projects, including the Eclipse IDE and both SonarQube and SpotBugs tools. At the time of writing, project maintainers accepted 84% of our fix suggestions (95% of them without any modifications). These results indicate that our approach to generating fix suggestions is feasible, and can help increase the applicability of static code analysis tools.

  • The paper (updated, improved, and extended text) and its DOI.
  • The tool SpongeBugs described in the paper and the data of the experimental evaluation are available.
@InProceedings{MFBP-SCAM19,
  author = {Diego Marcilio and Carlo A. Furia and Rodrigo Bonif{\'a}cio and Gustavo Pinto},
  title = {Automatically Generating Fix Suggestions in Response to Static Code Analysis Warnings},
  booktitle = {Proceedings of the 19th International Working Conference on Source Code Analysis and Manipulation (SCAM)},
  pages = {34--44},
  year = {2019},
  month = {September},
  publisher = {IEEE Computer Society},
  acceptancerate = {40\%}
}

Francisco Gomes de Oliveira Neto, Richard Torkar, Robert Feldt, Lucas Gren, Carlo A. Furia, and Ziwei Huang.
Evolution of statistical analysis in empirical software engineering research: Current state and steps forward.
Journal of Systems and Software, 156:246–267.
Elsevier, October 2019.

Abstract
Software engineering research is evolving and papers are increasingly based on empirical data from a multitude of sources, using statistical tests to determine if and to what degree empirical evidence supports their hypotheses. To investigate the practices and trends of statistical analysis in empirical software engineering (ESE), this paper presents a review of a large pool of papers from top-ranked software engineering journals. First, we manually reviewed 161 papers; in the second phase of our method, we conducted a more extensive semi-automatic classification of 5,196 papers spanning the years 2001–2015.
Results from both review steps were used to: i) identify and analyse the predominant practices in ESE (e.g., using t-test or ANOVA), as well as relevant trends in usage of specific statistical methods (e.g., nonparametric tests and effect size measures), and ii) develop a conceptual model for a statistical analysis workflow with suggestions on how to apply different statistical methods as well as guidelines to avoid pitfalls.
Lastly, we confirm existing claims that current ESE practices lack a standard to report practical significance of results. We illustrate how practical significance can be discussed in terms of both the statistical analysis and in the practitioner's context.

@Article{GTFGFH-JSS19,
  author = {Francisco {Gomes de Oliveira Neto} and Richard Torkar and Robert Feldt and Lucas Gren and Carlo A. Furia and Ziwei Huang},
  title = {Evolution of statistical analysis in empirical software engineering research: Current state and steps forward},
  journal = {Journal of Systems and Software},
  year = {2019},
  volume = {156},
  pages = {246--267},
  month = {October},
  publisher = {Elsevier},
  note = {Online since July 2019.}
}

Carlo A. Furia and Kirsten Winter, editors.
Integrated Formal Methods – 14th International Conference, IFM 2018. Maynooth, Ireland, September 5–7, 2018. Proceedings.
Lecture Notes in Computer Science 11023, Springer, 2018.

@proceedings{FW-iFM18,
  editor    = {Carlo A. Furia and Kirsten Winter},
  title     = {Integrated Formal Methods --
               14th International Conference, IFM 2018.
               Maynooth, Ireland, September 5--7, 2018.},
  booktitle = {IFM 2018},
  publisher = {Springer},
  series    = {Lecture Notes in Computer Science},
  volume    = {11023},
  year      = {2018}
}

YuTing Chen and Carlo A. Furia.
Robustness Testing of Intermediate Verifiers.
In Proceedings of the 16th International Symposium on Automated Technology for Verification and Analysis (ATVA).
Lecture Notes in Computer Science, 11138:91–108, Springer, September 2018.

Abstract
Program verifiers are not exempt from the bugs that affect nearly every piece of software. In addition, they often exhibit brittle behavior: their performance changes considerably with details of how the input program is expressed — details that should be irrelevant, such as the order of independent declarations. Such a lack of robustness frustrates users who have to spend considerable time figuring out a tool's idiosyncrasies before they can use it effectively.
This paper introduces a technique to detect lack of robustness of program verifiers; the technique is lightweight and fully automated, as it is based on testing methods (such as mutation testing and metamorphic testing). The key idea is to generate many simple variants of a program that initially passes verification. All variants are, by construction, equivalent to the original program; thus, any variant that fails verification indicates lack of robustness in the verifier.
We implemented our technique in a tool called μgie, which operates on programs written in the popular Boogie language for verification — used as intermediate representation in numerous program verifiers. Experiments targeting 135 Boogie programs indicate that brittle behavior occurs fairly frequently (16 programs) and is not hard to trigger. Based on these results, the paper discusses the main sources of brittle behavior and suggests means of improving robustness.
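The essence of the metamorphic/mutation idea is easy to sketch: generate variants that are equivalent by construction (below, just permutations of independent top-level declarations) and report any variant on which the verifier's verdict changes. In this illustrative Python sketch, `verifies` stands in for running the actual verifier on a program text:

import itertools

def robustness_check(declarations, verifies):
    """Return the baseline verdict and the equivalent variants whose verdict differs."""
    baseline = verifies("\n".join(declarations))
    brittle = []
    for variant in itertools.permutations(declarations):
        program = "\n".join(variant)
        if verifies(program) != baseline:
            brittle.append(program)
    return baseline, brittle

# Hypothetical usage with a fake verifier that is (wrongly) sensitive to declaration order.
declarations = ["axiom A;", "function f(x: int): int;", "procedure P() { assert true; }"]
fake_verifies = lambda program: program.startswith("axiom")
baseline, brittle = robustness_check(declarations, fake_verifies)
print(f"baseline verdict: {baseline}; brittle variants found: {len(brittle)}")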

  • The paper in PDF format (updated and improved text) and its DOI.
  • The tool μgie described in the paper.
  • An extended version with a few more details is available as technical report.
@InProceedings{CF-ATVA18,
  author = {YuTing Chen and Carlo A. Furia},
  title = {Robustness Testing of Intermediate Verifiers},
  booktitle = {Proceedings of the 16th International Symposium on Automated Technology for Verification and Analysis (ATVA)},
  year = {2018},
  editor = {Shuvendu Lahiri and Chao Wang},
  series = {Lecture Notes in Computer Science},
  pages = {91--108},
  volume = {11138},
  month = {September},
  publisher = {Springer},
  acceptancerate = {32\%}
}

Liushan Chen, Yu Pei, and Carlo A. Furia.
Contract-Based Program Repair Without the Contracts.
In Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE).
pp. 637–647, ACM, October 2017.

Abstract
Automated program repair (APR) is a promising approach to automatically fixing software bugs. Most APR techniques use tests to drive the repair process; this makes them readily applicable to realistic code bases, but also brings the risk of generating spurious repairs that overfit the available tests. Some techniques addressed the overfitting problem by targeting code using contracts (such as pre- and postconditions), which provide additional information helpful to characterize the states of correct and faulty computations; unfortunately, mainstream programming languages do not normally include contract annotations, which severely limits the applicability of such contract-based techniques.
This paper presents Jaid, a novel APR technique for Java programs, which is capable of constructing detailed state abstractions — similar to those employed by contract-based techniques — that are derived from regular Java code without any special annotations. Grounding the repair generation and validation processes on rich state abstractions mitigates the overfitting problem, and helps extend APR's applicability: in experiments with the Defects4J benchmark, a prototype implementation of Jaid produced genuinely correct repairs, equivalent to those written by programmers, for 25 bugs — improving over the state of the art of comparable Java APR techniques in the number and kinds of correct fixes.

  • The paper in PDF format (updated and improved text) and its DOI.
  • Jaid is available for download.
@InProceedings{CPF-ASE17,
  author = {Liushan Chen and Yu Pei and Carlo A. Furia},
  title = {Contract-Based Program Repair without the Contracts},
  booktitle = {Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE)},
  year = {2017},
  editor = {Massimiliano {Di Penta} and Tien N. Nguyen},
  month = {October},
  publisher = {ACM},
  pages = {637--647},
  acceptancerate = {21\%}
}

Nadia Polikarpova, Julian Tschannen, and Carlo A. Furia.
A Fully Verified Container Library.
Formal Aspects of Computing, 30(5):495–523.
Springer, September 2018.

Abstract
The comprehensive functionality and nontrivial design of realistic general-purpose container libraries pose challenges to formal verification that go beyond those of individual benchmark problems mainly targeted by the state of the art. We present our experience verifying the full functional correctness of EiffelBase2: a container library offering all the features customary in modern language frameworks, such as external iterators, and hash tables with generic mutable keys and load balancing. Verification uses the automated deductive verifier AutoProof, which we extended as part of the present work. Our results indicate that verification of a realistic container library (135 public methods, 8,400 LOC) is possible with moderate annotation overhead (1.4 lines of specification per LOC) and good performance (0.2 seconds per method on average).

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • This paper extends the FM paper with the same title; it was an invited submission to a special issue of FAOC with the best papers of FM 2015.
  • The source code of the verified library EiffelBase2 is available in this repository, where you can also run verification of selected classes using AutoProof.
@Article{PTF-FAOC17,
  author = {Nadia Polikarpova and Julian Tschannen and Carlo A. Furia},
  title = {A Fully Verified Container Library},
  journal = {Formal Aspects of Computing},
  year = {2018},
  volume = {30},
  number = {5}, 
  pages = {495--523},
  month = {September},
  publisher = {Springer},
  note = {Online since September 2017. Special issue with invited papers from FM 2015.}
}

YuTing Chen and Carlo A. Furia.
Triggerless Happy — Intermediate Verification with a First-Order Prover.
In Proceedings of the 13th International Conference on integrated Formal Methods (iFM).
Lecture Notes in Computer Science, 10510:295–311, Springer, September 2017.

Abstract
SMT solvers have become de rigueur in deductive verification to automatically prove the validity of verification conditions. While these solvers provide an effective support for theories — such as arithmetic — that feature strongly in program verification, they tend to be more limited in dealing with first-order quantification, for which they have to rely on special annotations — known as triggers — to guide the instantiation of quantifiers. Writing effective triggers is necessary to achieve satisfactory performance with SMT solvers, but remains a tricky endeavor — beyond the purview of non-highly trained experts.
In this paper, we experiment with the idea of using first-order provers instead of SMT solvers to prove the validity of verification conditions. First-order provers offer a native support for unrestricted quantification, but have been traditionally limited in theory reasoning. By leveraging some recent extensions to narrow this gap in the Vampire first-order prover, we describe a first-order encoding of verification conditions of programs written in the Boogie intermediate verification language. Experiments with a prototype implementation on a variety of Boogie programs suggest that first-order provers can help achieve more flexible and robust performance in program verification, while avoiding the pitfalls of having to manually guide instantiations by means of triggers.

  • The paper in PDF format (updated and improved text) and its DOI.
  • The tool BLT described in the paper.
  • This paper won the best paper award at iFM 2017.
@InProceedings{CF-iFM17,
  author = {YuTing Chen and Carlo A. Furia},
  title = {Triggerless Happy -- Intermediate Verification with a First-Order Prover},
  booktitle = {Proceedings of the 13th International Conference on integrated Formal Methods (iFM)},
  year = {2017},
  editor = {Nadia Polikarpova and Steve Schneider},
  series = {Lecture Notes in Computer Science},
  pages = {295--311},
  volume = {10510},
  month = {September},
  publisher = {Springer},
  acceptancerate = {37\%},
  note = {Best paper award}
}

Carlo A. Furia.
What Good is Bayesian Data Analysis for Software Engineering?
In Proceedings of the 39th International Conference on Software Engineering (ICSE) Companion.
Pages 374–376, ACM, May 2017 (Posters Track).

@InProceedings{F-ICSE17-poster,
  author = {Carlo A. Furia},
  title = {What Good is {B}ayesian Data Analysis for Software Engineering?},
  booktitle = {Proceedings of the 39th International Conference on Software Engineering (ICSE) Companion},
  pages = {374--376},
  year = {2017},
  month = {May},
  publisher = {ACM},
  note = {Invited submission to the Posters Track}
}

Carlo A. Furia and Paola Spoletini.
Bounded Variability of Metric Temporal Logic.
Annals of Mathematics and Artificial Intelligence, 80(3):283–316.
Springer, August 2017.

Abstract
Deciding validity of Metric Temporal Logic (MTL) formulas is generally very complex and even undecidable over dense time domains; bounded variability is one of the several restrictions that have been proposed to bring decidability back. A temporal model has bounded variability if no more than v events occur over any time interval of length V, for constant parameters v and V. Previous work has shown that MTL validity over models with bounded variability is less complex — and often decidable — than MTL validity over unconstrained models.
This paper studies the related problem of deciding whether an MTL formula has intrinsic bounded variability, that is whether it is satisfied only by models with bounded variability. The results of the paper are mainly negative: over dense time domains, the problem is mostly undecidable (even if with an undecidability degree that is typically lower than deciding validity); over discrete time domains, it is decidable with the same complexity as deciding validity. As a partial complement to these negative results, the paper also identifies MTL fragments where deciding bounded variability is simpler than validity, which may provide for a reduction in complexity in some practical cases.
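For readers unfamiliar with the notion, the condition can be spelled out as follows (the notation is a sketch and need not match the paper's exact definitions): a timed word with event timestamps t_1, t_2, ... has variability bounded by v over V when

\[
  \forall t \in \mathbb{R}_{\ge 0}:\quad
  \bigl|\{\, i \mid t \le t_i < t + V \,\}\bigr| \;\le\; v
\]

that is, every time window of length V contains at most v events.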

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • This paper extends the TIME paper with the same title; it was an invited submission to a special issue of AMAI with the best papers of TIME 2014.
@Article{FS-AMAI16,
  author = 		  {Carlo A. Furia and Paola Spoletini},
  title = 		  {Bounded Variability of Metric Temporal Logic},
  journal = 	  {Annals of Mathematics and Artificial Intelligence},
  year = 		  {2017},
  volume = {80},
  number = {3}, 
  pages = {283--316},
  month = {August},
  publisher = {Springer},
  note = {Online since December 2016. Special issue with invited papers from TIME 2014.}
}

Bernhard K. Aichernig and Carlo A. Furia, editors.
Tests and Proofs – 10th International Conference, TAP 2016. Held as Part of STAF 2016, Vienna, Austria, July 5–7, 2016. Proceedings.
Lecture Notes in Computer Science 9762, Springer, 2016.

From the Preface
The TAP conference promotes research in verification and formal methods that targets the interplay of proofs and testing: the advancement of techniques of each kind and their combination, with the ultimate goal of improving software and system dependability. This volume contains the proceedings of TAP 2016, which marks a decade of TAP conferences since the first edition in 2007.

@proceedings{AF-TAP16,
  editor    = {Bernhard K. Aichernig and Carlo A. Furia},
  title     = {Tests and Proofs --
               10th International Conference, TAP 2016.
               Held as Part of STAF 2016, Vienna, Austria, July 5--7, 2016.
               Proceedings},
  booktitle = {TAP 2016},
  publisher = {Springer},
  series    = {Lecture Notes in Computer Science},
  volume    = {9762},
  year      = {2016}
}

Carlo A. Furia, Martin Nordio, Nadia Polikarpova, and Julian Tschannen.
AutoProof: Auto-active Functional Verification of Object-oriented Programs.
International Journal on Software Tools for Technology Transfer, 19(6):697–716.
Springer, October 2017.

Abstract
Auto-active verifiers provide a level of automation intermediate between fully automatic and interactive: users supply code with annotations as input while benefiting from a high level of automation in the back-end. This paper presents AutoProof, a state-of-the-art auto-active verifier for object-oriented sequential programs with complex functional specifications. AutoProof fully supports advanced object-oriented features and a powerful methodology for framing and class invariants, which make it applicable in practice to idiomatic object-oriented patterns. The paper focuses on describing AutoProof's interface, design, and implementation features, and demonstrates AutoProof's performance on a rich collection of benchmark problems. The results attest AutoProof's competitiveness among tools in its league on cutting-edge functional verification of object-oriented programs.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • This paper extends the TACAS paper with the same title; it was an invited submission to a special issue of STTT with the best papers of TACAS 2015.
  • AutoProof is available online in ComCom.
@Article{FNPT-STTT16,
  author = {Carlo A. Furia and Martin Nordio and Nadia Polikarpova and Julian Tschannen},
  title = {{A}uto{P}roof: Auto-active Functional Verification of Object-oriented Programs},
  journal = {International Journal on Software Tools for Technology Transfer},
  year = {2016},
  volume = {19},
  number = {6}, 
  pages = {697--716},
  month = {October},
  publisher = {Springer},
  note = {Online since April 2016. Special issue with invited papers from TACAS 2015.}
}

Michael Ameri and Carlo A. Furia.
Why Just Boogie? Translating Between Intermediate Verification Languages.
In Proceedings of the 12th International Conference on integrated Formal Methods (iFM).
Lecture Notes in Computer Science, 9681:1–17, Springer, June 2016.

Abstract
The verification systems Boogie and Why3 use their respective intermediate languages to generate verification conditions from high-level programs. Since the two systems support different back-end provers (such as Z3 and Alt-Ergo) and are used to encode different high-level languages (such as C# and Java), being able to translate between their intermediate languages would provide a way to reuse one system's features to verify programs meant for the other. This paper describes a translation of Boogie into WhyML (Why3's intermediate language) that preserves semantics, verifiability, and program structure to a large degree. We implemented the translation as a tool and applied it to 194 Boogie-verified programs of various sources and sizes; Why3 verified 83% of the translated programs with the same outcome as Boogie. These results indicate that the translation is often effective and practically applicable.

  • The paper in PDF format (updated and improved text) and its DOI.
  • An extended version is available as technical report.
@InProceedings{AF-iFM16,
  author = {Michael Ameri and Carlo A. Furia},
  title = {{Why} Just {Boogie}? {T}ranslating Between Intermediate Verification Languages},
  booktitle = {Proceedings of the 12th International Conference on integrated Formal Methods (iFM)},
  year = {2016},
  editor = {Erika {\'A}brah{\'a}m and Marieke Huisman},
  series = {Lecture Notes in Computer Science},
  pages = {1--17},
  volume = {9681},
  month = {June},
  publisher = {Springer},
  acceptancerate = {30\%},
}

Carlo A. Furia, Christopher M. Poskitt, and Julian Tschannen.
The AutoProof Verifier: Usability by Non-Experts and on Standard Code.
In Proceedings of the 2nd Workshop on Formal Integrated Development Environment (F-IDE).
Electronic Proceedings in Theoretical Computer Science, 187:42–55, June 2015.

Abstract
Formal verification tools are often developed by experts for experts; as a result, their usability by programmers with little formal methods experience may be severely limited. In this paper, we discuss this general phenomenon with reference to AutoProof: a tool that can verify the full functional correctness of object-oriented software. In particular, we present our experiences of using AutoProof in two contrasting contexts representative of non-expert usage. First, we discuss its usability by students in a graduate course on software verification, who were tasked with verifying implementations of various sorting algorithms. Second, we evaluate its usability in verifying code developed for programming assignments of an undergraduate course. The first scenario represents usability by serious non-experts; the second represents usability on "standard code", developed without full functional verification in mind. We report our experiences and lessons learnt, from which we derive some general suggestions for furthering the development of verification tools with respect to improving their usability.

  • The paper in PDF format (updated and improved text) and its DOI.
@InProceedings{FPT-FIDE15,
  author = {Carlo A. Furia and Christopher M. Poskitt and Julian Tschannen},
  title = {The {AutoProof} Verifier: Usability by Non-Experts and on Standard Code},
  booktitle = {Proceedings of the 2nd Workshop on Formal Integrated Development Environment (F-IDE)},
  year = {2015},
  editor = {Catherine Dubois and Paolo Masci and Dominique Mery},
  series = {Electronic Proceedings in Theoretical Computer Science},
  volume = {187},
  pages = {42--55},
  month = {June},
  publisher = {EPTCS},
  note = {Workshop co-located with FM 2015},
}

Juan P. Galeotti, Carlo A. Furia, Eva May, Gordon Fraser, and Andreas Zeller.
Inferring Loop Invariants by Mutation, Dynamic Analysis, and Static Checking.
IEEE Transactions on Software Engineering, 41(10):1019–1037.
IEEE Computer Society, October 2015.

Abstract
Verifiers that can prove programs correct against their full functional specification require, for programs with loops, additional annotations in the form of loop invariants — properties that hold for every iteration of a loop. We show that significant loop invariant candidates can be generated by systematically mutating postconditions; then, dynamic checking (based on automatically generated tests) weeds out invalid candidates, and static checking selects provably valid ones. We present a framework that automatically applies these techniques to support a program prover, paving the way for fully automatic verification without manually written loop invariants: Applied to 28 methods (including 39 different loops) from various java.util classes (occasionally modified to avoid using Java features not fully supported by the static checker), our DynaMate prototype automatically discharged 97% of all proof obligations, resulting in automatic complete correctness proofs of 25 out of the 28 methods — outperforming several state-of-the-art tools for fully automatic verification.
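A minimal sketch of the three-stage pipeline described above, with hypothetical callables mutate (candidate generation from the postcondition), holds_on_tests (dynamic checking on automatically generated tests), and prove (static checking):

def infer_loop_invariants(postcondition, mutate, holds_on_tests, prove):
    candidates = set(mutate(postcondition))                    # 1. mutate the postcondition
    plausible = {c for c in candidates if holds_on_tests(c)}   # 2. weed out invalid candidates dynamically
    return {c for c in plausible if prove(c)}                  # 3. keep the provably valid ones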

@Article{GFMFZ-TSE15-DynaMate,
  author = 		  {Juan Pablo Galeotti and Carlo A. Furia and Eva May and Gordon Fraser and Andreas Zeller},
  title = 		  {Inferring Loop Invariants by Mutation, Dynamic Analysis, and Static Checking},
  journal = 	  {IEEE Transactions on Software Engineering},
  volume = 	  {41},
  number = 	  {10},
  pages = 	  {1019--1037},
  month = 	  {October},
  year = 		  {2015},
}

Nadia Polikarpova, Julian Tschannen, and Carlo A. Furia.
A Fully Verified Container Library.
In Proceedings of the 20th International Symposium on Formal Methods (FM).
Lecture Notes in Computer Science, 9109:414–434, Springer, June 2015.

Abstract
The comprehensive functionality and nontrivial design of realistic general-purpose container libraries pose challenges to formal verification that go beyond those of individual benchmark problems mainly targeted by the state of the art. We present our experience verifying the full functional correctness of EiffelBase2: a container library offering all the features customary in modern language frameworks, such as external iterators, and hash tables with generic mutable keys and load balancing. Verification uses the automated deductive verifier AutoProof, which we extended as part of the present work. Our results indicate that verification of a realistic container library (135 public methods, 8,400 LOC) is possible with moderate annotation overhead (1.4 lines of specification per LOC) and good performance (0.2 seconds per method on average).

  • The paper in PDF format (updated and improved text) and its DOI.
  • The source code of the verified library EiffelBase2 is available in this repository, where you can also run verification of selected classes using AutoProof.
  • This paper won the best paper award at FM 2015.
@InProceedings{PTF-FM15,
  author = {Nadia Polikarpova and Julian Tschannen and Carlo A. Furia},
  title = {A Fully Verified Container Library},
  booktitle = {Proceedings of the 20th International Symposium on Formal Methods (FM)},
  year = {2015},
  editor = {Nikolaj Bj{\o}rner and Frank D. {de Boer}},
  series = {Lecture Notes in Computer Science},
  pages = {414--434},
  volume = {9109},
  month = {June},
  publisher = {Springer},
  acceptancerate = {26\%},
  note = {Best paper award}
}

Yu Pei, Carlo A. Furia, Martin Nordio, and Bertrand Meyer.
Automated Program Repair in an Integrated Development Environment.
In Proceedings of the 37th International Conference on Software Engineering (ICSE).
Pages 681–684, ACM, May 2015 (Demonstrations Track in Volume 2).

Abstract
We present the integration of the AutoFix automated program repair technique into the EiffelStudio Development Environment. AutoFix presents itself as a recommendation system capable of automatically finding bugs and suggesting fixes in the form of source-code patches. Its performance suggests usage scenarios where it runs in the background or during work interruptions, displaying fix suggestions as they become available. This is a contribution towards the vision of semantic Integrated Development Environments, which offer powerful automated functionality within interfaces familiar to developers. A screencast highlighting the main features of AutoFix can be found at: http://youtu.be/Ff2ULiyL-80.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • The AutoFix project page.
  • A five-minute demo screencast of AutoFix integrated in the EiffelStudio IDE.
@InProceedings{PFNM-ICSE15-demos,
  author = {Yu Pei and Carlo A. Furia and Martin Nordio and Bertrand Meyer},
  title = {Automated Program Repair in an Integrated Development Environment},
  booktitle = {Proceedings of the 37th International Conference on Software Engineering (ICSE) -- Volume 2},
  pages = {681--684},
  year = {2015},
  month = {May},
  publisher = {ACM},
  note = {Demonstrations Track},
  acceptancerate = {59\%}
}

Sebastian Nanz and Carlo A. Furia.
A Comparative Study of Programming Languages in Rosetta Code.
In Proceedings of the 37th International Conference on Software Engineering (ICSE).
Pages 778–788, ACM, May 2015.

Abstract
Sometimes debates on programming languages are more religious than scientific. Questions about which language is more succinct or efficient, or makes developers more productive are discussed with fervor, and their answers are too often based on anecdotes and unsubstantiated beliefs. In this study, we use the largely untapped research potential of Rosetta Code, a code repository of solutions to common programming tasks in various languages, to draw a fair and well-founded comparison. Rosetta Code offers a large data set for analysis. Our study is based on 7087 solution programs corresponding to 745 tasks in 8 widely used languages representing the major programming paradigms (procedural: C and Go; object-oriented: C# and Java; functional: F# and Haskell; scripting: Python and Ruby). Our statistical analysis reveals, most notably, that: functional and scripting languages are more concise than procedural and object-oriented languages; C is hard to beat when it comes to raw speed on large inputs, but performance differences over inputs of moderate size are less pronounced and allow even interpreted languages to be competitive; compiled strongly-typed languages, where more defects can be caught at compile time, are less prone to runtime failures than interpreted or weakly-typed languages. We discuss implications of these results for developers, language designers, and educators.
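As an illustration of the kind of per-task paired comparison such a study relies on, here is a hypothetical Python sketch; the data layout and helper names are assumptions for illustration, not the paper's actual analysis scripts.

from statistics import median

def paired_size_ratios(solutions, lang_a, lang_b, tasks):
    # solutions maps (task, language) to a list of line counts of its solutions.
    ratios = []
    for task in tasks:
        a, b = solutions.get((task, lang_a)), solutions.get((task, lang_b))
        if a and b:                              # compare only tasks solved in both languages
            ratios.append(min(a) / min(b))
    return ratios

# A median ratio below 1 would suggest that lang_a solutions tend to be more concise:
# median(paired_size_ratios(solutions, "Haskell", "Java", tasks))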

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • An extended version with plenty of supplementary material is available as technical report.
  • A repository with the snapshot of Rosetta Code analyzed in the paper, together with meta-data and our analysis scripts.
  • A blog post with an informal summary of the paper's main results.
  • Slashdot and Hacker News news items about a preliminary version of the paper, with readers' comments.
@InProceedings{NF-ICSE15,
  author = {Sebastian Nanz and Carlo A. Furia},
  title = {A Comparative Study of Programming Languages in {R}osetta {C}ode},
  booktitle = {Proceedings of the 37th International Conference on Software Engineering (ICSE)},
  editor    = {Antonia Bertolino and Gerardo Canfora and Sebastian Elbaum},
  pages = {778--788},
  year = {2015},
  month = {May},
  publisher = {ACM},
  acceptancerate = {18\%}
}

Julian Tschannen, Carlo A. Furia, Martin Nordio, and Nadia Polikarpova.
AutoProof: Auto-active Functional Verification of Object-oriented Programs.
In Proceedings of the 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS).
Lecture Notes in Computer Science, 9035:566–580, Springer, April 2015.

Abstract
Auto-active verifiers provide a level of automation intermediate between fully automatic and interactive: users supply code with annotations as input while benefiting from a high level of automation in the back-end. This paper presents AutoProof, a state-of-the-art auto-active verifier for object-oriented sequential programs with complex functional specifications. AutoProof fully supports advanced object-oriented features and a powerful methodology for framing and class invariants, which make it applicable in practice to idiomatic object-oriented patterns. The paper focuses on describing AutoProof's interface, design, and implementation features, and demonstrates AutoProof's performance on a rich collection of benchmark problems. The results attest AutoProof's competitiveness among tools in its league on cutting-edge functional verification of object-oriented programs.

@InProceedings{TFNP-TACAS15,
  author = {Julian Tschannen and Carlo A. Furia and Martin Nordio and Nadia Polikarpova},
  title = {{A}uto{P}roof: Auto-active Functional Verification of Object-oriented Programs},
  booktitle = {Proceedings of the 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS)},
  year = {2015},
  editor = {Christel Baier and Cesare Tinelli},
  series = {Lecture Notes in Computer Science},
  month = {April},
  publisher = {Springer},
  pages = {566--580},
  volume = {9035},
  acceptancerate = {23\%}
}

Juan P. Galeotti, Carlo A. Furia, Eva May, Gordon Fraser, and Andreas Zeller.
DynaMate: Dynamically Inferring Loop Invariants for Automatic Full Functional Verification.
In Proceedings of the 10th Haifa Verification Conference (HVC).
Lecture Notes in Computer Science, 8855:48–53, Springer, November 2014 (Tool paper).

Abstract
DynaMate is a tool that automatically infers loop invariants and uses them to prove Java programs correct with respect to a given JML functional specification. DynaMate improves the flexibility of loop invariant inference by integrating static (proving) and dynamic (testing) techniques with the goal of combining their complementary strengths. In an experimental evaluation involving 26 Java methods of java.util annotated with JML pre- and postconditions, it automatically discharged over 97% of all proof obligations, resulting in automatic complete correctness proofs of 23 out of the 26 methods.

  • The paper in PDF format (updated and improved text) and its DOI.
  • DynaMate's techniques and its experimental evaluation are described in a technical report.
  • DynaMate's project page.
@InProceedings{GFMFZ-HVC14-DynaMate,
  author = {Juan P. Galeotti and Carlo A. Furia and Eva May and Gordon Fraser and Andreas Zeller},
  title = {{D}yna{M}ate: Dynamically Inferring Loop Invariants for Automatic Full Functional Verification},
  booktitle = {Proceedings of the 10th Haifa Verification Conference (HVC)},
  year = {2014},
  editor = {Eran Yahav},
  series = {Lecture Notes in Computer Science},
  pages = {48--53},
  volume = {8855},
  month = {November},
  publisher = {Springer},
  note = {Tool paper},
  acceptancerate = {49\%}
}

Carlo A. Furia and Paola Spoletini.
Bounded Variability of Metric Temporal Logic.
In Proceedings of the 21st International Symposium on Temporal Representation and Reasoning (TIME'14).
Pages 155–163, IEEE Computer Society, September 2014.

Abstract
Previous work has shown that reasoning with real-time temporal logics is often simpler when restricted to models with bounded variability—where no more than v events may occur every V time units, for given v, V. When reasoning about formulas with intrinsic bounded variability, one can employ the simpler techniques that rely on bounded variability, without any loss of generality. What is then the complexity of algorithmically deciding which formulas have intrinsic bounded variability?
In this paper, we study the problem with reference to Metric Temporal Logic (MTL). We prove that deciding bounded variability of MTL formulas is undecidable over dense-time models, but with an undecidability degree lower than generic dense-time MTL satisfiability. Over discrete-time models, instead, deciding MTL bounded variability has the same exponential-space complexity as satisfiability. To complement these negative results, we also briefly discuss small fragments of MTL that are more amenable to reasoning about bounded variability.

@InProceedings{FS-TIME14,
  author = {Carlo A. Furia and Paola Spoletini},
  title = {Bounded Variability of Metric Temporal Logic},
  booktitle = {Proceedings of the 21st International Symposium on Temporal Representation and Reasoning (TIME'14)},
  pages = {155--163},
  editor    = {Amedeo Cesta and Carlo Combi and Francois Laroussinie},
  year = {2014},
  month = {September},
  publisher = {IEEE Computer Society},
  acceptancerate = {53\%}
}

Yu Pei, Carlo A. Furia, Martin Nordio, Yi Wei, Bertrand Meyer, and Andreas Zeller.
Automated Fixing of Programs with Contracts.
IEEE Transactions on Software Engineering, 40(5):427–449.
IEEE Computer Society, May 2014.

Abstract
This paper describes AutoFix, an automatic debugging technique that can fix faults in general-purpose software. To provide high-quality fix suggestions and to enable automation of the whole debugging process, AutoFix relies on the presence of simple specification elements in the form of contracts (such as pre- and postconditions). Using contracts enhances the precision of dynamic analysis techniques for fault detection and localization, and for validating fixes. The only required user input to the AutoFix supporting tool is then a faulty program annotated with contracts; the tool produces a collection of validated fixes for the fault ranked according to an estimate of their suitability.
In an extensive experimental evaluation, we applied AutoFix to over 200 faults in four code bases of different maturity and quality (of implementation and of contracts). AutoFix successfully fixed 42% of the faults, producing, in the majority of cases, corrections of quality comparable to those competent programmers would write; the used computational resources were modest, with an average time per fix below 20 minutes on commodity hardware. These figures compare favorably to the state of the art in automated program fixing, and demonstrate that the AutoFix approach is successfully applicable to reduce the debugging burden in real-world scenarios.
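In the spirit of the generate-and-validate approach the abstract outlines (and not AutoFix's actual algorithm), a Python sketch with hypothetical helpers might look as follows; contracts act as the oracle when candidate fixes are validated.

def repair(program, failing_tests, passing_tests, candidate_patches, apply_patch, passes_contracts):
    validated = []
    for patch in candidate_patches(program, failing_tests):
        fixed = apply_patch(program, patch)
        # A fix is validated if the failing tests now satisfy their contracts
        # and the previously passing tests still do.
        if all(passes_contracts(fixed, t) for t in failing_tests + passing_tests):
            validated.append(patch)
    return validated   # ranking by estimated suitability would follow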

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • An extended version with appendix is available as technical report.
  • This journal paper combines and extends the previous publications [ISSTA'10] and [ASE'11] on AutoFix. In particular, it describes the latest AutoFix implementation and includes a detailed experimental evaluation.
  • The AutoFix project page.
@Article{PFNWMZ-TSE14,
  author = {Yu Pei and Carlo A. Furia and Martin Nordio and Yi Wei and Bertrand Meyer and Andreas Zeller},
  title = {Automated Fixing of Programs with Contracts},
  journal = {IEEE Transactions on Software Engineering},
  volume = {40},
  number = {5},
  pages = {427--449},
  month = {May},
  year = {2014}
}

H.-Christian Estler, Martin Nordio, Carlo A. Furia, and Bertrand Meyer.
Awareness and Merge Conflicts in Distributed Software Development.
In Proceedings of the 9th International Conference on Global Software Engineering (ICGSE).
Pages 26–35, IEEE Computer Society, August 2014.

Abstract
Collaborative software development requires programmers to coordinate their work and merge individual contributions into a consistent shared code base. Traditionally, coordination follows a series of "update-modify-commit" cycles, where merge conflicts arise upon committing if individual modifications have diverged and must be explicitly reconciled. Researchers have been suggesting that providing timely awareness information about "who's changing what" may not only help deal with conflicts but, more generally, improve the effectiveness of collaboration.
This paper investigates the impact of awareness information in the context of globally distributed software development. Based on an analysis of data from 105 student developers constituting 12 development teams located in different countries, we analyze, among other things: 1) the frequency of merge conflicts and insufficient awareness; 2) the impact of distribution on team awareness; 3) the perceived impact of conflicts and lack of awareness on productivity, motivation, and project punctuality. Our findings include: 1) lack of awareness occurs more frequently than merge conflicts; 2) information about remote team members is missing roughly as often as information about co-located ones; 3) insufficient awareness information affects programmers' performance more negatively than merge conflicts do.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • This paper won the best paper award at ICGSE 2014.
@InProceedings{ENFM-ICGSE14,
  author = {H.-Christian Estler and Martin Nordio and Carlo A. Furia and Bertrand Meyer},
  title = {Awareness and Merge Conflicts in Distributed Software Development},
  booktitle = {Proceedings of the 9th International Conference on Global Software Engineering (ICGSE)},
  editor    = {Yuanfang Cai and Jude Fernandez and Wenyun Zhao},
  pages = {26--35},
  year = {2014},
  month = {August},
  publisher = {IEEE Computer Society},
  acceptancerate = {36\%},
  note = {Best paper award}
}

Nadia Polikarpova, Julian Tschannen, Carlo A. Furia, and Bertrand Meyer.
Flexible Invariants Through Semantic Collaboration.
In Proceedings of the 19th International Symposium on Formal Methods (FM).
Lecture Notes in Computer Science, 8442:514–530, Springer, May 2014.

Abstract
Modular reasoning about class invariants is challenging in the presence of collaborating objects that need to maintain global consistency. This paper presents semantic collaboration: a novel methodology to specify and reason about class invariants of sequential object-oriented programs, which models dependencies between collaborating objects by semantic means. Combined with a simple ownership mechanism and useful default schemes, semantic collaboration achieves the flexibility necessary to reason about complicated inter-object dependencies but requires limited annotation burden when applied to standard specification patterns. The methodology is implemented in AutoProof, our program verifier for the Eiffel programming language (but it is applicable to any language supporting some form of representation invariants). An evaluation on several challenge problems proposed in the literature demonstrates that it can handle a variety of idiomatic collaboration patterns, and is more widely applicable than the existing invariant methodologies.

  • The paper in PDF format (updated and improved text) and its DOI.
  • An extended version is available as technical report.
  • Supplementary material and online examples are available on the SC project page.
@InProceedings{PTFM-FM14-SemiCola,
  author = {Nadia Polikarpova and Julian Tschannen and Carlo A. Furia and Bertrand Meyer},
  title = {Flexible Invariants Through Semantic Collaboration},
  booktitle = {Proceedings of the 19th International Symposium on Formal Methods (FM)},
  year = {2014},
  editor = {Cliff B. Jones and Pekka Pihlajasaari and Jun Sun},
  series = {Lecture Notes in Computer Science},
  pages = {514--530},
  volume = {8442},
  month = {May},
  publisher = {Springer},
  acceptancerate = {28\%}
}

H.-Christian Estler, Carlo A. Furia, Martin Nordio, Marco Piccioni, and Bertrand Meyer.
Contracts in Practice.
In Proceedings of the 19th International Symposium on Formal Methods (FM).
Lecture Notes in Computer Science, 8442:230–246, Springer, May 2014.

Abstract
Contracts are a form of lightweight formal specification embedded in the program text. Being executable parts of the code, they encourage programmers to devote proper attention to specifications, and help maintain consistency between specification and implementation as the program evolves. The present study investigates how contracts are used in the practice of software development. Based on an extensive empirical analysis of 21 contract-equipped Eiffel, C#, and Java projects totaling more than 260 million lines of code over 7700 revisions, it explores, among other questions: 1) which kinds of contract elements (preconditions, postconditions, class invariants) are used more often; 2) how contracts evolve over time; 3) the relationship between implementation changes and contract changes; and 4) the role of inheritance in the process. It has found, among other results, that: the percentage of program elements that include contracts is above 33% for most projects and tends to be stable over time; there is no strong preference for a certain type of contract element; contracts are quite stable compared to implementations; and inheritance does not significantly affect qualitative trends of contract usage.

@InProceedings{EFNPM-FM14-Coat,
  author = {H.-Christian Estler and Carlo A. Furia and Martin Nordio and Marco Piccioni and Bertrand Meyer},
  title = {Contracts in Practice},
  booktitle = {Proceedings of the 19th International Symposium on Formal Methods (FM)},
  year = {2014},
  editor = {Cliff B. Jones and Pekka Pihlajasaari and Jun Sun},
  series = {Lecture Notes in Computer Science},
  pages = {230--246},
  volume = {8442},
  month = {May},
  publisher = {Springer},
  acceptancerate = {28\%}
}

Yu Pei, Carlo A. Furia, Martin Nordio, and Bertrand Meyer.
Automatic Program Repair by Fixing Contracts.
In Proceedings of the 17th International Conference on Fundamental Approaches to Software Engineering (FASE).
Lecture Notes in Computer Science, 8411:246–260, Springer, April 2014.

Abstract
While most debugging techniques focus on patching implementations, there are bugs whose most appropriate corrections consist in fixing the specification to prevent invalid executions—such as to define the correct input domain of a function. In this paper, we present a fully automatic technique that fixes bugs by proposing changes to contracts (simple executable specification elements such as pre- and postconditions). The technique relies on dynamic analysis to understand the source of buggy behavior, to infer changes to the contracts that emend the bugs, and to validate the changes against general usage. We have implemented the technique in a tool called SpeciFix, which works on programs written in Eiffel, and evaluated it on 44 bugs found in standard data-structure libraries. Manual analysis by human programmers found that SpeciFix suggested repairs that are deployable for 25% of the faults; in most cases, these contract repairs were preferred over fixes for the same bugs that change the implementation.

  • The paper in PDF format (updated, improved, and extended text with appendix) and its DOI.
  • The SpeciFix project page.
@InProceedings{PFNM-FASE14,
  author = {Yu Pei and Carlo A. Furia and Martin Nordio and Bertrand Meyer},
  title = {Automatic Program Repair by Fixing Contracts},
  booktitle = {Proceedings of the 17th International Conference on Fundamental Approaches to Software Engineering (FASE)},
  year = {2014},
  editor = {Stefania Gnesi and Arend Rensink},
  series = {Lecture Notes in Computer Science},
  pages = {246--260},
  volume = {8411},
  month = {April},
  publisher = {Springer},
  acceptancerate = {23\%}
}

Julian Tschannen, Carlo A. Furia, and Martin Nordio.
AutoProof Meets Some Verification Challenges.
International Journal on Software Tools for Technology Transfer, 17(6):745–755.
Springer, October 2015.

Abstract
AutoProof is an automatic verifier for functional properties of programs written in Eiffel. This paper illustrates some of AutoProof's capabilities when tackling the three challenges of the VerifyThis verification competition held at FM 2012, as well as on three other problems proposed in related events. AutoProof's design focuses on making it practically applicable with reduced user effort. Tackling the challenges demonstrates to what extent this design goal is met in the current implementation: while some of AutoProof's current limitations prevent us from verifying the complete specification of the prefix sum and binary search tree algorithms, we can still prove some partial properties on interesting special cases, but with the advantage of requiring little or no specification.

@Article{TFN-STTT14,
  author = {Julian Tschannen and Carlo A. Furia and Martin Nordio},
  title = {{A}uto{P}roof Meets Some Verification Challenges},
  journal = {International Journal on Software Tools for Technology Transfer},
  year = {2015},
  volume = {17},
  number = {6}, 
  pages = {745--755},
  month = {October},
  publisher = {Springer},
  note = {Online since February 2014. Special section on the VerifyThis 2012 Verification Competition.}
}

Carlo A. Furia, Bertrand Meyer, and Sergey Velder.
Loop Invariants: Analysis, Classification, and Examples.
ACM Computing Surveys, 46(3), Article 34, January 2014.

Abstract
Software verification has emerged as a key concern for ensuring the continued progress of information technology. Full verification generally requires, as a crucial step, equipping each loop with a "loop invariant". Beyond their role in verification, loop invariants help program understanding by providing fundamental insights into the nature of algorithms. In practice, finding sound and useful invariants remains a challenge. Fortunately, many invariants seem intuitively to exhibit a common flavor. Understanding these fundamental invariant patterns could therefore provide help for understanding and verifying a large variety of programs.
We performed a systematic identification, validation, and classification of loop invariants over a range of fundamental algorithms from diverse areas of computer science. This article analyzes the patterns, as uncovered in this study, governing how invariants are derived from postconditions; it proposes a taxonomy of invariants according to these patterns, and presents its application to the algorithms reviewed. The discussion also shows the need for high-level specifications based on "domain theory". It describes how the invariants and the corresponding algorithms have been mechanically verified using an automated program prover; the proof source files are available. The contributions also include suggestions for invariant inference and for model-based specification.
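As a worked instance of one of the patterns the article discusses (often called constant relaxation), consider computing the maximum of a list: the invariant is obtained from the postcondition by replacing the constant bound len(a) with the loop variable i. The Python rendering below is illustrative and is not taken from the article's proof files.

def max_of(a):
    assert len(a) > 0                      # precondition: non-empty list
    result, i = a[0], 1
    while i < len(a):
        # invariant: 1 <= i <= len(a) and result == max(a[:i])
        assert result == max(a[:i])
        if a[i] > result:
            result = a[i]
        i += 1
    assert result == max(a)                # postcondition: result == max(a[:len(a)])
    return result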

@Article{FMV-CSUR14,
  author = {Carlo A. Furia and Bertrand Meyer and Sergey Velder},
  title = {Loop Invariants: Analysis, Classification, and Examples},
  journal = {ACM Computing Surveys},
  volume = {46},
  number = {3},
  year = {2014},
  pages = {Article 34},
  publisher = {ACM},
  month = {January},
}

Hans-Christian Estler, Martin Nordio, Carlo A. Furia, Bertrand Meyer, and Johannes Schneider.
Agile vs. Structured Distributed Software Development: A Case Study.
Empirical Software Engineering, 19(5):1197–1224.
Springer, October 2014.

Abstract
In globally distributed software development, does it matter being agile rather than structured? To answer this question, this paper presents an extensive case study that compares agile (Scrum, XP, etc.) vs. structured (RUP, waterfall) processes to determine if the choice of process impacts aspects such as the overall success and economic savings of distributed projects, the motivation of the development teams, the amount of communication required during development, and the emergence of critical issues.
The case study includes data from 66 projects developed in Europe, Asia, and the Americas. The results show no significant difference between the outcome of projects following agile processes and structured processes, suggesting that agile and structured processes can be equally effective for globally distributed development. The paper also discusses several qualitative aspects of distributed software development such as the advantages of nearshore vs. offshore, the preferred communication patterns, and the effects on project quality.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • This paper is an extended version of the conference paper with the same title; it was invited to a special issue of the Empirical Software Engineering journal with the best papers from ICGSE 2012.
@Article{ENFMS-ESE13,
  author = {Hans-Christian Estler and Martin Nordio and Carlo A. Furia and Bertrand Meyer and Johannes Schneider},
  title = {Agile vs.\ structured distributed software development: A case study},
  journal = {Empirical Software Engineering},
  year = {2014},
  volume = {19},
  number = {5}, 
  pages = {1197--1224},
  month = {October},
  publisher = {Springer},
  note = {Online since August 2013}
}

Nadia Polikarpova, Carlo A. Furia, and Scott West.
To Run What No One Has Run Before: Executing an Intermediate Verification Language.
In Proceedings of the 4th International Conference on Runtime Verification (RV).
Lecture Notes in Computer Science, 8174:251–268, Springer, September 2013.

Abstract
When program verification fails, it is often hard to understand what went wrong in the absence of concrete executions that expose parts of the implementation or specification responsible for the failure. Automatic generation of such tests would require "executing" the complex specifications typically used for verification (with unbounded quantification and other expressive constructs), something beyond the capabilities of standard testing tools.
This paper presents a technique to automatically generate executions of programs annotated with complex specifications, and its implementation for the Boogie intermediate verification language. Our approach combines symbolic execution and SMT constraint solving to generate small tests that are easy to read and understand. The evaluation on several program verification examples demonstrates that our test case generation technique can help understand failed verification attempts in conditions where traditional testing is not applicable, thus making formal verification techniques easier to use in practice.
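The core combination of symbolic execution and SMT solving can be sketched with the z3 Python bindings; this stands in for the paper's Boogie-based tool, which works at the level of the intermediate language rather than on toy constraints like these.

from z3 import Int, Solver, sat

x = Int("x")                            # symbolic input
path_condition = [x > 0, 2 * x < 10]    # constraints collected along one execution path

solver = Solver()
solver.add(*path_condition)
if solver.check() == sat:
    model = solver.model()
    print("small concrete test: x =", model[x])   # e.g., x = 1 exercises this path
else:
    print("this path is infeasible")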

@InProceedings{PFW-RV13,
  author = {Nadia Polikarpova and Carlo A. Furia and Scott West},
  title = {To Run What No One Has Run Before: Executing an Intermediate Verification Language},
  booktitle = {Proceedings of the 4th International Conference on Runtime Verification (RV)},
  year = {2013},
  editor = {Axel Legay and Saddek Bensalem},
  series = {Lecture Notes in Computer Science},
  pages = {251--268},
  volume = {8174},
  month = {September},
  publisher = {Springer},
  acceptancerate = {41\%}
}

Marco Piccioni, Carlo A. Furia, and Bertrand Meyer.
An Empirical Study of API Usability.
In Proceedings of the 7th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM).
Pages 5–14, IEEE Computer Society, October 2013.

Abstract
Modern software development extensively involves reusing library components accessed through their Application Programming Interfaces (APIs). Usability is therefore a fundamental goal of API design, but rigorous empirical studies of API usability are still relatively uncommon. In this paper, we present the design of an API usability study which combines interview questions, based on the cognitive dimensions framework, with systematic observations of programmer behavior while solving programming tasks, based on "tokens". We also discuss the implementation of the study to assess the usability of a persistence library API (offering functionalities such as storing objects into relational databases). The study involved 25 programmers (including students, researchers, and professionals), and provided additional evidence to some critical features evidenced by related studies, such as the difficulty of finding good names for API features and of discovering relations between API types. It also discovered new issues relevant to API design, such as the impact of flexibility, and confirmed the crucial importance of accurate documentation for usability.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
@InProceedings{PFM-ESEM13,
  author = {Marco Piccioni and Carlo A. Furia and Bertrand Meyer},
  title = {An Empirical Study of {API} Usability},
  booktitle = {Proceedings of the 7th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)},
  editor    = {Forrest Shull and Carolyn Seaman and Guilherme Horta Travassos and Oscar Pastor and Clemente Izurieta and Emilia Mendes and Lorin Hochstein},
  pages = {5--14},
  year = {2013},
  month = {October},
  publisher = {IEEE Computer Society},
  acceptancerate = {27\%}
}

Julian Tschannen, Carlo A. Furia, Martin Nordio, and Bertrand Meyer.
Program Checking With Less Hassle.
In Proceedings of the 5th Working Conference on Verified Software: Theories, Tools and Experiments (VSTTE 2013).
Lecture Notes in Computer Science, 8164:149–169, Springer, 2014.

Abstract
The simple and often imprecise specifications that programmers may write are a significant limit to a wider application of rigorous program verification techniques. Part of the reason why non-specialists find writing good specifications hard is that, when verification fails, they receive little guidance as to what the causes might be, such as implementation errors or inaccurate specifications. To address these limitations, this paper presents two-step verification, a technique that combines implicit specifications, inlining, and loop unrolling to provide improved user feedback when verification fails. Two-step verification performs two independent verification attempts for each program element: one using standard modular reasoning, and another one after inlining and unrolling; comparing the outcomes of the two steps suggests which elements should be improved. Two-step verification is implemented in AutoProof, our static verifier for Eiffel programs integrated in EVE (the Eiffel Verification Environment) and available online.
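One plausible way to read the outcome comparison, sketched in Python; the diagnosis labels are illustrative assumptions, not the paper's exact feedback messages.

def two_step_feedback(modular_ok, inlined_ok):
    # modular_ok: outcome of standard modular verification;
    # inlined_ok: outcome after inlining calls and unrolling loops.
    if modular_ok:
        return "verified"
    if inlined_ok:
        return "code looks correct up to the unrolling bound: specifications may be too weak"
    return "failure persists even after inlining: likely an implementation or specification error"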

@InProceedings{TFNM-VSTTE13,
  author = {Julian Tschannen and Carlo A. Furia and Martin Nordio and Bertrand Meyer},
  title = {Program Checking With Less Hassle},
  booktitle = {Proceedings of the 5th Working Conference on Verified Software: Theories, Tools and Experiments (VSTTE 2013)},
  year = {2014},
  editor = {Ernie Cohen and Andrey Rybalchenko},
  series = {Lecture Notes in Computer Science},
  pages = {149--169},
  volume = {8164},
  publisher = {Springer},
  acceptancerate = {48\%}
}

H.-Christian Estler, Martin Nordio, Carlo A. Furia, and Bertrand Meyer.
Collaborative Debugging.
In Proceedings of the 8th International Conference on Global Software Engineering (ICGSE).
Pages 110–119, IEEE Computer Society, August 2013.

Abstract
Debugging—the process of finding and correcting programming mistakes—also faces the challenges of distributed and collaborative development. The debugging tools commonly used by programmers are integrated into traditional development environments such as Eclipse or VisualStudio, and hence do not offer specific features for collaboration or remote shared usage. In this paper, we describe CDB, a debugging technique and integrated tool specifically designed to support effective collaboration among developers during shared debugging sessions. We also discuss the design and results of an empirical study aimed at identifying features that can improve the effectiveness of collaborative debugging processes, and at evaluating the usefulness of our CDB collaborative debugging approach. The study suggests that CDB's collaboration features are often perceived as important for effective debugging, and can improve the overall debugging experience in collaborative settings.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • This paper won the best paper award at ICGSE 2013.
  • CDB is part of the CloudStudio web-based IDE.
@InProceedings{ENFM-ICGSE13,
  author = {H.-Christian Estler and Martin Nordio and Carlo A. Furia and Bertrand Meyer},
  title = {Collaborative Debugging},
  booktitle = {Proceedings of the 8th International Conference on Global Software Engineering (ICGSE)},
  editor    = {Marcelo Cataldo and J{\"u}rgen M{\"u}nch and Filippo Lanubile},
  pages = {110--119},
  year = {2013},
  month = {August},
  publisher = {IEEE Computer Society},
  acceptancerate = {34\%},
  note = {Best paper award}
}

Marco Trudel, Carlo A. Furia, Martin Nordio, and Bertrand Meyer.
Really Automatic Scalable Object-Oriented Reengineering.
In Proceedings of the 27th European Conference on Object-Oriented Programming (ECOOP).
Lecture Notes in Computer Science, 7920:477–501, Springer, July 2013.

Abstract
Even when implemented in a purely procedural programming language, properly designed programs possess elements of good design that are expressible through object-oriented constructs and concepts. For example, placing structured types and the procedures operating on them together in the same module achieves a weak form of encapsulation that reduces inter-module coupling. This paper presents a novel technique, and a supporting tool AutoOO, that extracts such implicit design elements from C applications and uses them to build reengineered object-oriented programs. The technique is completely automatic: users only provide a source C program, and the tool produces an object-oriented application written in Eiffel with the same input/output behavior as the source. An extensive evaluation on 10 open-source programs (including the editor vim and the math library libgsl) demonstrates that our technique works on applications of significant size and builds reengineered programs exhibiting elements of good object-oriented design, such as low coupling and high cohesion of classes, and proper encapsulation. The reengineered programs also leverage advanced features such as inheritance, contracts, and exceptions to achieve a better usability and a clearer design. The tool AutoOO is freely available for download.
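A toy rendering of the grouping idea mentioned in the abstract (clustering procedures with the structured type they operate on); keying on the type of the first parameter is an assumption made for illustration, not AutoOO's actual analysis.

from collections import defaultdict

def group_into_candidate_classes(functions):
    # functions: list of (function_name, type_of_first_parameter_or_None) pairs.
    classes = defaultdict(list)
    for name, first_param_type in functions:
        classes[first_param_type or "GLOBALS"].append(name)
    return dict(classes)

# group_into_candidate_classes([("list_append", "list_t*"), ("list_len", "list_t*"), ("main", None)])
# -> {"list_t*": ["list_append", "list_len"], "GLOBALS": ["main"]}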

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • AutoOO has been successfully evaluated by the ECOOP artifact evaluation committee and found to meet expectations.
  • It has also been invited to ECOOP's Demonstrations satellite event.
  • AutoOO is freely available for download as part of the C2Eif project.
@InProceedings{TFNM-ECOOP13,
  author = {Marco Trudel and Carlo A. Furia and Martin Nordio and Bertrand Meyer},
  title = {Really Automatic Scalable Object-Oriented Reengineering},
  booktitle = {Proceedings of the 27th European Conference on Object-Oriented Programming (ECOOP)},
  year = {2013},
  editor = {Giuseppe Castagna},
  series = {Lecture Notes in Computer Science},
  pages = {477--501},
  volume = {7920},
  month = {July},
  publisher = {Springer},
  acceptancerate = {25\%}
}

H.-Christian Estler, Martin Nordio, Carlo A. Furia, and Bertrand Meyer.
Unifying Configuration Management with Merge Conflict Detection and Awareness Systems.
In Proceedings of the 22nd Australasian Software Engineering Conference (ASWEC).
Pages 201–210, IEEE Computer Society, June 2013.

Abstract
As software development becomes an increasingly collaborative effort, traditional development tools have to be extended to support seamless collaboration while minimizing the chances of conflicts. This paper describes CloudStudio, a collaboration framework that integrates a fine-grained software configuration management model and a real-time awareness system. CloudStudio's configuration management operates transparently by automatically sharing the changes of developers working on the same project; the real-time awareness system allows for dynamic views on the project selectively including or excluding other developers' changes. With this tight integration, conflicts are prevented in many cases, while leaving individual developers free to experiment without blocking others. The paper also describes a freely available prototype web-based implementation of CloudStudio and a case study that demonstrates the usability of the approach for collaborative software development.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • This technical report describes a preliminary version of the work; it is superseded by this conference paper.
  • The CloudStudio web-based IDE is available online.
@InProceedings{ENFM-ASWEC13,
  author = {H.-Christian Estler and Martin Nordio and Carlo A. Furia and Bertrand Meyer},
  title = {Unifying Configuration Management with Merge Conflict Detection and Awareness Systems},
  booktitle = {Proceedings of the 22nd Australasian Software Engineering Conference (ASWEC)},
  editor    = {Jens Dietrich and James Noble},
  pages = {201--210},
  year = {2013},
  month = {June},
  publisher = {IEEE Computer Society},
  acceptancerate = {39\%}
}

Nadia Polikarpova, Carlo A. Furia, Yu Pei, Yi Wei, and Bertrand Meyer.
What Good Are Strong Specifications?
In Proceedings of the 35th International Conference on Software Engineering (ICSE).
Pages 257–266, ACM, May 2013.

Abstract
Experience with lightweight formal methods suggests that programmers are willing to write specification if it brings tangible benefits to their usual development activities. This paper considers stronger specifications and studies whether they can be deployed as an incremental practice that brings additional benefits without being unacceptably expensive. We introduce a methodology that extends Design by Contract to write strong specifications of functional properties in the form of preconditions, postconditions, and invariants. The methodology aims at being palatable to developers who are not fluent in formal techniques but are comfortable with writing simple specifications. We evaluate the cost and the benefits of using strong specifications by applying the methodology to testing data structure implementations written in Eiffel and C#. In our extensive experiments, testing against strong specifications detects twice as many bugs as standard contracts, with a reasonable overhead in terms of annotation burden and run-time performance while testing. In the wide spectrum of formal techniques for software quality, testing against strong specifications lies in a "sweet spot" with a favorable benefit to effort ratio.
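
To give a flavor of the difference between a standard contract and a strong specification, the hypothetical Python sketch below contrasts the two on a stack's pop operation (the paper's methodology targets Eiffel and C#; the code is only an illustration):

# Hypothetical sketch, not the paper's Eiffel/C# methodology.
class Stack:
    def __init__(self) -> None:
        self.items: list[int] = []

    def push(self, value: int) -> None:
        self.items.append(value)

    def pop(self) -> int:
        assert self.items, "precondition: stack not empty"
        old_items = list(self.items)               # snapshot of the pre-state
        result = self.items.pop()
        # standard contract: only the size is constrained
        assert len(self.items) == len(old_items) - 1
        # strong specification: result and the entire remaining content are fixed
        assert result == old_items[-1]
        assert self.items == old_items[:-1]
        return result

Testing against the second group of assertions can reveal faults that the size-only postcondition would miss, which is the effect the experiments quantify.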

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • An extended version of the paper is available as technical report.
  • A repository with supplementary material of the case study. The instrumentation tool runmbc (pun unintended) used for the paper's experiments is available as part of EiffelBase2.
  • Nadia also prepared a stylish teaser video.
@InProceedings{PFPWM-ICSE13,
  author = {Nadia Polikarpova and Carlo A. Furia and Yu Pei and Yi Wei and Bertrand Meyer},
  title = {What Good Are Strong Specifications?},
  booktitle = {Proceedings of the 35th International Conference on Software Engineering (ICSE)},
  pages = {257--266},
  year = {2013},
  editor = {David Notkin and Betty H.C. Cheng and Klaus Pohl},
  month = {May},
  publisher = {ACM},
  acceptancerate = {18\%}
}

Martin Nordio, Cristiano Calcagno, and Carlo A. Furia.
Javanni: A Verifier for JavaScript.
In Proceedings of the 16th International Conference on Fundamental Approaches to Software Engineering (FASE).
Lecture Notes in Computer Science, 7793:231–234, Springer, March 2013 (Tool demonstration paper).

Abstract
JavaScript ranks among the most popular programming languages for the web, yet its highly dynamic type system and occasionally unintuitive semantics make programming particularly error-prone. This paper presents Javanni, a verifier for JavaScript programs that can statically detect many common programming errors. Javanni checks the absence of standard type-related errors (such as accessing undefined fields) without requiring user-written annotations, and it can also verify full functional-correctness specifications. Several experiments with JavaScript applications reported in the paper demonstrate that Javanni is flexibly usable on programs with non-trivial specifications. Javanni is available online within the CloudStudio web integrated environment.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • A video demo of the tool.
@InProceedings{NCF13-FASE13,
  author = {Martin Nordio and Cristiano Calcagno and Carlo A. Furia},
  title = {{J}avanni: A Verifier for {J}ava{S}cript},
  booktitle = {Proceedings of the 16th International Conference on Fundamental Approaches to Software Engineering (FASE)},
  year = {2013},
  editor = {Vittorio Cortellessa and Daniel Varr{\'o}},
  series = {Lecture Notes in Computer Science},
  pages = {231--234},
  volume = {7793},
  month = {March},
  publisher = {Springer},
  note = {Tool demonstration paper},
  acceptancerate = {23\%}
}

Carlo A. Furia, Bertrand Meyer, Manuel Oriol, Andrey Tikhomirov, and Yi Wei.
The Search for the Laws of Automatic Random Testing.
In Proceedings of the 28th ACM Symposium on Applied Computing (SAC 2013), Software Verification and Testing Track.
Pages 1214–1219, ACM, March 2013.

Abstract
Can one estimate the number of remaining faults in a software system? A credible estimation technique would be immensely useful to project managers as well as customers. It would also be of theoretical interest, as a general law of software engineering. We investigate possible answers in the context of automated random testing, a method that is increasingly accepted as an effective way to discover faults. Our experimental results, derived from best-fit analysis of a variety of mathematical functions, based on a large number of automated tests of library code equipped with automated oracles in the form of contracts, suggest a poly-logarithmic law. Although further confirmation remains necessary on different code bases and testing techniques, we argue that understanding the laws of testing may bring significant benefits for estimating the number of detectable faults and comparing different projects and practices.
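
The best-fit analysis mentioned above can be sketched as follows; the data points and the specific functional form are placeholders for illustration only (the paper's experiments suggest a poly-logarithmic law):

# Illustrative sketch with made-up data; assumes numpy and scipy are available.
import numpy as np
from scipy.optimize import curve_fit

def poly_log(n, a, b, c):
    # candidate law: faults found after n random tests grow poly-logarithmically
    return a * np.log(n) ** b + c

tests = np.array([1e3, 1e4, 1e5, 1e6, 1e7])        # number of generated tests
faults = np.array([3.0, 7.0, 12.0, 18.0, 25.0])    # cumulative faults found (fictional)

params, _ = curve_fit(poly_log, tests, faults, p0=(1.0, 1.0, 0.0))
print("fitted (a, b, c):", params)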

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • An extended version is available as technical report.
@InProceedings{FMOTW13-SAC13,
  author = {Carlo A. Furia and Bertrand Meyer and Manuel Oriol and Andrey Tikhomirov and Yi Wei},
  title = {The Search for the Laws of Automatic Random Testing},
  booktitle = {Proceedings of the 28th ACM Symposium on Applied Computing (SAC 2013)},
  pages = {1214--1219},
  year = {2013},
  month = {March},
  publisher = {ACM},
  note = {Software Verification and Testing Track},
  acceptancerate = {23\%}
}

Julian Tschannen, Carlo A. Furia, Martin Nordio, and Bertrand Meyer.
Automatic Verification of Advanced Object-Oriented Features: The AutoProof Approach.
In Tools for Practical Software Verification – LASER 2011, International Summer School.
Lecture Notes in Computer Science, 7682:133–155, Springer, 2012.

Abstract
Static program verifiers such as Spec#, Dafny, jStar, and VeriFast define the state of the art in automated functional verification techniques. The next open challenge is to make verification tools usable even by programmers not fluent in formal techniques. This paper discusses some techniques used in AutoProof, a verification tool that translates Eiffel programs to Boogie and uses the Boogie verifier to prove them. In an effort to be usable with real programs, AutoProof fully supports several advanced object-oriented features including polymorphism, inheritance, and function objects. AutoProof also adopts simple strategies to reduce the amount of annotations needed when verifying programs (e.g., frame conditions). The paper illustrates the main features of AutoProof's translation, including some whose implementation is underway, and demonstrates them with examples and a case study.

@InProceedings{TFNM12-LASER11,
  author = {Julian Tschannen and Carlo A. Furia and Martin Nordio and Bertrand Meyer},
  title = {Automatic Verification of Advanced Object-Oriented Features: The {AutoProof} Approach},
  booktitle = {Tools for Practical Software Verification -- LASER 2011, International Summer School},
  pages = {133--155},
  year = {2012},
  editor = {Bertrand Meyer and Martin Nordio},
  volume = {7682},
  series = {Lecture Notes in Computer Science},
  publisher = {Springer}
}

Marco Trudel, Carlo A. Furia, Martin Nordio, Bertrand Meyer, and Manuel Oriol.
C to O-O Translation: Beyond the Easy Stuff.
In Proceedings of the 19th Working Conference on Reverse Engineering (WCRE'12).
Pages 19–28, IEEE Computer Society, October 2012.

Abstract
Can we reuse some of the huge code-base developed in C to take advantage of modern programming language features such as type safety, object-orientation, and contracts? This paper presents a source-to-source translation of C code into Eiffel, a modern object-oriented programming language, and the supporting tool C2Eif. The translation is completely automatic and supports the entire C language (ANSI, as well as many GNU C Compiler extensions, through CIL) as used in practice, including its usage of native system libraries and inlined assembly code. Our experiments show that C2Eif can handle C applications and libraries of significant size (such as vim and libgsl), as well as challenging benchmarks such as the GCC torture tests. The produced Eiffel code is functionally equivalent to the original C code, and takes advantage of some of Eiffel's features to produce safe and easy-to-debug translations.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • An extended version of the paper is available as technical report.
  • The C2Eif project page.
  • Marco Trudel's spinoff company mtSystems offers C translation services based on some of the techniques first described in this paper (as well as in the companion papers WCRE'12 tool demo and ECOOP'13).
@InProceedings{TFNMO12-WCRE12,
  author = {Marco Trudel and Carlo A. Furia and Martin Nordio and Bertrand Meyer and Manuel Oriol},
  title = {{C} to {O-O} Translation: Beyond the Easy Stuff},
  booktitle = {Proceedings of the 19th Working Conference on Reverse Engineering (WCRE'12)},
  editor    = {Rocco Oliveto and Denys Poshyvanyk and James Cordy and Thomas Dean},
  pages = {19--28},
  year = {2012},
  month = {October},
  publisher = {IEEE Computer Society},
  acceptancerate = {33\%}
}

Marco Trudel, Carlo A. Furia, and Martin Nordio.
Automatic C to O-O Translation with C2Eiffel.
In Proceedings of the 19th Working Conference on Reverse Engineering (WCRE'12).
Tool demonstration paper, pages 508–509, IEEE Computer Society, October 2012.

Abstract
C2Eiffel is a fully automatic source-to-source translator of C applications into the Eiffel object-oriented programming language. C2Eiffel supports the complete C language, including function pointers, unrestricted pointer arithmetic and jumps, arbitrary native libraries, and inlined assembly code. It produces readable Eiffel code that behaves as the source C application; it takes advantage of some of Eiffel's object-oriented features to produce translations that are easy to maintain and debug, and often even safer than their sources thanks to stricter correctness checks introduced automatically. Experiments show that C2Eiffel handles C applications of significant size (such as vim and libgsl); it is a fully automatic tool suitable to reuse C code within a high-level object-oriented programming language.

@InProceedings{TFN12-WCRE12-tool,
  author = {Marco Trudel and Carlo A. Furia and Martin Nordio},
  title = {Automatic {C} to {O-O} Translation with {C2Eiffel}},
  booktitle = {Proceedings of the 19th Working Conference on Reverse Engineering (WCRE'12)},
  editor    = {Rocco Oliveto and Denys Poshyvanyk and James Cordy and Thomas Dean},
  pages = {508--509},
  year = {2012},
  month = {October},
  publisher = {IEEE Computer Society},
  note = {Tool demonstration paper}
}

Carlo A. Furia.
A Verifier for Functional Properties of Sequence-Manipulating Programs.
In Proceedings of the 10th International Symposium on Automated Technology for Verification and Analysis (ATVA'12).
Lecture Notes in Computer Science, 7561:183–186, Springer, October 2012.

Abstract
Many programs operate on data structures whose models are sequences, such as arrays, lists, and queues. When specifying and verifying functional properties of such programs, it is convenient to use an assertion language and a reasoning engine that incorporate sequences natively. This paper presents qfis, a program verifier geared to sequence-manipulating programs. qfis is a command-line tool that inputs annotated programs, generates the verification conditions that establish their correctness, tries to discharge them by calls to the SMT-solver CVC3, and reports the outcome back to the user. qfis can be used directly or as a back-end of more complex programming languages.

  • The tool paper in PDF format (updated, improved, and extended text) and its DOI.
  • The QFIS verifier is available for download on this page, which also includes a user manual and a video demo.
@InProceedings{Fur12-ATVA12,
  author = {Carlo A. Furia},
  title = {A Verifier for Functional Properties of Sequence-Manipulating Programs},
  booktitle = {Proceedings of the 10th International Symposium on Automated Technology for Verification and Analysis (ATVA'12)},
  pages = {183--186},
  year = {2012},
  editor = {Supratik Chakraborty and Madhavan Mukund},
  volume = {7561},
  series = {Lecture Notes in Computer Science},
  month = {October},
  publisher = {Springer},
  note = {Tool paper},
  acceptancerate = {32\%}
}

Carlo A. Furia, Dino Mandrioli, Angelo Morzenti, and Matteo Rossi.
Modeling Time in Computing.
Monographs in Theoretical Computer Science. An EATCS series. Springer, 2012.

From the Preface
Time has become a part of computing, where it has claimed its own models. This book summarizes several decades of research on developing, analyzing, and applying time models to computing and, more generally, engineering.
Every researcher contributing to such a burgeoning field is forced to consider several available approaches, to understand what they have in common and how they differ. This continued effort prompts a systematic work of comparison, classification, and assessment of the diverse models and methods. This book picks up this call and tries to put it into practice.

  1. It's about time:
    an introduction and overview of the book.
  2. Languages and interpretations:
    what is a modeling language; and a crash course on logic.
  3. Dimensions of the time modeling problem:
    fundamental features of time modeling: discrete vs. dense time, deterministic vs. nondeterministic, time advancement, etc.
  4. Dynamical systems:
    the attributes of time in classic dynamical system theory.
  5. Time in hardware modeling and design:
    how time is modeled from transistors to asynchronous and synchronous logic circuits.
  6. Time in the analysis of algorithms:
    measures of time in computational complexity and automata theory.
  7. Synchronous abstract machines:
    transition systems, finite-state automata, Statecharts, timed automata, etc.
  8. Asynchronous abstract machines: Petri nets:
    Petri nets: untimed, timed, and stochastic.
  9. Logic-based formalisms:
    temporal logics: linear, branching, metric, interval-based, explicit-time, probabilistic, etc.
  10. Algebraic formalisms:
    Communicating Sequential Processes: untimed, timed, and stochastic.
  11. Dual-language approaches:
    the model-checking paradigm, TTM/RTTL, and TRIO-Petri nets.
  12. Time is up:
    conclusive remarks.
  • The book is available online (in electronic and printed versions) from Springer's website.
  • From the same page, you can freely download the detailed table of contents and some excerpts.
@Book{FMMR-TimeBook-12,
  author = {Carlo A. Furia and Dino Mandrioli and Angelo Morzenti and Matteo Rossi},
  title = {Modeling Time in Computing},
  publisher = {Springer},
  year = {2012},
  series = {Monographs in Theoretical Computer Science. An EATCS series},
  isbn = {978-3-642-32331-7}
}

Carlo A. Furia and Sebastian Nanz.
TOOLS Europe 2012 Special Section (editorial).
Journal of Object Technology, Volume 12(3), August 2013.

@article{FN-JOT-TOOLSEurope12,
  author = {Carlo A. Furia and Sebastian Nanz},
  title = {{TOOLS Europe} 2012 Special Section},
  journal = {Journal of Object Technology},
  volume = {12},
  number = {3},
  month = {August},
  year = {2013},
  note = {(editorial)}
}

Carlo A. Furia and Sebastian Nanz, editors.
Objects, Models, Components, Patterns – 50th International Conference, TOOLS 2012, Prague, Czech Republic, May 29–31, 2012. Proceedings.
Lecture Notes in Computer Science 7304, Springer, 2012.

From the Preface
Started in 1989, the TOOLS conference series has played a major role in the development of object technology and, with its emphasis on practically useful results, has contributed to making it mainstream and ubiquitous. This year's 50th edition of the International Conference on Objects, Models, Components, Patterns (TOOLS Europe 2012) appropriately celebrated this "triumph of objects": object technology is now commonplace.

@proceedings{FN-TOOLSEurope12,
  editor    = {Carlo A. Furia and
               Sebastian Nanz},
  title     = {Objects, Models, Components, Patterns -- 50th International
               Conference, TOOLS 2012, Prague, Czech Republic, May 29--31,
               2012. Proceedings},
  booktitle = {TOOLS (50)},
  publisher = {Springer},
  series    = {Lecture Notes in Computer Science},
  volume    = {7304},
  year      = {2012}
}

Carlo A. Furia and Paola Spoletini.
Automata-based Verification of Linear Temporal Logic Models with Bounded Variability.
In Proceedings of the 19th International Symposium on Temporal Representation and Reasoning (TIME'12).
Pages 89–96, IEEE Computer Society, September 2012.

Abstract
A model has variability bounded by v/k when the state changes at most v times over any linear interval containing k time instants. When interpreted over models with bounded variability, specification formulae that contain redundant metric information—through the usage of next operators—can be simplified without affecting their validity. This paper shows how to harness this simplification in practice: we present a translation of LTL into Büchi automata that removes redundant metric information and hence makes verification over models with bounded variability more efficient. To show the feasibility of the approach, we also implement a proof-of-concept translation in ProMeLa and verify it using the off-the-shelf model checker Spin.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
@InProceedings{FS12-TIME12,
  author = {Carlo A. Furia and Paola Spoletini},
  title = {Automata-based Verification of Linear Temporal Logic Models with Bounded Variability},
  booktitle = {Proceedings of the 19th International Symposium on Temporal Representation and Reasoning (TIME'12)},
  editor    = {Ben Moszkowski and Mark Reynolds and Paolo Terenziani},
  pages = {89--96},
  year = {2012},
  month = {September},
  publisher = {IEEE Computer Society},
  acceptancerate = {50\%}
}

H.-Christian Estler, Martin Nordio, Carlo A. Furia, Bertrand Meyer, and Johannes Schneider.
Agile vs. Structured Distributed Software Development: A Case Study.
In Proceedings of the 7th International Conference on Global Software Engineering (ICGSE'12).
Pages 11–20, IEEE Computer Society, August 2012.

Abstract
This paper presents a case study on the impact of development processes on the success of globally distributed software projects. The study compares agile (Scrum, XP, etc.) vs. structured (RUP, waterfall) processes to determine if the choice of process impacts: the overall success and economic savings of distributed projects; the importance customers attribute to projects; the motivation of the development teams; and the amount of real-time or asynchronous communication required during project development.
The case study includes data from 66 projects developed in Europe, Asia, and the Americas. The results show no significant difference between the outcome of projects following agile processes and structured processes, suggesting that agile and structured processes can be equally effective for globally distributed development. The paper also discusses several qualitative aspects of distributed software development such as the advantages of nearshore vs. offshore, the preferred communication patterns, and some common critical aspects.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • This paper won the best paper award at ICGSE 2012.
@InProceedings{ENFMS-ICGSE12,
  author = {H.-Christian Estler and Martin Nordio and Carlo A. Furia and Bertrand Meyer and Johannes Schneider},
  title = {Agile vs.\ Structured Distributed Software Development: A Case Study},
  booktitle = {Proceedings of the 7th International Conference on Global Software Engineering (ICGSE'12)},
  editor    = {Erran Carmel and Rini {van Solingen}},
  pages = {11--20},
  year = {2012},
  month = {August},
  publisher = {IEEE Computer Society},
  acceptancerate = {25\%},
  note = {Best paper award}
}

Yi Wei, Hannes Roth, Carlo A. Furia, Yu Pei, Alexander Horton, Michael Steindorfer, Martin Nordio, and Bertrand Meyer.
Stateful Testing: Finding More Errors in Code and Contracts.
In Proceedings of the 26th IEEE/ACM International Conference on Automated Software Engineering (ASE'11).
Pages 440–443, ACM, November 2011.

Abstract
Automated random testing has been shown to be an effective approach to finding faults but still faces a major unsolved issue: how to generate test inputs diverse enough to find many faults and find them quickly. Stateful testing, the automated testing technique introduced in this article, generates new test cases that improve an existing test suite. The generated test cases are designed to violate the dynamically inferred contracts (invariants) characterizing the existing test suite. As a consequence, they are in a good position to detect new errors, and also to improve the accuracy of the inferred contracts by discovering those that are unsound.
Experiments on 13 data structure classes totalling over 28,000 lines of code demonstrate the effectiveness of stateful testing in improving over the results of long sessions of random testing: stateful testing found 68.4% new faults and improved the accuracy of automatically inferred contracts to over 99%, with just a 7% time overhead.
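
In outline, the technique infers contracts from the runs of an existing test suite and then searches for call sequences that violate them: each violation is either a new fault or evidence that an inferred contract is unsound. A self-contained toy sketch of that loop (class, invariant, and call sequences are invented for illustration, and this is not the paper's actual implementation):

# Toy illustration of the stateful-testing loop.
class Counter:
    def __init__(self) -> None:
        self.value = 0
    def increment(self) -> None:
        self.value += 1
    def reset(self) -> None:
        self.value = 0

# "Invariant" inferred from an existing suite that only ever calls increment():
inferred_invariants = [("value never decreases", lambda c, old: c.value >= old)]

def first_violation(call_sequence):
    c = Counter()
    for call in call_sequence:
        old = c.value
        getattr(c, call)()
        for name, holds in inferred_invariants:
            if not holds(c, old):
                return call, name     # a new test exposing an unsound contract (or a fault)
    return None

print(first_violation(["increment", "increment", "reset"]))  # ('reset', 'value never decreases')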

@InProceedings{WRFPHSNM-ASE11,
  author = {Yi Wei and Hannes Roth and Carlo A. Furia and Yu Pei and Alexander Horton and Michael Steindorfer and Martin Nordio and Bertrand Meyer},
  title = {Stateful Testing: Finding More Errors in Code and Contracts},
  booktitle = {Proceedings of the 26th IEEE/ACM International Conference on Automated Software Engineering (ASE'11)},
  editor    = {Perry Alexander and Corina Pasareanu and John Hosking},
  pages = {440--443},
  year = {2011},
  month = {November},
  publisher = {ACM},
  acceptancerate = {43\%}
}

Yu Pei, Yi Wei, Carlo A. Furia, Martin Nordio, and Bertrand Meyer.
Code-Based Automated Program Fixing.
In Proceedings of the 26th IEEE/ACM International Conference on Automated Software Engineering (ASE'11).
Pages 392–395, ACM, November 2011.

Abstract
Many programmers, when they encounter an error, would like to have the benefit of automatic fix suggestions—as long as they are, most of the time, adequate. Initial research in this direction has generally limited itself to specific areas, such as data structure classes with carefully designed interfaces, and relied on simple approaches.
To provide high-quality fix suggestions in a broad area of applicability, the present work relies on the presence of contracts in the code, and on the availability of dynamic analysis to gather evidence on the values taken by expressions derived from the program text.
The ideas have been built into the AutoFix-E2 automatic fix generator. Applications of AutoFix-E2 to general-purpose software, such as a library to manipulate documents, show that the approach provides an improvement over previous techniques, in particular purely model-based approaches.

@InProceedings{PWFNM11-ASE11,
  author = {Yu Pei and Yi Wei and Carlo A. Furia and Martin Nordio and Bertrand Meyer},
  title = {Code-Based Automated Program Fixing},
  booktitle = {Proceedings of the 26th IEEE/ACM International Conference on Automated Software Engineering (ASE'11)},
  editor    = {Perry Alexander and Corina Pasareanu and John Hosking},
  pages = {392--395},
  year = {2011},
  month = {November},
  publisher = {ACM},
  acceptancerate = {43\%}
}

Julian Tschannen, Carlo A. Furia, Martin Nordio, and Bertrand Meyer.
Usable Verification of Object-Oriented Programs by Combining Static and Dynamic Techniques.
In 9th International Conference on Software Engineering and Formal Methods (SEFM'11).
Lecture Notes in Computer Science, 7041:382–398, Springer, November 2011.

Abstract
With formal techniques becoming more and more powerful, the next big challenge is making software verification practical and usable. The Eve verification environment contributes to this goal by seamlessly integrating a static prover and an automatic testing tool into a development environment. The paper discusses the general principles behind the integration of heterogeneous verification tools; the peculiar challenges involved in combining static proofs and dynamic testing techniques; and how the combination, implemented in Eve through a blackboard architecture, can improve the user experience with little overhead over usual development practices. Eve is freely available for download.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • The Eve project page.
@InProceedings{TFNM11-SEFM11,
  author = {Julian Tschannen and Carlo A. Furia and Martin Nordio and Bertrand Meyer},
  title = {Usable Verification of Object-Oriented Programs by Combining Static and Dynamic Techniques},
  booktitle = {9th International Conference on Software Engineering and Formal Methods ({SEFM}'11)},
  pages = {382--398},
  year = {2011},
  editor = {Gilles Barthe and Alberto Pardo and Gerardo Schneider},
  volume = {7041},
  series = {Lecture Notes in Computer Science},
  month = {November},
  publisher = {Springer},
  acceptancerate = {28\%}
}

Carlo A. Furia and Paola Spoletini.
On Relaxing Metric Information in Linear Temporal Logic.
In Proceedings of the 18th International Symposium on Temporal Representation and Reasoning (TIME'11).
Pages 72–79, IEEE Computer Society, September 2011.

Abstract
Metric LTL formulas rely on the next operator to encode time distances, whereas qualitative LTL formulas use only the until operator. This paper shows how to transform any metric LTL formula M into a qualitative formula Q, such that Q is satisfiable if and only if M is satisfiable over words with variability bounded with respect to the largest distances used in M (i.e., occurrences of next), but the size of Q is independent of such distances. Besides the theoretical interest, this result can help simplify the verification of systems with time-granularity heterogeneity, where large distances are required to express the coarse-grain dynamics in terms of fine-grain time units.
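
For a concrete picture (generic, and not the paper's exact construction): metric distances are written in LTL by nesting next operators, so the distance k shows up as formula size,
\[
  \mathbf{X}^{k} q \;=\; \underbrace{\mathbf{X}\,\mathbf{X}\cdots\mathbf{X}}_{k \text{ times}}\, q ,
\]
while a word has variability bounded by v/k when its state changes at most v times over any window of k consecutive instants. The result above trades such chains of next operators for qualitative formulas whose size does not depend on k, at the price of restricting the models to bounded-variability words.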

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • Some technical details omitted from the paper can be found in this technical report.
@InProceedings{FS11-TIME11,
  author = {Carlo A. Furia and Paola Spoletini},
  title = {On Relaxing Metric Information in Linear Temporal Logic},
  booktitle = {Proceedings of the 18th International Symposium on Temporal Representation and Reasoning (TIME'11)},
  editor    = {Carlo Combi and Martin Leucker and Frank Wolter},
  pages = {72--79},
  year = {2011},
  month = {September},
  publisher = {IEEE Computer Society},
  acceptancerate = {44\%}
}

Marco Trudel, Manuel Oriol, Carlo A. Furia, and Martin Nordio.
Automated Translation of Java Source Code to Eiffel.
In Objects, Components, Models, Patterns. 49th International Conference, TOOLS Europe 2011.
Lecture Notes in Computer Science, 6705:20–35, Springer, June 2011.

Abstract
Reusability is an important software engineering concept actively advocated for the last forty years. While reusability has been addressed for systems implemented using the same programming language, existing approaches do not usually handle interoperability between different programming languages. This paper presents a solution for the reuse of Java code within Eiffel programs based on a source-to-source translation from Java to Eiffel. The paper focuses on the critical aspects of the translation and illustrates them by formal means. The translation is implemented in the freely available tool J2Eif; it provides Eiffel replacements for the components of the Java runtime environment, including Java Native Interface services and reflection mechanisms. Our experiments demonstrate the practical usability of the translation scheme and its implementation, and record the performance slow-down compared to custom-made Eiffel applications: automatic translations of java.util data structures, java.io services, and SWT applications can be re-used as Eiffel programs, with the same functionalities as their original Java implementations.

@InProceedings{TOFN11-TOOLS11,
  author = {Marco Trudel and Manuel Oriol and Carlo A. Furia and Martin Nordio},
  title = {Automated Translation of {J}ava Source Code to {E}iffel},
  booktitle = {Objects, Components, Models, Patterns. 49th International Conference, {TOOLS} Europe 2011},
  pages = {20--35},
  year = {2011},
  editor = {Judith Bishop and Antonio Vallecillo},
  volume = {6705},
  series = {Lecture Notes in Computer Science},
  month = {June},
  publisher = {Springer},
  acceptancerate = {28\%}
}

Yi Wei, Carlo A. Furia, Nikolay Kazmin, and Bertrand Meyer.
Inferring Better Contracts.
In Proceedings of the 33rd International Conference on Software Engineering (ICSE'11).
Pages 191–200, ACM, May 2011.

Abstract
Considerable progress has been made towards automatic support for one of the principal techniques available to enhance program reliability: equipping programs with extensive contracts. The results of current contract inference tools are still often unsatisfactory in practice, especially for programmers who already apply some kind of basic Design by Contract discipline, since the inferred contracts tend to be simple assertions—the very ones that programmers find easy to write. We present new, completely automatic inference techniques and a supporting tool, which take advantage of the presence of simple programmer-written contracts in the code to infer sophisticated assertions, involving for example implication and universal quantification.
Applied to a production library of classes covering standard data structures such as linked lists, arrays, stacks, queues and hash tables, the tool is able, entirely automatically, to infer 75% of the complete contracts—contracts yielding the full formal specification of the classes—with very few redundant or irrelevant clauses.
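
The gap between the simple assertions programmers write and the inferred ones involving implication and universal quantification can be pictured with a small hypothetical Python example (the routine and its assertions are invented for illustration; they are not the tool's output):

# Hypothetical illustration of simple vs. inferred, stronger assertions.
def remove_all(xs: list, v) -> list:
    result = [x for x in xs if x != v]
    # simple, programmer-written postcondition:
    assert len(result) <= len(xs)
    # the kind of stronger, quantified assertions the technique aims to infer:
    assert all(x != v for x in result)                # universal quantification
    assert all(x in result for x in xs if x != v)     # implication: kept unless equal to v
    return result

print(remove_all([1, 2, 1, 3], 1))   # [2, 3]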

@InProceedings{WFKM11-ICSE11,
  author = {Yi Wei and Carlo A. Furia and Nikolay Kazmin and Bertrand Meyer},
  title = {Inferring Better Contracts},
  booktitle = {Proceedings of the 33rd International Conference on Software Engineering (ICSE'11)},
  editor    = {Richard N. Taylor and Harald Gall and Nenad Medvidovi{\' c}},
  pages = {191--200},
  year = {2011},
  month = {May},
  publisher = {ACM},
  acceptancerate = {14\%}
}

Carlo A. Furia and Matteo Rossi.
A Theory of Sampling for Continuous-Time Metric Temporal Logic.
ACM Transactions on Computational Logic, 12(1):1–40, October 2010.

Abstract
This article revisits the classical notion of sampling in the setting of real-time temporal logics for the modeling and analysis of systems. The relationship between the satisfiability of Metric Temporal Logic (MTL) formulas over continuous-time models and over discrete-time models is studied. It is shown to what extent discrete-time sequences obtained by sampling continuous-time signals capture the semantics of MTL formulas over the two time domains. The main results apply to "flat" formulas that do not nest temporal operators and can be applied to the problem of reducing the verification problem for MTL over continuous-time models to the same problem over discrete time, resulting in an automated, partial, and practically efficient discretization technique.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • A previous version is available as technical report.
@Article{FR10-TOCL10,
  author = {Carlo A. Furia and Matteo Rossi},
  title = {A Theory of Sampling for Continuous-Time Metric Temporal Logic},
  journal = {ACM Transactions on Computational Logic},
  volume = {12},
  number = {1},
  year = {2010},
  pages = {1--40},
  publisher = {ACM},
  month = {October},
  note = {Article 8}
}

Carlo A. Furia and Bertrand Meyer.
Inferring Loop Invariants Using Postconditions.
In Fields of Logic and Computation: Essays Dedicated to Yuri Gurevich on the Occasion of His 70th Birthday.
Lecture Notes in Computer Science, 6300:277–300, Springer, August 2010.

Abstract
One of the obstacles in automatic program proving is to obtain suitable loop invariants. The invariant of a loop is a weakened form of its postcondition (the loop's goal, also known as its contract); the present work takes advantage of this observation by using the postcondition as the basis for invariant inference, using various heuristics such as "uncoupling" which prove useful in many important algorithms. Thanks to these heuristics, the technique is able to infer invariants for a large variety of loop examples. We present the theory behind the technique, its implementation (freely available for download and currently relying on Microsoft Research's Boogie tool), and the results obtained.
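
The basic move, obtaining the invariant by weakening the postcondition, can be shown on a tiny example; the Python code below is a generic illustration rather than the paper's Boogie-based implementation:

# Generic illustration: the postcondition "result == max(a)" is weakened by
# replacing the full range with the prefix scanned so far, giving the
# invariant "result == max(a[:i])".
def maximum(a: list) -> int:
    assert a, "precondition: non-empty list"
    result, i = a[0], 1
    while i < len(a):
        assert result == max(a[:i])     # invariant: postcondition restricted to the prefix
        if a[i] > result:
            result = a[i]
        i += 1
    assert result == max(a)             # postcondition
    return result

print(maximum([3, 1, 4, 1, 5]))   # 5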

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • A previous version is available as technical report.
@InProceedings{FM10-YG70-post,
  author = {Carlo A. Furia and Bertrand Meyer},
  title = {Inferring Loop Invariants Using Postconditions},
  booktitle = {Fields of Logic and Computation: Essays Dedicated to Yuri Gurevich on the Occasion of His 70th Birthday},
  pages = {277--300},
  year = {2010},
  editor = {Andreas Blass and Nachum Dershowitz and Wolfgang Reisig},
  volume = {6300},
  series = {Lecture Notes in Computer Science},
  month = {August},
  publisher = {Springer}
}

Carlo A. Furia.
What's Decidable About Sequences?
In Proceedings of the 8th International Symposium on Automated Technology for Verification and Analysis (ATVA'10).
Lecture Notes in Computer Science, 6252:128–142, Springer-Verlag, September 2010.

Abstract
We present a first-order theory of (finite) sequences with integer elements, Presburger arithmetic, and regularity constraints, which can model significant properties of data structures such as lists and queues. We give a decision procedure for the quantifier-free fragment, based on an encoding into the first-order theory of concatenation; the procedure has PSPACE complexity. The quantifier-free fragment of the theory of sequences can express properties such as sortedness and injectivity, as well as Boolean combinations of periodic and arithmetic facts relating the elements of the sequence and their positions (e.g., "for all even i's, the element at position i has value i+3 or 2i"). The resulting expressive power is orthogonal to that of the most expressive decidable logics for arrays. Some examples demonstrate that the fragment is also suitable to reason about sequence-manipulating programs within the standard framework of axiomatic semantics.
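
The example properties from the abstract translate directly into executable checks; the Python sketch below only illustrates the flavor of the quantifier-free assertions, not the decision procedure:

def is_sorted(s: list) -> bool:
    return all(s[i] <= s[i + 1] for i in range(len(s) - 1))

def is_injective(s: list) -> bool:
    return len(set(s)) == len(s)

def periodic_arithmetic(s: list) -> bool:
    # "for all even i's, the element at position i has value i+3 or 2i"
    return all(s[i] in (i + 3, 2 * i) for i in range(0, len(s), 2))

print(is_sorted([1, 2, 2, 5]), is_injective([3, 1, 4]), periodic_arithmetic([3, 7, 5, 0, 8]))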

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
  • Some technical details omitted from the paper can be found in this technical report.
  • In Section 4 of the published paper, the verification condition for the Mergesort example is incorrect; the version available here corrects the lapses. The rest of the paper is, in any case, unaffected.
  • QFIS is a prototype tool for the theory of sequences described in the paper. You can download QFIS and read its user manual.
@InProceedings{Fur10-ATVA10,
  author = {Carlo A. Furia},
  title = {What's Decidable About Sequences?},
  booktitle = {Proceedings of the 8th International Symposium on Automated Technology for Verification and Analysis (ATVA'10)},
  pages = {128--142},
  year = {2010},
  editor = {Ahmed Bouajjani and Wei-Ngan Chin},
  volume = {6252},
  series = {Lecture Notes in Computer Science},
  month = {September},
  publisher = {Springer},
  acceptancerate = {41\%}
}

Dino Mandrioli, Stephen Fickas, Carlo A. Furia, Mehdi Jazayeri, Matteo Rossi, and Michal Young.
SCORE: the first student contest on software engineering.
SIGSOFT Software Engineering Notes, 35(4):24–30, July 2010.

Abstract
The Student Contest on Software Engineering (SCORE), organized for the first time in conjunction with the International Conference on Software Engineering (ICSE) 2009, attracted 50 student teams from around the world, produced an impressive and varied set of projects, and earned appreciative comments from participants and even from teams who chose not to submit their results to the competition. It was a remarkable success, but not without problems and setbacks. In this article we explain the objectives, constraints, and design philosophy of SCORE, particularly as they distinguish it from the tradition of computer science contests focused more narrowly on programming. We also recount key approaches taken to design and management of this novel kind of contest, the difficulties we met (some still outstanding), and the lessons learned.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
@Article{MFFJRY10-SEN10,
  author = {Dino Mandrioli and Stephen Fickas and Carlo A. Furia and Mehdi Jazayeri and Matteo Rossi and Michal Young},
  title = {{SCORE}: the first student contest on software engineering},
  journal = {SIGSOFT Software Engineering Notes},
  year = {2010},
  volume = {35},
  number = {4},
  pages = {24--30},
  month = {July}
}

Nadia Polikarpova, Carlo A. Furia, and Bertrand Meyer.
Specifying Reusable Components.
In Proceedings of the 3rd International Conference on Verified Software: Theories, Tools, and Experiments (VSTTE'10).
Lecture Notes in Computer Science, 6217:127–141, Springer-Verlag, August 2010.

Abstract
Reusable software components need expressive specifications. This paper outlines a rigorous foundation of model-based contracts, a method to equip classes with strong contracts that support accurate design, implementation, and formal verification of reusable components. Model-based contracts conservatively extend the classic Design by Contract approach with a notion of model, which underpins the precise definitions of such concepts as abstract object equivalence and specification completeness. Experiments applying model-based contracts to libraries of data structures suggest that the method enables accurate specification of practical software.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.

Notes
Some technical details omitted from the paper can be found in this technical report.

@InProceedings{PFM10-VSTTE10,
  author = {Nadia Polikarpova and Carlo A. Furia and Bertrand Meyer},
  title = {Specifying Reusable Components},
  booktitle = {Proceedings of the 3rd International Conference on Verified Software: Theories, Tools, and Experiments (VSTTE'10)},
  pages = {127--141},
  year = {2010},
  editor = {Gary T. Leavens and Peter O'Hearn and Sriram Rajamani},
  volume = {6217},
  series = {Lecture Notes in Computer Science},
  month = {August},
  publisher = {Springer},
  acceptancerate = {36\%}
}

Yi Wei, Yu Pei, Carlo A. Furia, Lucas S. Silva, Stefan Buchholz, Bertrand Meyer, and Andreas Zeller.
Automated Fixing of Programs with Contracts.
In Proceedings of the 19th International Symposium on Software Testing and Analysis (ISSTA'10).
Pages 61–72, ACM, July 2010.

Abstract
In program debugging, finding a failing run is only the first step; what about correcting the fault? Can we automate the second task as well as the first? The AutoFix-E tool automatically generates and validates fixes for software faults. The key insights behind AutoFix-E are to rely on contracts present in the software to ensure that the proposed fixes are semantically sound, and on state diagrams using an abstract notion of state based on the boolean queries of a class.
Out of 42 faults found by an automatic testing tool in two widely used Eiffel libraries, AutoFix-E proposes successful fixes for 16 faults. Submitting some of these faults to experts shows that several of the proposed fixes are identical or close to fixes proposed by humans.
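
The "abstract notion of state based on the boolean queries of a class" can be pictured as the tuple of values of the class's argumentless boolean queries; the Python sketch below is only a schematic rendering with invented names (AutoFix-E itself works on Eiffel code):

# Schematic illustration with invented names, not AutoFix-E itself.
class BoundedQueue:
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.items: list[int] = []
    # boolean queries of the class:
    def is_empty(self) -> bool:
        return not self.items
    def is_full(self) -> bool:
        return len(self.items) >= self.capacity

def abstract_state(q: BoundedQueue) -> tuple:
    # candidate fixes are proposed and validated with respect to these
    # abstract states rather than to concrete object contents
    return (q.is_empty(), q.is_full())

print(abstract_state(BoundedQueue(2)))   # (True, False)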

@InProceedings{WPFSBMZ10-ISSTA10,
  author = {Yi Wei and Yu Pei and Carlo A. Furia and Lucas S. Silva and Stefan Buchholz and Bertrand Meyer and Andreas Zeller},
  title = {Automated Fixing of Programs with Contracts},
  booktitle = {Proceedings of the 19th International Symposium on Software Testing and Analysis (ISSTA'10)},
  editor    = {Paolo Tonella and Alessandro Orso},
  pages = {61--72},
  year = {2010},
  month = {July},
  publisher = {ACM},
  acceptancerate = {23\%}
}

Carlo A. Furia, Dino Mandrioli, Angelo Morzenti, and Matteo Rossi.
Modeling Time in Computing: a taxonomy and a comparative survey.
ACM Computing Surveys, 42(2):1–59, February 2010.

Abstract
The increasing relevance of areas such as real-time and embedded systems, pervasive computing, hybrid systems control, and biological and social systems modeling is bringing a growing attention to the temporal aspects of computing, not only in the computer science domain, but also in more traditional fields of engineering.
This article surveys various approaches to the formal modeling and analysis of the temporal features of computer-based systems, with a level of detail that is also suitable for nonspecialists. In doing so, it provides a unifying framework, rather than just a comprehensive list of formalisms.
The article first lays out some key dimensions along which the various formalisms can be evaluated and compared. Then, a significant sample of formalisms for time modeling in computing are presented and discussed according to these dimensions. The adopted perspective is, to some extent, historical, going from "traditional" models and formalisms to more modern ones.

@Article{FMMR10-CSUR10,
  author = {Carlo A. Furia and Dino Mandrioli and Angelo Morzenti and Matteo Rossi},
  title = {Modeling Time in Computing: a taxonomy and a comparative survey},
  journal = {ACM Computing Surveys},
  volume = {42},
  number = {2},
  year = {2010},
  pages = {1--59},
  publisher = {ACM},
  month = {February},
  note = {Article 6}
}

Luca Cavallaro, Elisabetta Di Nitto, Carlo A. Furia, and Matteo Pradella.
A Tile-based Approach for Self-assembling Service Compositions.
In Proceedings of the 15th IEEE International Conference on Engineering of Complex Computer Systems (ICECCS'10).
Pages 43–52, IEEE, March 2010.

Abstract
This paper presents a novel approach to the design of self-adaptive service-oriented applications based on a new model called service tiles. The approach allows designers to develop a service-oriented system by building an assembly of component services that accomplishes the given goal. The assembly is computed automatically starting from the specification of a subset of the whole system, a few constraints, and the goals the application should fulfill. An application designed according to the service-tile model can also dynamically self-adapt by replacing, in part or entirely, services in the assembly whenever they fail or the application context changes. The service-tile design technique has been implemented in a prototype and some experiments with several examples demonstrate the feasibility of the approach and its practical efficiency.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
@InProceedings{CDFP10-ICECCS10,
  author = {Luca Cavallaro and Elisabetta {Di Nitto} and Carlo A. Furia and Matteo Pradella},
  title = {A Tile-based Approach for Self-assembling Service Compositions},
  booktitle = {Proceedings of the 15th IEEE International Conference on Engineering of Complex Computer Systems (ICECCS'10)},
  pages = {43--52},
  year = {2010},
  month = {March},
  publisher = {IEEE},
  acceptancerate = {23\%}
}

Silvia Bindelli, Elisabetta Di Nitto, Carlo A. Furia, and Matteo Rossi.
Using Compositionality to Formally Model and Analyze Systems Built of a High Number of Components.
In Proceedings of the 15th IEEE International Conference on Engineering of Complex Computer Systems (ICECCS'10).
Pages 85–94, IEEE, March 2010.

Abstract
When dependability of systems with a large number of components is a concern, being able to model and analyze their properties, especially non-functional ones, in a formal and automated way becomes essential. Often, however, the application of formal methods and automated reasoning is seen by practitioners as complex and time consuming. Compositional techniques can help modify this belief.
In this paper we show how a compositional modeling and verification technique can be applied to the analysis of distributed systems with numerous interacting nodes. We automate the proof by exploiting a SAT-based tool. We demonstrate the validity of the resulting approach by applying it to an autonomic service-based system that manages, in a coordinated peer-to-peer manner, electricity consumption in a geographical area. In particular, we show that in this case the time needed for performing the proof is remarkably shorter than in the case in which we adopt a non-compositional approach.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
@InProceedings{BDFR10-ICECCS10,
  author = {Silvia Bindelli and Elisabetta {Di Nitto} and Carlo A. Furia and Matteo Rossi},
  title = {Using Compositionality to Formally Model and Analyze Systems Built of a High Number of Components},
  booktitle = {Proceedings of the 15th IEEE International Conference on Engineering of Complex Computer Systems (ICECCS'10)},
  pages = {85--94},
  year = {2010},
  month = {March},
  publisher = {IEEE},
  acceptancerate = {23\%}
}

Marcello M. Bersani, Carlo A. Furia, Matteo Pradella, and Matteo Rossi.
Integrated Modeling and Verification of Real-Time Systems through Multiple Paradigms.
In Proceedings of the 7th IEEE International Conference on Software Engineering and Formal Methods (SEFM'09).
Pages 13–22, IEEE Computer Society Press, November 2009.

Abstract
In this paper we present a technique to model different aspects of the same system with different formalisms, while keeping the various models tightly integrated with one another. In a multi-paradigm approach to modeling, formalisms with different natures are used in combination to describe complementary parts and aspects of the system. This can have a beneficial impact on the modeling activity, as different paradigms can be better suited to describe different aspects of complex systems. While each paradigm provides a different view on the many facets of the system, it is of paramount importance that a coherent comprehensive model emerges from the combination of the various partial descriptions. Our approach leverages the flexibility provided by a bounded satisfiability checker to encode the verification problem of the integrated model in the Boolean satisfiability (SAT) problem; this allows users to carry out formal verification activities both on the whole model and on parts thereof. The effectiveness of the approach is illustrated through the example of a monitoring system.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.

Notes
Some technical details omitted from the paper can be found in this technical report.

@InProceedings{BFPR09-SEFM09,
  author = {Marcello M. Bersani and Carlo A. Furia and Matteo Pradella and Matteo Rossi},
  title = {Integrated Modeling and Verification of Real-Time Systems through Multiple Paradigms},
  booktitle = {Proceedings of the 7th IEEE International Conference on Software Engineering and Formal Methods (SEFM'09)},
  pages = {13--22},
  year = {2009},
  month = {November},
  publisher = {IEEE Computer Society Press},
  acceptancerate = {35\%}
}

Carlo A. Furia, Matteo Pradella, and Matteo Rossi.
Comments on "Temporal Logics for Real-Time System Specification".
ACM Computing Surveys, 41(2):1–5, February 2009.

Abstract
The article "Temporal Logics for Real-Time System Specification" surveys some of the relevant literature dealing with the use of temporal logics for the specification of real-time systems. Unfortunately, it introduces some imprecisions that mightcreate some confusion in the reader. While a certain degree of informality is certainly useful when addressing a broad audience, imprecisions can negatively impact the legibility of the exposition. We clarify some of its remarks on a few topics, in an effort to contribute to the usefulness of the survey for the reader.

  • The paper in PDF format (updated, improved, and extended text, also published as technical report) and its DOI.

Notes
Publication timeline: Received September 2005; revised July 2007, April 2008; accepted April 2008.

@Article{FPR09-CSUR09,
  author = {Carlo A. Furia and Matteo Pradella and Matteo Rossi},
  title = {Comments on `{T}emporal Logics for Real-Time System Specification'},
  journal = {ACM Computing Surveys},
  volume = {41},
  number = {2},
  year = {2009},
  pages = {1--5},
  publisher = {ACM},
  month = {February},
  note = {Extended version as Technical Report 2008.7, Dipartimento di Elettronica e Informazione, Politecnico di Milano, April 2008}
}

Carlo A. Furia and Paola Spoletini.
Practical Efficient Modular Linear-Time Model-Checking.
In Proceedings of the 6th International Symposium on Automated Technology for Verification and Analysis (ATVA'08).
Lecture Notes in Computer Science, 5311:408–417, Springer-Verlag, October 2008.

Abstract
This paper shows how the modular structure of composite systems can guide the state-space exploration in explicit-state linear-time model-checking and make it more efficient in practice. Given a composite system where every module has input and output variables (and variables of different modules can be connected), the technique determines, through heuristics based on a graph-theoretical analysis of the modular structure, a total ordering according to which variables are generated. The technique is shown to outperform standard exploration techniques (that do not take the modular structure information into account) by several orders of magnitude in experiments with Spin models of MTL formulas.
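
One simple instance of such a graph-based ordering (a generic illustration, not the paper's actual heuristics) is to generate a module's input variables before its outputs, which amounts to a topological sort of the module connection graph:

# Generic illustration; module names and connections are invented.
# Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter

connections = {               # module -> modules whose outputs feed its inputs
    "sensor": set(),
    "controller": {"sensor"},
    "actuator": {"controller"},
}
order = list(TopologicalSorter(connections).static_order())
print(order)                  # ['sensor', 'controller', 'actuator']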

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
@InProceedings{FS08-ATVA08,
  author = {Carlo A. Furia and Paola Spoletini},
  title = {Practical Efficient Modular Linear-Time Model-Checking},
  booktitle = {Proceedings of the 6th International Symposium on Automated Technology for Verification and Analysis (ATVA'08)},
  pages = {408--417},
  year = {2008},
  editor = {Sungdeok (Steve) Cha and Jin-Young Choi and Monzoo Kim and Insup Lee and Mahesh Viswanathan},
  volume = {5311},
  series = {Lecture Notes in Computer Science},
  month = {October},
  publisher = {Springer-Verlag},
  acceptancerate = {34\%}
}

Carlo A. Furia and Matteo Rossi.
MTL with Bounded Variability: Decidability and Complexity.
In Proceedings of the 6th International Conference on Formal Modelling and Analysis of Timed Systems (FORMATS'08).
Lecture Notes in Computer Science, 5215:109–123, Springer-Verlag, September 2008.

Abstract
This paper investigates the properties of Metric Temporal Logic (MTL) over models in which time is dense but phenomena are constrained to have bounded variability. Contrary to the case of generic dense-time behaviors, MTL is proved to be fully decidable over models with bounded variability, if the variability bound is given. In these decidable cases, MTL complexity is shown to match that of simpler decidable logics such as MITL. On the contrary, MTL is undecidable if all behaviors with variability bounded by some generic constant are considered, but with an undecidability degree that is lower than in the case of generic behaviors.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.

Notes
Some technical details omitted from the paper can be found in this technical report.

@InProceedings{FR08-FORMATS08,
  author = {Carlo A. Furia and Matteo Rossi},
  title = {{MTL} with Bounded Variability: Decidability and Complexity},
  booktitle = {Proceedings of the 6th International Conference on Formal Modelling and Analysis of Timed Systems (FORMATS'08)},
  pages = {109--123},
  year = {2008},
  editor = {Franck Cassez and Claude Jard},
  volume = {5215},
  series = {Lecture Notes in Computer Science},
  month = {September},
  publisher = {Springer-Verlag},
  acceptancerate = {45\%}
}

Carlo A. Furia, Matteo Pradella, and Matteo Rossi.
Practical Automated Partial Verification of Multi-Paradigm Real-Time Models.
In Proceedings of the 10th International Conference on Formal Engineering Methods (ICFEM'08).
Lecture Notes in Computer Science, 5256:298–317, Springer-Verlag, October 2008.

Abstract
This article introduces a fully automated verification technique that permits the analysis of real-time systems described using a continuous notion of time and a mixture of operational (i.e., automata-based) and descriptive (i.e., logic-based) formalisms. The technique relies on the reduction, under reasonable assumptions, of the continuous-time verification problem to its discrete-time counterpart. This reconciles in a viable and effective way the dense/discrete and operational/descriptive dichotomies that are often encountered in practice when it comes to specifying and analyzing complex critical systems. The article investigates the applicability of the technique through a significant example centered on a communication protocol. Concurrent runs of the protocol are formalized by parallel instances of a Timed Automaton, while the synchronization rules between these instances are specified through Metric Temporal Logic formulas, thus creating a multi-paradigm model. Verification tests run on this model using a bounded satisfiability checker implementing the technique show consistent results and interesting performance.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.

Notes
Some technical details omitted from the paper can be found in this technical report.

@InProceedings{FPR08-ICFEM08,
  author = {Carlo A. Furia and Matteo Pradella and Matteo Rossi},
  title = {Practical Automated Partial Verification of Multi-Paradigm Real-Time Models},
  booktitle = {Proceedings of the 10th International Conference on Formal Engineering Methods (ICFEM'08)},
  pages = {298--317},
  year = {2008},
  editor = {Shaoying Liu and Tom Maibaum and Keijiro Araki},
  volume = {5256},
  series = {Lecture Notes in Computer Science},
  month = {October},
  publisher = {Springer-Verlag},
  acceptancerate = {32\%}
}

Carlo A. Furia, Marco Mazzucchelli, Paola Spoletini, and Mara Tanelli.
Towards the Exhaustive Verification of Real-Time Aspects in Controller Implementation.
In Proceedings of the 9th IEEE International Symposium on Computer-Aided Control System Design (CACSD'08).
Pages 1265–1270, IEEE Press, September 2008.

Abstract
In industrial applications, the number of final products endowed with real-time automatic control systems that manage safety-critical situations has dramatically increased. Thus, it is of growing importance that the control system design flow encompasses also its translation into software code and its embedding into a hardware and software network. In this paper, a tool-supported approach to the formal analysis of real-time aspects in controller implementation is proposed. The analysis can ensure that some desired properties of the control loop are preserved in its implementation on a distributed architecture. Moreover, the tool provides as output information which can be used to approach straightforwardly some design problems, such as hardware sizing in the final implementation.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.

Notes
Several technical details omitted from the paper can be found in this technical report.

@InProceedings{FMST08-CACSD08,
  author = {Carlo A. Furia and Marco Mazzucchelli and Paola Spoletini and Mara Tanelli},
  title = {Towards the Exhaustive Verification of Real-Time Aspects in Controller Implementation},
  booktitle = {Proceedings of the 9th IEEE International Symposium on Computer-Aided Control System Design (CACSD'08)},
  pages = {1265--1270},
  year = {2008},
  month = {September},
  publisher = {IEEE Press},
  note = {CACSD'08 is part of the 2nd IEEE Multi-conference on Systems and Control}
}

Carlo A. Furia and Paola Spoletini.
Tomorrow and All Our Yesterdays: MTL Satisfiability over the Integers.
In Proceedings of the 5th International Colloquium on Theoretical Aspects of Computing (ICTAC'08).
Lecture Notes in Computer Science, 5160:126–140, Springer-Verlag, September 2008.

Abstract
We investigate the satisfiability problem for metric temporal logic (MTL) with both past and future operators over linear discrete bi-infinite time models isomorphic to the integer numbers, where time is unbounded both in the future and in the past. We provide a technique to reduce satisfiability over the integers to satisfiability over the well-known mono-infinite time model of natural numbers, and we show how to implement the technique through an automata-theoretic approach. We also prove that MTL satisfiability over the integers is EXPSPACE-complete, hence the given algorithm is optimal in the worst case.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.

Notes
Some technical details that are omitted in the paper can be found in this technical report.

@InProceedings{FS08-ICTAC08,
  author = {Carlo A. Furia and Paola Spoletini},
  title = {Tomorrow and All Our Yesterdays: {MTL} Satisfiability over the Integers},
  booktitle = {Proceedings of the 5th International Colloquium on Theoretical Aspects of Computing (ICTAC'08)},
  pages = {126--140},
  year = {2008},
  editor = {John S. Fitzgerald and Anne E. Haxthausen and Husnu Yenigun},
  volume = {5160},
  series = {Lecture Notes in Computer Science},
  month = {September},
  publisher = {Springer-Verlag},
  acceptancerate = {38\%}
}

Carlo A. Furia, Matteo Pradella, and Matteo Rossi.
Automated Verification of Dense-Time MTL Specifications via Discrete-Time Approximation.
In Proceedings of the 15th International Symposium on Formal Methods (FM'08).
Lecture Notes in Computer Science, 5014:132–147, Springer-Verlag, May 2008.

Abstract
This paper presents a verification technique for dense-time MTL based on discretization. The technique reduces the validity problem of MTL formulas from dense to discrete time, through the notion of sampling invariance, introduced in previous work [FR06]. Since the reduction is from an undecidable problem to a decidable one, the technique is necessarily incomplete, so it fails to provide conclusive answers for some formulas. The paper discusses this shortcoming and hints at how it can be mitigated in practice. The verification technique has been implemented on top of the Zot tool for discrete-time bounded validity checking; the paper also reports on in-the-small experiments with the tool, which show some results that are promising in terms of performance.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.

Notes
Some technical details that are omitted in the paper can be found in this technical report.

@InProceedings{FPR08-FM08,
  author = {Carlo A. Furia and Matteo Pradella and Matteo Rossi},
  title = {Automated Verification of Dense-Time {MTL} Specifications via Discrete-Time Approximation},
  booktitle = {Proceedings of the 15th International Symposium on Formal Methods (FM'08)},
  pages = {132--147},
  year = {2008},
  editor = {Jorge Cu{\'e}llar and Tom Maibaum and Kaisa Sere},
  volume = {5014},
  series = {Lecture Notes in Computer Science},
  month = {May},
  publisher = {Springer-Verlag},
  acceptancerate = {21\%}
}

Carlo A. Furia and Matteo Rossi.
No Need To Be Strict: on the expressiveness of metric temporal logics with (non-)strict operators.
Bulletin of the European Association for Theoretical Computer Science, 92:150–160, June 2007.

Abstract
The until modality of temporal logic "A until B" is called strict in its first argument when it does not constrain the value of the first argument A at the instant at which the formula is evaluated. In this paper, we show that linear metric temporal logics with bounded until modalities that are non-strict in the first argument are as expressive as those with strict modalities, when interpreted over non-Zeno dense-time models.

  • The extended version of the paper in manuscript PDF format (updated, improved, and extended text); and the BEATCS version.
@Article{FR07-BEATCS,
  author = {Carlo A. Furia and Matteo Rossi},
  title = {No Need To Be Strict: on the expressiveness of metric temporal logics with (non-)strict operators},
  journal = {Bulletin of the European Association for Theoretical Computer Science},
  year = {2007},
  volume = {92},
  pages = {150--160},
  month = {June}
}
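
For a flavor of the statement, the easy direction of the equivalence is the standard identity defining a non-strict until in terms of a strict one (written here in LaTeX notation for the qualitative, unbounded case; \mathcal{U} denotes the until that is non-strict in its first argument and \mathcal{U}_s its strict counterpart):

  % a non-strict until defined from a strict until (qualitative case)
  A \,\mathcal{U}\, B \;\equiv\; B \,\lor\, \bigl( A \,\land\, (A \,\mathcal{U}_s\, B) \bigr)

The non-trivial part, which is the paper's contribution, is the converse direction: over non-Zeno dense-time models, bounded strict modalities can in turn be expressed using only non-strict ones, so strictness adds no expressive power there. (The identity above is only an illustration of the setting; the metric bounded case requires adjusting the interval bounds.)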

Carlo A. Furia and Matteo Rossi.
On the Expressiveness of MTL Variants over Dense Time.
In Proceedings of the 5th International Conference on Formal Modelling and Analysis of Timed Systems (FORMATS'07).
Lecture Notes in Computer Science, 4763:163–178, Springer-Verlag, October 2007.

Abstract
The basic modal operator bounded until of Metric Temporal Logic (MTL) comes in several variants. In particular it can be strict (when it does not constrain the current instant) or not, and matching (when it requires its two arguments to eventually hold together) or not. This paper compares the relative expressiveness of the resulting MTL variants over dense time. We prove that the expressiveness is not affected by the variations when considering non-Zeno interpretations and arbitrary nesting of temporal operators. On the contrary, the expressiveness changes for flat (i.e., without nesting) formulas, or when Zeno interpretations are allowed.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.

Notes
Some technical details that are omitted in the paper can be found in this technical report.

@InProceedings{FR07-FORMATS07,
  author = {Carlo A. Furia and Matteo Rossi},
  title = {On the Expressiveness of {MTL} Variants over Dense Time},
  booktitle = {Proceedings of the 5th International Conference on Formal Modelling and Analysis of Timed Systems (FORMATS'07)},
  pages = {163--178},
  year = {2007},
  editor = {Jean-Fran{\c c}ois Raskin and P. S. Thiagarajan},
  volume = {4763},
  series = {Lecture Notes in Computer Science},
  month = {October},
  publisher = {Springer-Verlag},
  acceptancerate = {45\%}
}

Carlo A. Furia, Matteo Rossi, Dino Mandrioli, and Angelo Morzenti.
Automated Compositional Proofs for Real-Time Systems.
Theoretical Computer Science, 376(3):164–184, May 2007.

Abstract
We present a framework for formally proving that the composition of the behaviors of the different parts of a complex, real-time system ensures a desired global specification of the overall system. The framework is based on a simple compositional rely/guarantee circular inference rule, plus a methodology concerning the integration of the different parts into a whole system. The reference specification language is the TRIO metric linear temporal logic.
The novelty of our approach with respect to existing compositional frameworks — most of which do not deal explicitly with real-time requirements — consists mainly in its generality and abstraction from any assumptions about the underlying computational model and from any semantic characterizations of the temporal logic language used in the specification. Moreover, the framework deals equally well with continuous and discrete time. It is supported by a tool, implemented on top of the proof-checker PVS, to perform deduction-based verification through theorem-proving of modular real-time axiom systems.
As an example of application, we show the verification of a real-time version of the old-fashioned but still relevant "benchmark" of the dining philosophers problem.

  • The paper in manuscript PDF format (updated, improved, and extended text) and its DOI.

Notes

  • The paper is an extended version of the conference paper with the same title.
  • The paper is published in a special issue of TCS with invited papers from FASE 2004 and 2005.
@Article{FRMM07-TCS07,
  author = {Carlo A. Furia and Matteo Rossi and Dino Mandrioli and Angelo Morzenti},
  title = {Automated Compositional Proofs for Real-Time Systems},
  journal = {Theoretical Computer Science},
  year = {2007},
  volume = {376},
  number = {3},
  pages = {164--184}
}

Carlo A. Furia, Matteo Rossi, and Dino Mandrioli.
Modeling the Environment in Software-Intensive Systems.
In Proceedings of the Workshop on Modeling in Software Engineering (MISE'07), May 2007.
A Workshop of the 29th International Conference on Software Engineering (ICSE'07).

Abstract
In this paper we argue that the modeling activity in the development of software-intensive systems should formalize as much as possible of the environment in which the application being developed operates. We also show that a rich formal model of the environment helps developers clearly state requirements that might typically be considered intrinsically informal (or non-formalizable in general). To illustrate this point, we show how a requirement for "orderly safe traffic" in a traffic system can be modeled, and we briefly discuss the benefits thereof.

  • The paper in manuscript PDF format (updated, improved, and extended text) and its DOI.
@InProceedings{FRM07-MISE07,
  author = {Carlo A. Furia and Matteo Rossi and Dino Mandrioli},
  title = {Modeling the Environment in Software-Intensive Systems},
  booktitle = {Proceedings of the Workshop on Modeling in Software Engineering (MISE'07)},
  year = {2007},
  month = {May},
  note = {A Workshop of the 29th International Conference on Software Engineering (ICSE'07)},
  acceptancerate = {43\%}
}

Carlo A. Furia, Angelo Morzenti, Matteo Pradella, and Matteo Rossi.
Comments on "A Temporal Logic for Real-Time System Specification".
IEEE Transactions on Software Engineering, 32(6):424–427, June 2006. (Comments paper).

Abstract
The paper "An Interval Logic for Real-Time System Specification" presents the TILCO specification language and compares it to other existing similar languages. In this comment, we show that several of the logic formulas used for the comparison are flawed and/or overly complicated, and we explain why, in this respect, the comparison is moot.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.

Notes
The paper comments on: R. Mattolini and P. Nesi, "A Temporal Logic for Real-Time System Specification". IEEE Transactions on Software Engineering, 27(3):208–227, March 2001.

@Article{FMPR06-TSE06comments,
  author = {Carlo A. Furia and Angelo Morzenti and Matteo Pradella and Matteo G. Rossi},
  title = {Comments on `{A} Temporal Logic for Real-Time System Specification'},
  journal = {IEEE Transactions on Software Engineering},
  year = {2006},
  volume = {32},
  number = {6},
  pages = {424--427},
  month = {June},
  note = {Comments paper}
}

Carlo A. Furia and Matteo Rossi.
Integrating Discrete- and Continuous-Time Metric Temporal Logics Through Sampling.
In Proceedings of the 4th International Conference on Formal Modelling and Analysis of Timed Systems (FORMATS'06).
Lecture Notes in Computer Science, 4202:215–229, Springer-Verlag, September 2006.

Abstract
Real-time systems usually encompass parts that are best described by a continuous-time model, such as physical processes under control, together with other components that are more naturally formalized by a discrete-time model, such as digital computing modules. Describing such systems in a unified framework based on metric temporal logic requires integrating formulas that are interpreted over discrete and continuous time.
In this paper, we tackle this problem with reference to the metric temporal logic TRIO, that admits both a discrete-time and a continuous-time semantics. We identify sufficient conditions under which TRIO formulas have a consistent truth value when moving from continuous-time to discrete-time interpretations, or vice versa. These conditions basically involve the restriction to a proper subset of the TRIO language and a requirement on the finite variability over time of the basic items in the specification formulas. We demonstrate the approach with an example of specification and verification.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.

Notes
Some technical details that are omitted in the paper can be found in this technical report.

@InProceedings{FR06-FORMATS06,
  author = {Carlo A. Furia and Matteo Rossi},
  title = {Integrating Discrete- and Continuous-Time Metric Temporal Logics Through Sampling},
  booktitle = {Proceedings of the 4th International Conference on Formal Modelling and Analysis of Timed Systems (FORMATS'06)},
  pages = {215--229},
  year = {2006},
  editor = {Eugene Asarin and Patricia Bouyer},
  volume = {4202},
  series = {Lecture Notes in Computer Science},
  month = {September},
  publisher = {Springer-Verlag},
  acceptancerate = {44\%}
}

Carlo A. Furia, Matteo Rossi, Dino Mandrioli, and Angelo Morzenti.
Automated Compositional Proofs for Real-Time Systems.
In Proceedings of the 8th International Conference on Fundamental Approaches to Software Engineering (FASE'05).
Lecture Notes in Computer Science, 3442:326–340, Springer-Verlag, March 2005.

Abstract
We present a framework for formally proving that the composition of the behaviors of the different parts of a complex, real-time system ensures a desired global specification of the overall system. The framework is based on a simple compositional rely/guarantee circular inference rule, plus a small set of conditions concerning the integration of the different parts into a whole system. The reference specification language is the TRIO metric linear temporal logic.
The novelty of our approach with respect to existing compositional frameworks — most of which do not deal explicitly with real-time requirements — consists mainly in its generality and abstraction from any assumptions about the underlying computational model and from any semantic characterizations of the temporal logic language used in the specification. Moreover, the framework deals equally well with continuous and discrete time. It is supported by a tool, implemented on top of the proof-checker PVS, to perform deduction-based verification through theorem-proving of modular real-time axiom systems.
As an example of application, we show the verification of a real-time version of the old-fashioned but still relevant "benchmark" of the dining philosophers problem.

  • An extended version containing appendices with proofs and details of the case study in PDF format (updated, improved, and extended text), and the DOI of the published shorter version.

Notes
The paper published in the proceedings contains a minor error due to a variation of the time-progression operator ->> that includes the current instant (denoted ->>_i), introduced and used to specify the case study. The inference rule of Proposition 1 no longer holds for this variation of the operator: Lemma 1 cannot be proved, since one can no longer rule out the accumulation of the future intervals of the NowOn operator (to see this, consider a right-open interval and what happens at its right boundary). Nonetheless, the case study can still be specified using the base ->> operator, with minimal changes. The versions of the paper available here are, of course, the patched ones.

@InProceedings{FRMM05-FASE05,
  author = {Carlo A. Furia and Matteo Rossi and Dino Mandrioli and Angelo Morzenti},
  title = {Automated Compositional Proofs for Real-Time Systems},
  booktitle = {Proceedings of the 8th International Conference on Fundamental Approaches to Software Engineering (FASE'05)},
  pages = {326--340},
  year = {2005},
  editor = {Maura Cerioli},
  volume = {3442},
  series = {Lecture Notes in Computer Science},
  month = {March},
  publisher = {Springer-Verlag},
  note = {Conference held as part of the Joint European Conferences on Theory and Practice of Software (ETAPS'05)},
  acceptancerate = {22\%}
}

Andrea Matta, Carlo A. Furia, and Matteo Rossi.
Semi-Formal and Formal Models Applied to Flexible Manufacturing Systems.
In Proceedings of the 19th International Symposium on Computer and Information Sciences (ISCIS'04).
Lecture Notes in Computer Science, 3280:718–728, Springer-Verlag, October 2004.

Abstract
Flexible Manufacturing Systems (FMSs) are adopted to process different goods in different mix ratios allowing firms to react quickly and efficiently to changes in products and production targets (e.g. volumes, etc.). Due to their high costs, FMSs require careful design, and their performance must be precisely evaluated before final deployment. To support and guide the design phase, this paper presents a UML semi-formal model for FMSs that captures the most prominent aspects of these systems. For a deeper analysis, two refinements could then be derived from the UML intermediate description: simulation components for "empirical" analysis, and an abstract formal model that is suitable for formal verification. In this paper we focus on the latter, based on the TRIO temporal logic. In particular, we hint at a methodology to derive TRIO representations from the corresponding UML descriptions, and apply it to the case of FMSs. A subset of the resulting formal model is then used to verify, through logic deduction, a simple property of the FMS.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.
@InProceedings{MFR04-ISCIS04,
  author = {Andrea Matta and Carlo A. Furia and Matteo Rossi},
  title = {Semi-Formal and Formal Models Applied to Flexible Manufacturing Systems},
  booktitle = {Proceedings of the 19th International Symposium on Computer and Information Sciences (ISCIS'04)},
  pages = {718--728},
  year = {2004},
  editor = {Cevdet Aykanat and Tugrul Dayar and Ibrahim Korpeoglu},
  volume = {3280},
  series = {Lecture Notes in Computer Science},
  month = {October},
  publisher = {Springer-Verlag},
  acceptancerate = {29\%}
}

Carlo A. Furia and Matteo Rossi.
A Compositional Framework for Formally Verifying Modular Systems.
In Proceedings of the International Workshop on Test and Analysis of Component Based Systems (TACoS'04).
Electronic Notes in Theoretical Computer Science, 116:185–198, Elsevier, March 2004.

Abstract
We present a tool-supported framework for proving that the composition of the behaviors of the separate parts of a complex system ensures a desired global property of the overall system. A compositional inference rule is formally introduced and encoded in the logic of the PVS theorem prover. Methodological considerations on the usage of the inference rule are presented, and the framework is then used to prove a meaningful property of a simple, but significant, control system.

  • The paper in PDF format (updated, improved, and extended text) and its DOI.

Notes
This paper sketches the inference rule that is also presented, more extensively and within a broader framework, in the FASE'05 paper, and then discusses some details of how it has been implemented in the PVS tools for TRIO.

@InProceedings{FR04-TACoS04,
  author = {Carlo A. Furia and Matteo Rossi},
  title = {A Compositional Framework for Formally Verifying Modular Systems},
  booktitle = {Proceedings of the International Workshop on Test and Analysis of Component Based Systems (TACoS'04)},
  pages = {185--198},
  year = {2004},
  volume = {116},
  series = {Electronic Notes in Theoretical Computer Science},
  month = {January},
  publisher = {Elsevier}
}

Richard Torkar, Carlo A. Furia, and Robert Feldt.
Bayesian Data Analysis for Software Engineering.
In Companion Proceedings of the 43rd International Conference on Software Engineering (ICSE).
Pages 328–329, IEEE Computer Society, May 2021.

  • The abstract of this technical briefing (tutorial) is available at this DOI.
  • A video playlist with recordings of the main content of the tutorial.
  • A GitHub repo with the material to try the hands-on exercise shown during the tutorial.
  • There's also the live recording of the actual technical briefing, which includes our answers to questions from the public (as well as the inevitable microphone failure, fortunately only at the very end :-)
@INPROCEEDINGS {TFF21-ICSE21-Tutorial,
   author = {Richard Torkar and Carlo A. Furia and Robert Feldt},
   booktitle = {43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)},
   title = {{B}ayesian Data Analysis for Software Engineering},
   year = {2021},
   pages = {328-329},
   publisher = {IEEE Computer Society},
   note = {Abstract of technical briefing}
}
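
For readers who want a taste of the approach before going through the tutorial material, the following is a minimal, self-contained sketch of the kind of analysis discussed (the numbers and the scenario are made up for illustration, and the snippet is not part of the tutorial's hands-on material, which lives in the GitHub repo linked above): a conjugate Beta-Binomial model estimates the probability that a change actually reduced a test suite's failure rate.

from scipy import stats

# Illustrative (made-up) data: test failures before and after a change.
failures_before, runs_before = 18, 120
failures_after, runs_after = 9, 110

# With a uniform Beta(1, 1) prior, the posterior of each failure rate is
# Beta(1 + failures, 1 + successes) by conjugacy of the Beta-Binomial model.
post_before = stats.beta(1 + failures_before, 1 + runs_before - failures_before)
post_after = stats.beta(1 + failures_after, 1 + runs_after - failures_after)

# Probability that the change reduced the failure rate, estimated by sampling
# the two independent posteriors.
samples = 100_000
prob_improvement = (post_after.rvs(samples) < post_before.rvs(samples)).mean()
print(f"P(failure rate decreased) = {prob_improvement:.2f}")

A fuller analysis would also look at the size of the difference, not just its sign, and at the sensitivity of the conclusion to the choice of prior.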

Carlo A. Furia.
Testing, Fixing, and Proving with Contracts.
In Proceedings of the 9th International Conference on Tests & Proofs (TAP) – A STAF conference.
Lecture Notes in Computer Science, 9154:XIII–XV, July 2015.

This is an invited tutorial I gave at the TAP conference organized by Jasmin Blanchette and Nikolai Kosmatov, where I presented and demoed the most important tools in EVE.

  • An abstract of the talk, with bibliographic references, is available online in the front matter of TAP's proceedings.
  • The slides of the talk are available on SlideShare.
@InProceedings{TAP15-withcontracts,
  author = {Carlo A. Furia},
  title = {Testing, Fixing, and Proving with Contracts},
  booktitle = {Proceedings of the 9th International Conference on Tests \& Proofs (TAP) -- A STAF conference},
  year = {2015},
  editor = {Jasmin Christian Blanchette and Nikolai Kosmatov},
  series = {Lecture Notes in Computer Science},
  volume = {9154},
  pages = {XIII--XV},
  month = {July},
  publisher = {Springer},
  note = {Abstract of invited tutorial},
}

Carlo A. Furia, Julian Tschannen, and Bertrand Meyer.
The Gotthard Approach: Designing an Integrated Verification Environment for Eiffel.
In Proceedings of the 1st Workshop on Formal Integrated Development Environment (F-IDE) – An ETAPS event.
Electronic Proceedings in Theoretical Computer Science, 149:1–2, April 2014.

This is an invited talk I gave at the F-IDE workshop organized by Catherine Dubois, Dimitra Giannakopoulou, and Dominique Mery. Julian Tschannen contributed to the presentation with a live demo of EVE.

@InProceedings{FIDE14-gotthard,
  author = {Carlo A. Furia and Julian Tschannen and Bertrand Meyer},
  title = {The {G}otthard Approach: Designing an Integrated Verification Environment for {E}iffel},
  booktitle = {Proceedings of the 1st Workshop on Formal Integrated Development Environment (F-IDE) -- An ETAPS event},
  year = {2014},
  editor = {Catherine Dubois and Dimitra Giannakopoulou and Dominique M{\'e}ry},
  series = {Electronic Proceedings in Theoretical Computer Science},
  volume = {149},
  pages = {1--2},
  month = {April},
  publisher = {EPTCS},
  note = {Abstract of invited talk},
}

Steven Fraser, Luciano Baresi, Jane Cleland-Huang, Carlo A. Furia, Georges Gonthier, Paola Inverardi, and Moshe Y. Vardi.
A publication culture in software engineering (panel).
In Proceedings of the 9th Joint Meeting of the European Software Engineering Conference and ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE).
Pages 19–23, ACM, August 2013.

In this panel organized by Steven Fraser, we discussed "the publication process in the software engineering community" and "how it serves the needs of the community, [and] whether it is fair", concluding with "a discussion on suggested next steps".

@InProceedings{FSE13-PublicationPanel,
  author = {Steven Fraser and Luciano Baresi and Jane Cleland-Huang and Carlo A. Furia and Georges Gonthier and Paola Inverardi and Moshe Y. Vardi},
  title = {A publication culture in software engineering (panel)},
  booktitle = {Proceedings of the 9th Joint Meeting of the European Software Engineering Conference and ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE)},
  pages = {19--23},
  year = {2013},
  month = {August},
  publisher = {ACM}
}

Carlo Alberto Furia.
Scaling Up the Formal Analysis of Real-Time Systems.
PhD Thesis.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, May 2007.

Abstract
We develop techniques and methods to scale the application of formal methods up to large and heterogeneous systems. Our solution focuses on real-time systems, and develops along the two aspects of compositionality and integration.
In the first part, we provide a compositional framework for specifying and verifying properties of modular systems, written using metric temporal logics with a descriptive approach. The framework consists of technical as well as methodological features. On the technical side, we provide an array of compositional inference rules to deduce facts about the composition of modules from facts about each component in isolation. Each rule has different features that make it more suitable for certain classes of systems. We also provide a general methodology to guide the formal specification of modular systems, and the application of the inference rules to verify them.
In the second part, we consider the problem of integration for systems with both discrete-time and continuous-time components. We provide suitable conditions under which metric temporal logic formulas have a consistent truth value whether they are interpreted in discrete or in continuous time, and we identify a language subset that meets these conditions. We also characterize the conditions and the language subset in detail. Our approach to integration permits the verification of global properties of heterogeneous systems.

@PhdThesis{Fur07-PhD,
  author = {Carlo Alberto Furia},
  title = {Scaling Up the Formal Analysis of Real-Time Systems},
  school = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2007},
  month = {May}
}

Carlo Alberto Furia.
Compositional Proofs for Real-Time Modular Systems.
Master's Thesis (Tesi di Laurea).
Politecnico di Milano, December 2003.

Notes
The "Laurea degree thesis" is basically an extended version of the Master's thesis, and it's entirely written in English.

Folklore note
When I graduated, the degree I earned was simply called "Laurea", and concluded five years of university studies (thus basically corresponding to a Master's). After a reform of the Italian public university system, five-year degrees were split into a three-year degree followed by a two-year one, similar to a Bachelor's followed by a Master's. However, the three-year degree is now the one called "Laurea", whereas the degree obtained after two more years has gone through the names "Laurea Specialistica" (specialized laurea) and the silly-sounding (at least to me) "Laurea Magistrale" (magisterial laurea). I don't know what it will be renamed in the future, but I just wanted to clarify which "Laurea" I'm referring to here.

@MastersThesis{Fur03-Laurea,
  author = {Carlo Alberto Furia},
  title = {Compositional Proofs for Real-Time Modular Systems},
  school = {Politecnico di Milano},
  year = {2003},
  month = {December},
  note = {(Tesi di Laurea)}
}

Carlo Alberto Furia.
Compositional Proofs for Real-Time Modular Systems.
Master's Thesis.
University of Illinois at Chicago, October 2003.

@MastersThesis{Fur03-MS,
  author = {Carlo Alberto Furia},
  title = {Compositional Proofs for Real-Time Modular Systems},
  school = {University of Illinois at Chicago},
  year = {2003},
  month = {October}
}

Francisco Gomes de Oliveira Neto, Richard Torkar, Robert Feldt, Lucas Gren, Carlo A. Furia, and Ziwei Huang.
Evolution of statistical analysis in empirical software engineering research: Current state and steps forward.
arXiv.org > cs > 1706.00933, July 2019.

@Misc{EvolutionStats-TR19,
  author = {Francisco {Gomes de Oliveira Neto} and Richard Torkar and Robert Feldt and Lucas Gren and Carlo A. Furia and Ziwei Huang},
  title = {Evolution of statistical analysis in empirical software engineering research: Current state and steps forward},
  howpublished = {\url{https://arxiv.org/abs/1706.00933}},
  month = {July},
  year = {2019}
}

YuTing Chen and Carlo A. Furia.
Robustness Testing of Intermediate Verifiers.
arXiv.org > cs > 1805.03296, May 2018.

Abstract
Program verifiers are not exempt from the bugs that affect nearly every piece of software. In addition, they often exhibit brittle behavior: their performance changes considerably with details of how the input program is expressed — details that should be irrelevant, such as the order of independent declarations. Such a lack of robustness frustrates users who have to spend considerable time figuring out a tool's idiosyncrasies before they can use it effectively.
This paper introduces a technique to detect lack of robustness of program verifiers; the technique is lightweight and fully automated, as it is based on testing methods (such as mutation testing and metamorphic testing). The key idea is to generate many simple variants of a program that initially passes verification. All variants are, by construction, equivalent to the original program; thus, any variant that fails verification indicates lack of robustness in the verifier.
We implemented our technique in a tool called mugie, which operates on programs written in the popular Boogie language for verification — used as an intermediate representation in numerous program verifiers. Experiments targeting 135 Boogie programs indicate that brittle behavior occurs fairly frequently (16 programs) and is not hard to trigger. Based on these results, the paper discusses the main sources of brittle behavior and suggests means of improving robustness.

@Misc{RobustnessTesting-TR-20180508,
  author = {YuTing Chen and Carlo A. Furia},
  title = {Robustness Testing of Intermediate Verifiers},
  howpublished = {\url{http://arxiv.org/abs/1805.03296}},
  month = {May},
  year = {2018}
}
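
To give a concrete flavor of the idea, here is a minimal sketch of a robustness test of the kind described in the abstract: it generates variants of a verified program by permuting independent top-level declarations and flags any variant whose verification outcome changes. The sketch is only illustrative (it is not mugie): the verifier command line, the "0 errors" output check, and the assumption that the shuffled declarations are indeed independent are all placeholders.

import random
import subprocess
import tempfile

def verifies(boogie_source: str) -> bool:
    """Run a Boogie-style verifier on the given source and report success.
    The command line and output check are placeholders; adapt them to your setup."""
    with tempfile.NamedTemporaryFile("w", suffix=".bpl", delete=False) as f:
        f.write(boogie_source)
        path = f.name
    result = subprocess.run(["boogie", path], capture_output=True, text=True)
    return "0 errors" in result.stdout

def declaration_permutations(declarations, max_variants=20):
    """Yield up to max_variants distinct random permutations of top-level
    declarations (assumed independent, so every permutation is equivalent)."""
    seen = set()
    for _ in range(max_variants):
        variant = declarations[:]
        random.shuffle(variant)
        key = tuple(variant)
        if key not in seen:
            seen.add(key)
            yield variant

def robustness_check(declarations):
    """Return the permutations of an initially verified program that fail
    verification: each one is evidence of brittleness in the verifier."""
    assert verifies("\n\n".join(declarations)), "the seed program must verify"
    return [variant for variant in declaration_permutations(declarations)
            if not verifies("\n\n".join(variant))]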

Carlo A. Furia.
Bayesian Statistics in Software Engineering: Practical Guide and Case Studies.
arXiv.org > cs > 1608.06865, August 2016.

@Misc{BayesianStats-TR-20150828,
  author = {Carlo A. Furia},
  title = {{B}ayesian Statistics in Software Engineering: Practical Guide and Case Studies},
  howpublished = {\url{https://arxiv.org/abs/1608.06865}},
  month = {August},
  year = {2016}
}

Michael Ameri and Carlo A. Furia.
Why Just Boogie? Translating Between Intermediate Verification Languages.
arXiv.org > cs > 1601.00516, January 2016.

Abstract
The verification systems Boogie and Why3 use their respective intermediate languages to generate verification conditions from high-level programs. Since the two systems support different back-end provers (such as Z3 and Alt-Ergo) and are used to encode different high-level languages (such as C# and Java), being able to translate between their intermediate languages would provide a way to reuse one system's features to verify programs meant for the other. This paper describes a translation of Boogie into WhyML (Why3's intermediate language) that preserves semantics, verifiability, and program structure to a large degree. We implemented the translation as a tool and applied it to 194 Boogie-verified programs of various sources and sizes; Why3 verified 84% of the translated programs with the same outcome as Boogie. These results indicate that the translation is often effective and practically applicable.

@Misc{WhyJustBoogie-TR-20160104,
  author = {Michael Ameri and Carlo A. Furia},
  title = {{Why} Just {Boogie}? {T}ranslating Between Intermediate Verification Languages},
  howpublished = {\url{http://arxiv.org/abs/1601.00516}},
  month = {January},
  year = {2016}
}

Julian Tschannen, Carlo A. Furia, Martin Nordio, and Nadia Polikarpova.
AutoProof: Auto-active Functional Verification of Object-oriented Programs.
arXiv.org > cs > 1501.03063, January 2015.

Abstract
Auto-active verifiers provide a level of automation intermediate between fully automatic and interactive: users supply code with annotations as input while benefiting from a high level of automation in the back-end. This paper presents AutoProof, a state-of-the-art auto-active verifier for object-oriented sequential programs with complex functional specifications. AutoProof fully supports advanced object-oriented features and a powerful methodology for framing and class invariants, which make it applicable in practice to idiomatic object-oriented patterns. The paper focuses on describing AutoProof's interface, design, and implementation features, and demonstrates AutoProof's performance on a rich collection of benchmark problems. The results attest to AutoProof's competitiveness among tools in its league on cutting-edge functional verification of object-oriented programs.

@Misc{AutoProof-tool-TR-20150114,
  author = {Julian Tschannen and Carlo A. Furia and Martin Nordio and Nadia Polikarpova},
  title = {{AutoProof}: Auto-active Functional Verification of Object-oriented Programs},
  howpublished = {\url{http://arxiv.org/abs/1501.03063}},
  month = {January},
  year = {2015}
}

Sebastian Nanz and Carlo A. Furia.
A Comparative Study of Programming Languages in Rosetta Code.
arXiv.org > cs > 1409.0252, September 2014.

Abstract
Sometimes debates on programming languages are more religious than scientific. Questions about which language is more succinct or efficient, or makes developers more productive are discussed with fervor, and their answers are too often based on anecdotes and unsubstantiated beliefs. In this study, we use the largely untapped research potential of Rosetta Code, a code repository of solutions to common programming tasks in various languages, to draw a fair and well-founded comparison. Rosetta Code offers a large data set for analysis. Our study is based on 7087 solution programs corresponding to 745 tasks in 8 widely used languages representing the major programming paradigms (procedural: C and Go; object-oriented: C# and Java; functional: F# and Haskell; scripting: Python and Ruby). Our statistical analysis reveals, most notably, that: functional and scripting languages are more concise than procedural and object-oriented languages; C is hard to beat when it comes to raw speed on large inputs, but performance differences over inputs of moderate size are less pronounced and allow even interpreted languages to be competitive; compiled strongly-typed languages, where more defects can be caught at compile time, are less prone to runtime failures than interpreted or weakly-typed languages. We discuss implications of these results for developers, language designers, and educators.

@Misc{RosettaCode-TR-02092014,
  author = {Sebastian Nanz and Carlo A. Furia},
  title = {A Comparative Study of Programming Languages in {R}osetta {C}ode},
  howpublished = {\url{http://arxiv.org/abs/1409.0252}},
  month =         {September},
  year =          {2014}
}

Juan P. Galeotti, Carlo A. Furia, Eva May, Gordon Fraser, and Andreas Zeller.
Inferring Loop Invariants by Mutation, Dynamic Analysis, and Static Checking.
arXiv.org > cs > 1407.5286, July 2014, revised December 2014, March 2015.

Abstract
Verifiers that can prove programs correct against their full functional specification require, for programs with loops, additional annotations in the form of loop invariants — properties that hold for every iteration of a loop. We show that significant loop invariant candidates can be generated by systematically mutating postconditions; then, dynamic checking (based on automatically generated tests) weeds out invalid candidates, and static checking selects provably valid ones. We present a framework that automatically applies these techniques to support a program prover, paving the way for fully automatic verification without manually written loop invariants: Applied to 28 methods (including 39 different loops) from various java.util classes (occasionally modified to avoid using Java features not fully supported by the static checker), our DynaMate prototype automatically discharged 97% of all proof obligations, resulting in automatic complete correctness proofs of 25 out of the 28 methods — outperforming several state-of-the-art tools for fully automatic verification.

@Misc{DynaMate-TR-20072014,
  author = {Juan P. Galeotti and Carlo A. Furia and Eva May and Gordon Fraser and Andreas Zeller},
  title = {Automating Full Functional Verification of Programs with Loops},
  howpublished = {\url{http://arxiv.org/abs/1407.5286}},
  month =         {July},
  year =          {2014},
  note = {Last revised in March 2015}
}
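
The core idea of generating invariant candidates by mutating postconditions can be illustrated with a toy example (a sketch under simplifying assumptions, not DynaMate's actual machinery, which targets Java): for a summation loop with postcondition result == sum(a[0:n]), replacing the constant bound n with the loop variable i yields a candidate invariant; dynamic checking against execution traces weeds out candidates falsified by tests, and the surviving ones are handed to a static checker.

def candidate_invariants(postcondition: str, constants: list[str], loop_var: str) -> list[str]:
    """Mutate a postcondition by replacing each listed constant with the loop
    variable; every mutant is a loop-invariant candidate (naive string-based toy)."""
    return [postcondition.replace(c, loop_var) for c in constants if c in postcondition]

def dynamically_plausible(candidate: str, states: list[dict]) -> bool:
    """Keep only candidates that hold in every observed loop iteration
    (states would be recorded while running automatically generated tests)."""
    return all(eval(candidate, {"sum": sum}, state) for state in states)

# Example: summing an array a of length n into result.
post = "result == sum(a[0:n])"
candidates = candidate_invariants(post, constants=["n"], loop_var="i")
# -> ["result == sum(a[0:i])"], which is indeed the invariant of the summation loop.

# Loop states observed while running a test with a = [3, 1, 2]:
states = [{"a": [3, 1, 2], "i": 0, "result": 0},
          {"a": [3, 1, 2], "i": 1, "result": 3},
          {"a": [3, 1, 2], "i": 2, "result": 4},
          {"a": [3, 1, 2], "i": 3, "result": 6}]
plausible = [c for c in candidates if dynamically_plausible(c, states)]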

Carlo A. Furia.
Rotation of Sequences: Algorithms and Proofs.
arXiv.org > cs > 1406.5453, June 2014, revised November 2014, February 2015.

Abstract
Sequence rotation consists of a circular shift of the sequence's elements by a given number of positions. We present the four classic algorithms to rotate a sequence; the loop invariants underlying their correctness; detailed correctness proofs; and fully annotated versions for the verifiers Boogie, Dafny, and ESC/Java2. The presentation illustrates in detail both how the algorithms work and what it takes to carry out mechanized proofs of their correctness.

@Misc{F14-TR-20062014,
  author = {Carlo A. Furia},
  title = {Rotation of Sequences: Algorithms and Proofs},
  howpublished = {\url{http://arxiv.org/abs/1406.5453}},
  month = {June},
  year = {2014},
  note = {Last revised in February 2015}
}
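
As a taste of the report's content, here is the well-known reversal-based rotation algorithm in Python, with its key invariants stated as comments (a sketch that only mirrors the idea; the report's fully annotated versions target Boogie, Dafny, and ESC/Java2).

def reverse(a: list, low: int, high: int) -> None:
    """Reverse the slice a[low:high] in place.
    Loop invariant: positions low..i-1 and j+1..high-1 hold the original values
    of their mirror positions, while a[i:j+1] is still untouched."""
    i, j = low, high - 1
    while i < j:
        a[i], a[j] = a[j], a[i]
        i += 1
        j -= 1

def rotate(a: list, r: int) -> None:
    """Rotate a to the left by r positions (0 <= r <= len(a)) by three reversals:
    reversing the two blocks and then the whole sequence moves a[r:] in front of
    a[:r] while preserving the order of the elements within each block."""
    n = len(a)
    reverse(a, 0, r)   # a == reverse(A) + B, where A, B are the original a[:r], a[r:]
    reverse(a, r, n)   # a == reverse(A) + reverse(B)
    reverse(a, 0, n)   # a == B + A, i.e., the rotation

# Example: rotating [1, 2, 3, 4, 5] to the left by 2 yields [3, 4, 5, 1, 2].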

Yu Pei, Carlo A. Furia, Martin Nordio, Yi Wei, Bertrand Meyer, and Andreas Zeller.
Automated Fixing of Programs with Contracts.
arXiv.org > 1403.1117, March 2014.

Abstract
This paper describes AutoFix, an automatic debugging technique that can fix faults in general-purpose software. To provide high-quality fix suggestions and to enable automation of the whole debugging process, AutoFix relies on the presence of simple specification elements in the form of contracts (such as pre- and postconditions). Using contracts enhances the precision of dynamic analysis techniques for fault detection and localization, and for validating fixes. The only required user input to the AutoFix supporting tool is then a faulty program annotated with contracts; the tool produces a collection of validated fixes for the fault ranked according to an estimate of their suitability.
In an extensive experimental evaluation, we applied AutoFix to over 200 faults in four code bases of different maturity and quality (of implementation and of contracts). AutoFix successfully fixed 42% of the faults, producing, in the majority of cases, corrections of quality comparable to those competent programmers would write; the used computational resources were modest, with an average time per fix below 20 minutes on commodity hardware. These figures compare favorably to the state of the art in automated program fixing, and demonstrate that the AutoFix approach is successfully applicable to reduce the debugging burden in real-world scenarios.

@Misc{AutoFix-TR-05032014,
  author = {Yu Pei and Carlo A. Furia and Martin Nordio and Yi Wei and Bertrand Meyer and Andreas Zeller},
  title = {Automated Fixing of Programs with Contracts},
  howpublished = {\url{http://arxiv.org/abs/1403.1117}},
  month = {March},
  year = {2014}
}

Nadia Polikarpova, Julian Tschannen, Carlo A. Furia, and Bertrand Meyer.
Flexible Invariants Through Semantic Collaboration.
arXiv.org > 1311.6329, November 2013.

Abstract
Modular reasoning about class invariants is challenging in the presence of dependencies among collaborating objects that need to maintain global consistency. This paper presents semantic collaboration: a novel methodology to specify and reason about class invariants of sequential object-oriented programs, which models dependencies between collaborating objects by semantic means. Combined with a simple ownership mechanism and useful default schemes, semantic collaboration achieves the flexibility necessary to reason about complicated inter-object dependencies but requires limited annotation burden when applied to standard specification patterns. The methodology is implemented in AutoProof, our program verifier for the Eiffel programming language (but it is applicable to any language supporting some form of representation invariants). An evaluation on several challenge problems proposed in the literature demonstrates that it can handle a variety of idiomatic collaboration patterns, and is more widely applicable than the existing invariant methodologies.

@Misc{SemiCola-TR13-25112013,
  author = {Nadia Polikarpova and Julian Tschannen and Carlo A. Furia and Bertrand Meyer},
  title = {Flexible Invariants Through Semantic Collaboration},
  howpublished = {\url{http://arxiv.org/abs/1311.6329}},
  month = {November},
  year = {2013}
}

Carlo A. Furia and Paola Spoletini.
Bounded Variability of Metric Temporal Logic.
arXiv.org > 1306.2141, June 2013, revised July 2014.

Abstract
Previous work has shown that reasoning with real-time temporal logics is often simpler when restricted to models with bounded variability—where no more than v events may occur every V time units, for given v, V. When reasoning about formulas with intrinsic bounded variability, one can employ the simpler techniques that rely on bounded variability, without any loss of generality. What is then the complexity of algorithmically deciding which formulas have intrinsic bounded variability?
In this paper, we study the problem with reference to Metric Temporal Logic (MTL). We prove that deciding bounded variability of MTL formulas is undecidable over dense-time models, but with an undecidability degree lower than generic dense-time MTL satisfiability. Over discrete-time models, instead, deciding MTL bounded variability has the same exponential-space complexity as satisfiability. To complement these negative results, we also briefly discuss small fragments of MTL that are more amenable to reasoning about bounded variability.

@Misc{BoundedVarMTL-TR13-10062013,
  author = {Carlo A. Furia and Paola Spoletini},
  title = {Bounded Variability of Metric Temporal Logic},
  howpublished = {\url{http://arxiv.org/abs/1306.2141}},
  month = {June},
  year = {2013},
  note = {Last revised in July 2014}
}

Carlo A. Furia and Dino Mandrioli.
Turing: la vita, l'opera, l'impatto.
La Matematica nella Società e nella Cultura – Rivista dell'Unione Matematica Italiana.
Series I, V(2):105–148, August 2012 (in Italian).

Abstract
Written on the occasion of his birth's 100th anniversary, this article revisits the main steps of Alan Turing's life, and illustrates his fundamental contributions and his preeminent impact in the foundation of modern computer science. After an initial section that outlines Turing's biography and his most important results, we give a detailed presentation -- tailored to a general public with basic mathematical background but no expertise in theoretical computer science -- of what is possibly the most important and best known result of Turing's: his 1937 paper on the theory of computation, which introduces the class of automata known as "Turing machines". Finally, in the closing section we offer some historical/philosophical remarks about the notion of computation and its intrinsic limitations; this shows how Turing's contribution goes well beyond the limits of pure theoretical speculation as it carries practical and cultural implications that are general and still markedly relevant to the present time.

  • The paper in PDF format.
@Article{FM12-TURING12,
  author = 		  {Carlo A. Furia and Dino Mandrioli},
  title = 		  {{T}uring: la vita, l'opera, l'impatto},
  journal = 	  {La Matematica nella Societ{\`a} e nella Cultura -- Rivista dell'Unione Matematica Italiana},
  year = 		  {2012},
  volume = 	  {V},
  number = 	  {2},
  pages = 	  {105--148},
  month = 	  {August},
  note = 	  {Series I; in Italian}
}

H.-Christian Estler, Carlo A. Furia, Martin Nordio, Marco Piccioni, and Bertrand Meyer.
Contracts in Practice.
arXiv.org > 1211.4775, August 2012 (latest revision in February 2014).

Abstract
Contracts are a form of lightweight formal specification embedded in the program text. Being executable parts of the code, they encourage programmers to devote proper attention to specifications, and help maintain consistency between specification and implementation as the program evolves. The present study investigates how contracts are used in the practice of software development. Based on an extensive empirical analysis of 21 contract-equipped Eiffel, C#, and Java projects totaling more than 260 million lines of code over 7700 revisions, it explores, among other questions: 1) which kinds of contract elements (preconditions, postconditions, class invariants) are used more often; 2) how contracts evolve over time; 3) the relationship between implementation changes and contract changes; and 4) the role of inheritance in the process. It has found, among other results, that: the percentage of program elements that include contracts is above 33% for most projects and tends to be stable over time; there is no strong preference for a certain type of contract element; contracts are quite stable compared to implementations; and inheritance does not significantly affect qualitative trends of contract usage.

@Misc{HowSpecChange-TR-17082012,
  author = {H.-Christian Estler and Carlo A. Furia and Martin Nordio and Marco Piccioni and Bertrand Meyer},
  title = {Contracts in Practice},
  howpublished = {\url{http://arxiv.org/abs/1211.4775}},
  month = {August},
  year = {2012},
  note = {Latest revision: February 2014}
}

Carlo A. Furia, Bertrand Meyer, and Sergey Velder.
Loop Invariants: Analysis, Classification, and Examples.
arXiv.org > 1211.4470, November 2012, revised June 2013, January 2014.

Abstract
A key step in the verification of imperative programs is to identify, for every loop, a loop invariant: a property ensured by the initialization and maintained by every iteration so that, when combined with the exit condition, it yields the loop's final effect. The invariant is a fundamental characteristic of any loop, useful not only to prove its correctness but also simply to understand why it works. Expanding on this observation, we analyze fundamental algorithms from a wide range of application areas of computer science and study their invariants, showing that they belong to a small number of general patterns identified in the article. The conclusions also include suggestions for automatic invariant inference, and general techniques for model-based specification.

  • The report in PDF format.
  • Implementations in Boogie of the algorithms discussed in the paper are available in this repository under branch inv_survey. You can get it with the command line hg clone -u inv_survey https://bitbucket.org/sechairethz/verified.
@Misc{LoopInvariantSurvey-TR-19112012,
  author = {Carlo A. Furia and Bertrand Meyer and Sergey Velder},
  title = {Loop Invariants: Analysis, Classification, and Examples},
  howpublished = {\url{http://arxiv.org/abs/1211.4470}},
  month = {November},
  year = {2012},
  note = {Revised in June 2013, January 2014}
}
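
For readers unfamiliar with the notion, here is a classic instance of the kind of invariant the article analyzes, written in Python for brevity (the example implementations linked above are in Boogie): Euclid's algorithm by subtraction, with its invariant stated in the docstring.

def gcd(a: int, b: int) -> int:
    """Greatest common divisor of two positive integers by repeated subtraction.
    Loop invariant: x > 0 and y > 0 and gcd(x, y) == gcd(a, b).
    The invariant holds after initialization, is preserved by every iteration,
    and combined with the exit condition x == y it yields x == gcd(a, b)."""
    assert a > 0 and b > 0
    x, y = a, b
    while x != y:
        if x > y:
            x = x - y
        else:
            y = y - x
    return x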

Carlo A. Furia, Bertrand Meyer, Manuel Oriol, Andrey Tikhomirov, and Yi Wei.
The Search for the Laws of Automatic Random Testing.
arXiv.org > 1211.3257, November 2012.

Abstract
Can one estimate the number of remaining faults in a software system? A credible estimation technique would be immensely useful to project managers as well as customers. It would also be of theoretical interest, as a general law of software engineering. We investigate possible answers in the context of automated random testing, a method that is increasingly accepted as an effective way to discover faults. Our experimental results, derived from best-fit analysis of a variety of mathematical functions, based on a large number of automated tests of library code equipped with automated oracles in the form of contracts, suggest a poly-logarithmic law. Although further confirmation remains necessary on different code bases and testing techniques, we argue that understanding the laws of testing may bring significant benefits for estimating the number of detectable faults and comparing different projects and practices.

@Misc{LawsOfTesting-TR-14112012,
  author = {Carlo A. Furia and Bertrand Meyer and Manuel Oriol and Andrey Tikhomirov and Yi Wei},
  title = {The Search for the Laws of Automatic Random Testing},
  howpublished = {\url{http://arxiv.org/abs/1211.3257}},
  month = {November},
  year = {2012}
}

Nadia Polikarpova, Carlo A. Furia, Yu Pei, Yi Wei, and Bertrand Meyer.
What Good Are Strong Specifications?
arXiv.org > 1208.3337, August 2012.

Abstract
Experience with lightweight formal methods suggests that programmers are willing to write specifications if they bring tangible benefits to their usual development activities. This paper considers stronger specifications and studies whether they can be deployed as an incremental practice that brings additional benefits without being unacceptably expensive. We introduce a methodology that extends Design by Contract to write strong specifications of functional properties in the form of preconditions, postconditions, and invariants. The methodology aims at being palatable to developers who are not fluent in formal techniques but are comfortable with writing simple specifications. We evaluate the cost and the benefits of using strong specifications by applying the methodology to testing data structure implementations written in Eiffel and C#. In our extensive experiments, testing against strong specifications detects twice as many bugs as standard contracts, with a reasonable overhead in terms of annotation burden and runtime performance while testing. In the wide spectrum of formal techniques for software quality, testing against strong specifications lies in a "sweet spot" with a favorable benefit to effort ratio.

@Misc{WhatGoodStrongSpec-TR-16082012,
  author = {Nadia Polikarpova and Carlo A. Furia and Yu Pei and Yi Wei and Bertrand Meyer},
  title = {What Good Are Strong Specifications?},
  howpublished = {\url{http://arxiv.org/abs/1208.3337}},
  month = {August},
  year = {2012}
}
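
The difference between "standard" and "strong" contracts can be made concrete with a small schematic example (in Python with assertions, only to illustrate the distinction; the paper's methodology targets Eiffel and C# and uses a library of model classes): a standard postcondition for a stack operation constrains a few properties such as the size, whereas a strong one pins down the complete functional behavior by relating the new state to a mathematical model of the old one.

class Stack:
    """Array-based stack whose contracts are checked with assertions."""

    def __init__(self):
        self.items = []

    def push(self, x) -> None:
        old = tuple(self.items)                    # model of the old state
        self.items.append(x)
        # standard contract: the size grew by one and x is on top
        assert len(self.items) == len(old) + 1 and self.items[-1] == x
        # strong specification: the new content is exactly the old content plus x
        assert tuple(self.items) == old + (x,)

    def pop(self):
        assert self.items, "precondition: the stack must not be empty"
        old = tuple(self.items)
        top = self.items.pop()
        # standard contract: the size shrank by one
        assert len(self.items) == len(old) - 1
        # strong specification: the result is the old top and the rest is unchanged
        assert top == old[-1] and tuple(self.items) == old[:-1]
        return top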

Marco Trudel, Carlo A. Furia, Martin Nordio, Bertrand Meyer, and Manuel Oriol.
C to O-O Translation: Beyond the Easy Stuff.
arXiv.org > cs > 1206.5648, June 2012, revised April 2013.

Abstract
Can we reuse some of the huge code-base developed in C to take advantage of modern programming language features such as type safety, object-orientation, and contracts? This paper presents a source-to-source translation of C code into Eiffel, a modern object-oriented programming language, and the supporting tool C2Eif. The translation is completely automatic and supports the entire C language (ANSI, as well as many GNU C Compiler extensions, through CIL) as used in practice, including its usage of native system libraries and inlined assembly code. Our experiments show that C2Eif can handle C applications and libraries of significant size (such as vim and libgsl), as well as challenging benchmarks such as the GCC torture tests. The produced Eiffel code is functionally equivalent to the original C code, and takes advantage of some of Eiffel's object-oriented features to produce safe and easy-to-debug translations.

@Misc{C2Eif12-TR-26062012,
  author = {Marco Trudel and Carlo A. Furia and Martin Nordio and Bertrand Meyer and Manuel Oriol},
  title = {{C} to {O-O} Translation: Beyond the Easy Stuff},
  howpublished = {\url{http://arxiv.org/abs/1206.5648}},
  month = {June},
  year = {2012},
  note = {Latest revision: April 2013}
}

Carlo A. Furia.
Asynchronous Multi-Tape Automata Intersection: Undecidability and Approximation.
arXiv.org > cs > 1206.4860, June 2012, latest revision February 2014.

Abstract
When their reading heads are allowed to move completely asynchronously, finite-state automata with multiple tapes achieve a significant expressive power, but also lose useful closure properties—closure under intersection, in particular. This paper investigates to what extent it is still feasible to use multi-tape automata as recognizers of polyadic predicates on words. On the negative side, determining whether the intersection of asynchronous multi-tape automata is expressible is not even semidecidable. On the positive side, we present an algorithm that computes under-approximations of the intersection, and discuss simple conditions under which it can construct complete intersections. A prototype implementation and a few non-trivial examples demonstrate the algorithm in practice.

@Misc{F12-TR-21062012,
  author = {Carlo A. Furia},
  title = {Asynchronous Multi-Tape Automata Intersection: Undecidability and Approximation},
  howpublished = {\url{http://arxiv.org/abs/1206.4860}},
  month = {June},
  year = {2012},
  note = {Latest revision: February 2014}
}

Carlo A. Furia.
A Survey of Multi-Tape Automata.
arXiv.org > cs > 1205.0178, May 2012.

Abstract
This paper summarizes the fundamental expressiveness, closure, and decidability properties of various finite-state automata classes with multiple input tapes. It also includes an original algorithm for the intersection of one-way nondeterministic finite-state automata.

@Misc{F12-TR-01052012,
  author = {Carlo A. Furia},
  title = {A Survey of Multi-Tape Automata},
  howpublished = {\url{http://arxiv.org/abs/1205.0178}},
  month = {May},
  year = {2012},
  note = {Latest revision: November 2013}
}

Martin Nordio, H.-Christian Estler, Carlo A. Furia, and Bertrand Meyer.
Collaborative Software Development on the Web.
arXiv.org > cs > 1105.0768, September 2011.

Abstract
Software development environments (IDEs) have not followed the IT industry's inexorable trend towards distribution. They do too little to address the problems raised by today's increasingly distributed projects; neither do they facilitate collaborative and interactive development practices. A consequence is the continued reliance of today's IDEs on paradigms such as traditional configuration management, which were developed for earlier modes of operation and hamper collaborative projects. This contribution describes a new paradigm: cloud-based development, which caters to the specific needs of distributed and collaborative projects. The CloudStudio IDE embodies this paradigm by enabling developers to work on a shared project repository. Configuration management becomes unobtrusive; it replaces the explicit update-modify-commit cycle by interactive editing and real-time conflict tracking and management. A case study involving three teams of pairs demonstrates the usability of CloudStudio and its advantages for collaborative software development over traditional configuration management practices.

@Misc{NEFM-TR-20110919,
  author = {Martin Nordio and  H.-Christian Estler and Carlo A. Furia and Bertrand Meyer},
  title = {Collaborative Software Development on the Web},
  howpublished = {\url{http://arxiv.org/abs/1105.0768}},
  month = {September},
  year = {2011}
}

Yi Wei, Hannes Roth, Carlo A. Furia, Yu Pei, Alexander Horton, Michael Steindorfer, Martin Nordio, and Bertrand Meyer.
Stateful Testing: Finding More Errors in Code and Contracts.
arXiv.org > cs > 1108.1068, August 2011.

Abstract
Automated random testing has been shown to be an effective approach to finding faults, but still faces a major unsolved issue: how to generate test inputs diverse enough to find many faults and find them quickly. Stateful testing, the automated testing technique introduced in this article, generates new test cases that improve an existing test suite. The generated test cases are designed to violate the dynamically inferred contracts (invariants) characterizing the existing test suite. As a consequence, they are in a good position to detect new errors, and also to improve the accuracy of the inferred contracts by discovering those that are unsound.
Experiments on 13 data structure classes totalling over 28,000 lines of code demonstrate the effectiveness of stateful testing in improving over the results of long sessions of random testing: stateful testing found 68.4% new faults and improved the accuracy of automatically inferred contracts to over 99%, with just a 7% time overhead.

@Misc{WRFPHSNM-TR-20110804,
  author = {Yi Wei and Hannes Roth and Carlo A. Furia and Yu Pei and Alexander Horton and Michael Steindorfer and Martin Nordio and Bertrand Meyer},
  title = {Stateful Testing: Finding More Errors in Code and Contracts},
  howpublished = {\url{http://arxiv.org/abs/1108.1068}},
  month = {August},
  year = {2011}
}

Julian Tschannen, Carlo A. Furia, Martin Nordio, and Bertrand Meyer.
Verifying Eiffel Programs with Boogie.
arXiv.org > cs > 1106.4700, June 2011.

Abstract
Static program verifiers such as Spec#, Dafny, jStar, and VeriFast define the state of the art in automated functional verification techniques. The next open challenges are to make verification tools usable even by programmers not fluent in formal techniques. This paper presents AutoProof, a verification tool that translates Eiffel programs to Boogie and uses the Boogie verifier to prove them. In an effort to be usable with real programs, AutoProof fully supports several advanced object-oriented features including polymorphism, inheritance, and function objects. AutoProof also adopts simple strategies to reduce the amount of annotations needed when verifying programs (e.g., frame conditions). The paper illustrates the main features of AutoProof's translation, including some whose implementation is underway, and demonstrates them with examples and a case study.

  • The report in PDF format.
  • This work has been presented at the First International Workshop on Intermediate Verification Languages (Boogie'11), held in Wroclaw, Poland, on 1 August 2011.
@Misc{TFNM11-TR-20110624,
  author = {Julian Tschannen and Carlo A. Furia and Martin Nordio and Bertrand Meyer},
  title = {Verifying {E}iffel Programs with {B}oogie},
  howpublished = {\url{http://arxiv.org/abs/1106.4700}},
  month = {June},
  year = {2011}
}

Carlo A. Furia.
QFIS – A verifier for the theory of quantifier-free integer sequences.
User manual, v. 1.1, 2011–2012.

  • The manual in PDF format.
  • The theory of quantifier-free integer sequences is described in this paper.
  • QFIS is available for download from this page.
  • Version 0.1 was released on 24 March 2011.
@Misc{QFIS-manual,
  author = {Carlo A. Furia},
  title = {{QFIS} -- A verifier for the theory of quantifier-free integer sequences},
  howpublished = {User manual, v.~1.1},
  year = {2011--2012}
}

Yu Pei, Yi Wei, Carlo A. Furia, Martin Nordio, and Bertrand Meyer.
Code-Based Automated Program Fixing.
arXiv.org > cs > 1102.1059, February 2011, revised August 2011.

Abstract
Many programmers, when they encounter an error, would like to have the benefit of automatic fix suggestions—as long as they are, most of the time, adequate. Initial research in this direction has generally limited itself to specific areas, such as data structure classes with carefully designed interfaces, and relied on simple approaches.
To provide high-quality fix suggestions in a broad area of applicability, the present work relies on the presence of contracts in the code, and on the availability of dynamic analysis to gather evidence on the values taken by expressions derived from the program text.
The ideas have been built into the AutoFix-E2 automatic fix generator. Applications of AutoFix-E2 to general-purpose software, such as a library to manipulate documents, show that the approach provides an improvement over previous techniques, in particular purely model-based approaches.

@Misc{PWFNM11-TR-20110205,
  author = {Yu Pei and Yi Wei and Carlo A. Furia and Martin Nordio and Bertrand Meyer},
  title = {Code-Based Automated Program Fixing},
  howpublished = {\url{http://arxiv.org/abs/1102.1059}},
  month = {February},
  year = {2011},
  note = {Revised in August 2011}
}

Carlo A. Furia.
Review of The Calculus of Computation by A. R. Bradley and Z. Manna.
ACM SIGACT News, 42(1):32–35, March 2011.

@Article{Fur11-Review-BradleyManna,
  author = {Carlo A. Furia},
  title = {Review of \emph{The Calculus of Computation} by {A.\ R.\ Bradley} and {Z.\ Manna}},
  journal = {ACM SIGACT News},
  year = {2011},
  volume = {42},
  number = {1},
  pages = {32--35},
  month = {March}
}

Carlo A. Furia, Alberto Leva, Martina Maggio, and Paola Spoletini.
A control-theoretical methodology for the scheduling problem.
arXiv.org > cs > 1009.3455, September 2010.

Abstract
This paper presents a novel methodology to develop scheduling algorithms. The scheduling problem is phrased as a control problem, and control-theoretical techniques are used to design a scheduling algorithm that meets specific requirements. Unlike most approaches to feedback scheduling, where a controller integrates a "basic" scheduling algorithm and dynamically tunes its parameters and hence its performance, our methodology essentially reduces the design of a scheduling algorithm to the synthesis of a controller that closes the feedback loop. This approach allows the re-use of control-theoretical techniques to design efficient scheduling algorithms; it frames and solves the scheduling problem in a general setting; and it can naturally tackle certain peculiar requirements such as robustness and dynamic performance tuning. A few experiments demonstrate the feasibility of the approach on a real-time benchmark.

@Misc{FLMS10-TR-20092010,
  author = {Carlo A. Furia and Alberto Leva and Martina Maggio and Paola Spoletini},
  title = {A control-theoretical methodology for the scheduling problem},
  howpublished = {\url{http://arxiv.org/abs/1009.3455}},
  month = {September},
  year = {2010}
}

Nadia Polikarpova, Carlo A. Furia, and Bertrand Meyer.
Specifying Reusable Components.
arXiv.org > cs > 1003.5777, March 2010.

Abstract
Reusable software components need well-defined interfaces, rigorously and completely documented features, and a design amenable both to reuse and to formal verification; all these requirements call for expressive specifications. This paper outlines a rigorous foundation for model-based contracts, a methodology to equip classes with expressive contracts supporting the accurate design, implementation, and formal verification of reusable components. Model-based contracts conservatively extend the classic Design by Contract by means of expressive models based on mathematical notions, which underpin the precise definitions of notions such as abstract equivalence and specification completeness. Preliminary experiments applying model-based contracts to libraries of data structures demonstrate the versatility of the methodology and suggest that it can introduce notions that are rigorous and yet intuitive and natural to use in practice.

@Misc{PFM10-TR-30032010,
  author = {Nadia Polikarpova and Carlo A. Furia and Bertrand Meyer},
  title = {Specifying Reusable Components},
  howpublished = {\url{http://arxiv.org/abs/1003.5777}},
  month = {March},
  year = {2010}
}

Paul Z. Kolano, Carlo A. Furia, Richard A. Kemmerer, and Dino Mandrioli.
Refinement and Verification of Real-Time Systems.
arXiv.org > cs > 1002.1796, February 2010.

Abstract
This paper discusses highly general mechanisms for specifying the refinement of a real-time system as a collection of lower level parallel components that preserve the timing and functional requirements of the upper level specification. These mechanisms are discussed in the context of ASTRAL, which is a formal specification language for real-time systems. Refinement is accomplished by mapping all of the elements of an upper level specification into lower level elements that may be split among several parallel components. In addition, actions that can occur in the upper level are mapped to actions of components operating at the lower level. This allows several types of implementation strategies to be specified in a natural way, while the price for generality (in terms of complexity) is paid only when necessary. The refinement mechanisms are illustrated first using a simple digital circuit and then through a highly complex phone system; finally, design guidelines gleaned from these specifications are presented.

@Misc{KFKM10-TR-09022010,
  author = {Paul Z. Kolano and Carlo A. Furia and Richard A. Kemmerer and Dino Mandrioli},
  title = {Refinement and Verification of Real-Time Systems},
  howpublished = {\url{http://arxiv.org/abs/1002.1796}},
  month = {February},
  year = {2010}
}

Carlo A. Furia.
What's Decidable About Sequences?
arXiv.org > cs > 1001.2100, January 2010.

Abstract
We present a first-order theory of sequences with integer elements, Presburger arithmetic, and regular constraints, which can model significant properties of data structures such as arrays and lists. We give a decision procedure for the quantifier-free fragment, based on an encoding into the first-order theory of concatenation; the procedure has PSPACE complexity. The quantifier-free fragment of the theory of sequences can express properties such as sortedness and injectivity, as well as Boolean combinations of periodic and arithmetic facts relating the elements of the sequence and their positions (e.g., "for all even i's, the element at position i has value i+3 or 2i"). The resulting expressive power is orthogonal to that of the most expressive decidable logics for arrays. Some examples demonstrate that the fragment is also suitable to reason about sequence-manipulating programs within the standard framework of axiomatic semantics.

@Misc{F10-TR-13012010,
  author = {Carlo A. Furia},
  title = {What's Decidable About Sequences?},
  howpublished = {\url{http://arxiv.org/abs/1001.2100}},
  month = {January},
  year = {2010}
}

Carlo A. Furia and Matteo Rossi.
A Theory of Sampling for Continuous-time Metric Temporal Logic.
arXiv.org > cs > 0911.5642, November 2009.

Abstract
This paper revisits the classical notion of sampling in the setting of real-time temporal logics for the modeling and analysis of systems. The relationship between the satisfiability of Metric Temporal Logic (MTL) formulas over continuous-time models and over discrete-time models is studied; more precisely, it is shown to what extent discrete-time sequences obtained by sampling continuous-time signals capture the semantics of MTL formulas over the two time domains. These results are applied to the problem of reducing the verification problem for MTL over continuous-time models to the same problem over discrete time, resulting in an automated, partial, and practically efficient discretization technique.
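
As a standard, generic illustration of why sampling may lose information (not the paper's own construction), consider a proposition that holds only strictly between two consecutive sampling instants:

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Hypothetical continuous-time behavior: $p$ holds exactly on the open
% interval $(0,1)$, and the behavior is sampled with period $\delta = 1$.
\[
  p(t) \;\iff\; 0 < t < 1 ,
  \qquad
  \text{sampling instants: } 0,\ \delta,\ 2\delta,\ \ldots \quad (\delta = 1).
\]
% Every sampling instant falls outside $(0,1)$, so the sampled discrete-time
% trace never observes $p$: the continuous-time formula $\Diamond p$
% (``eventually $p$'') is satisfied, while its discrete-time counterpart is not.
\end{document}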

@Misc{FR09-TR-30112009,
  author = {Carlo A. Furia and Matteo Rossi},
  title = {A Theory of Sampling for Continuous-time {M}etric {T}emporal {L}ogic},
  howpublished = {\url{http://arxiv.org/abs/0911.5642}},
  month = {November},
  year = {2009}
}

Carlo A. Furia and Bertrand Meyer.
Inferring Loop Invariants using Postconditions.
arXiv.org > cs > 0909.0884, September 2009.

Abstract
One of the obstacles in automatic program proving is to obtain suitable loop invariants. The invariant of a loop is a weakened form of its postcondition (the loop's goal, also known as its contract); the present work takes advantage of this observation by using the postcondition as the basis for invariant inference, applying various heuristics, such as "uncoupling", that prove useful in many important algorithms. Thanks to these heuristics, the technique is able to infer invariants for a large variety of loop examples. We present the theory behind the technique, its implementation (freely available for download and currently relying on Microsoft Research's Boogie tool), and the results obtained.
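
As a generic illustration of an invariant obtained by weakening a postcondition (a toy Python sketch based on the common "constant relaxation" idea; the paper's implementation works on Boogie programs instead):

def array_sum(a):
    # Postcondition (the loop's goal): s == sum(a[:len(a)]).
    s, i = 0, 0
    while i < len(a):
        # Candidate invariant, obtained from the postcondition by relaxing
        # the constant len(a) into the loop variable i:
        assert s == sum(a[:i])
        s += a[i]
        i += 1
    # At exit, the invariant together with i == len(a) gives back the postcondition.
    assert s == sum(a)
    return s

print(array_sum([3, 1, 4, 1, 5]))  # 14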

@Misc{FM09-TR-04092009,
  author = {Carlo A. Furia and Bertrand Meyer},
  title = {Inferring Loop Invariants using Postconditions},
  howpublished = {\url{http://arxiv.org/abs/0909.0884}},
  month = {September},
  year = {2009}
}

Marcello M. Bersani, Carlo A. Furia, Matteo Pradella, and Matteo Rossi.
Integrated Modeling and Verification of Real-Time Systems through Multiple Paradigms.
arXiv.org > cs > 0907.5074, July 2009.

Abstract
Complex systems typically have many different parts and facets, with different characteristics. In a multi-paradigm approach to modeling, formalisms with different natures are used in combination to describe complementary parts and aspects of the system. This can have a beneficial impact on the modeling activity, as different paradigms can be better suited to describe different aspects of the system. While each paradigm provides a different view on the many facets of the system, it is of paramount importance that a coherent comprehensive model emerges from the combination of the various partial descriptions. In this paper we present a technique to model different aspects of the same system with different formalisms, while keeping the various models tightly integrated with one another. In addition, our approach leverages the flexibility provided by a bounded satisfiability checker to encode the verification problem of the integrated model in the propositional satisfiability (SAT) problem; this allows users to carry out formal verification activities both on the whole model and on parts thereof. The effectiveness of the approach is illustrated through the example of a monitoring system.

@Misc{BFPR09-TR-29072009,
  author = {Marcello M. Bersani and Carlo A. Furia and Matteo Pradella and Matteo Rossi},
  title = {Integrated Modeling and Verification of Real-Time Systems through Multiple Paradigms},
  howpublished = {\url{http://arxiv.org/abs/0907.5074}},
  month = {July},
  year = {2009}
}

Carlo A. Furia and Paola Spoletini.
On Relaxing Metric Information in Linear Temporal Logic.
arXiv.org > cs > 0906.4711, June 2009, revised April 2010, April 2011, June 2011.

Abstract
Metric LTL formulas rely on the next operator to encode time distances, whereas qualitative LTL formulas use only the until operator. This paper shows how to transform any metric LTL formula M into a qualitative formula Q, such that Q is satisfiable if and only if M is satisfiable over words with variability bounded with respect to the largest distances used in M (i.e., occurrences of next), but the size of Q is independent of such distances. Besides the theoretical interest, this result can help simplify the verification of systems with time-granularity heterogeneity, where large distances are required to express the coarse-grain dynamics in terms of fine-grain time units.

  • The report in PDF format (June 2011 version).

Notes
The April 2010 version of the report is a major revision that eliminates the exponential blow-up in the transformation from M to Q (present in the June 2009 version of the report). The April 2011 version further refines the main result of the paper in considering the satisfiability of Q over unconstrained words. The June 2011 version is a minor revision which fixes some typos.

@Misc{FS09-TR-RelaxingMetricLTL,
  author = {Carlo A. Furia and Paola Spoletini},
  title = {On Relaxing Metric Information in Linear Temporal Logic},
  howpublished = {\url{http://arxiv.org/abs/0906.4711}},
  month = {June},
  year = {2009},
  note = {Last revised in June 2011}
}

Carlo A. Furia and Matteo Rossi.
MTL with Bounded Variability: Decidability and Complexity.
Technical Report 2008.10.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, May 2008.

Abstract
This paper investigates the properties of Metric Temporal Logic (MTL) over models in which time is dense but phenomena are constrained to have bounded variability. Contrary to the case of generic dense-time behaviors, MTL is proved to be fully decidable over models with bounded variability, if the variability bound is given. In these decidable cases, MTL complexity is shown to match that of simpler decidable logics such as MITL. In contrast, MTL is undecidable if all behaviors with variability bounded by some generic constant are considered, though with an undecidability degree that is lower than in the case of generic behaviors.

@TechReport{FR08-TR2008-10,
  author = {Carlo A. Furia and Matteo Rossi},
  title = {{MTL} with Bounded Variability: Decidability and Complexity},
  institution = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2008},
  number = {2008.10},
  month = {May}
}

Carlo A. Furia, Matteo Pradella, and Matteo Rossi.
Practical Automated Partial Verification of Multi-Paradigm Real-Time Models.
arXiv.org > cs > 0804.4383, April 2008.

Abstract
This article introduces a fully automated verification technique that makes it possible to analyze real-time systems described using a continuous notion of time and a mixture of operational (i.e., automata-based) and descriptive (i.e., logic-based) formalisms. The technique relies on the reduction, under reasonable assumptions, of the continuous-time verification problem to its discrete-time counterpart. This reconciles in a viable and effective way the dense/discrete and operational/descriptive dichotomies that are often encountered in practice when it comes to specifying and analyzing complex critical systems. The article investigates the applicability of the technique through a significant example centered on a communication protocol. More precisely, concurrent runs of the protocol are formalized by parallel instances of a Timed Automaton, while the synchronization rules between these instances are specified through Metric Temporal Logic formulas, thus creating a multi-paradigm model. Verification tests run on this model using a bounded validity checker implementing the technique show consistent results and interesting performance.

@Misc{FPR08-TR-28042008,
  author = {Carlo A. Furia and Matteo Pradella and Matteo Rossi},
  title = {Practical Automated Partial Verification of Multi-Paradigm Real-Time Models},
  howpublished = {\url{http://arxiv.org/abs/0804.4383}},
  month = {April},
  year = {2008}
}

Carlo A. Furia and Paola Spoletini.
MTL Satisfiability over the Integers.
Technical Report 2008.2.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, February 2008.

Abstract
We investigate the satisfiability problem for metric temporal logic (MTL) with both past and future operators over linear discrete bi-infinite time models — where time is unbounded both in the future and in the past — isomorphic to the integer numbers. We provide a technique to reduce satisfiability over the integers to satisfiability over the well-known mono-infinite time model of natural numbers, and we show how to implement the technique through an automata-theoretic approach. We also prove that MTL satisfiability over the integers is EXPSPACE-complete, hence the given algorithm is optimal in the worst case.

@TechReport{FS08-TR2008-2,
  author = {Carlo A. Furia and Paola Spoletini},
  title = {{MTL} Satisfiability over the Integers},
  institution = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2008},
  number = {2008.2},
  month = {February}
}

Carlo A. Furia, Marco Mazzucchelli, Paola Spoletini, and Mara Tanelli.
Towards the Exhaustive Verification of Real-Time Aspects in Controller Implementation.
Technical Report 2008.1.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, January 2008.

Abstract
In industrial applications, the number of final products equipped with real-time automatic control systems that manage situations critical to human safety has dramatically increased. Thus, it is of growing importance that the control-system design flow also encompasses the controller's translation into software code and its embedding into a hardware and software network. In this paper, a tool-supported approach to the formal analysis of real-time aspects in controller implementation is proposed. The analysis can ensure that some desired properties of the control loop are preserved in its implementation on a distributed architecture. Moreover, the information extracted automatically from the model can also be used to tackle some design problems directly, such as the hardware sizing in the final implementation.

@TechReport{FMST08-TR2008-1,
  author = {Carlo A. Furia and Marco Mazzucchelli and Paola Spoletini and Mara Tanelli},
  title = {Towards the Exhaustive Verification of Real-Time Aspects in Controller Implementation},
  institution = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2008},
  number = {2008.1},
  month = {January}
}

Carlo A. Furia and Matteo Rossi.
On the Expressiveness of MTL Variants over Dense Time.
Technical Report 2007.41.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, May 2007.

Abstract
The basic modal operator bounded until of Metric Temporal Logic (MTL) comes in several variants. In particular, it can be strict (when it does not constrain the current instant) or not, and matching (when it requires its two arguments to eventually hold together) or not. This paper compares the relative expressiveness of the resulting MTL variants over dense time. We prove that the expressiveness is not affected by the variations when considering non-Zeno interpretations and arbitrary nesting of temporal operators. In contrast, the expressiveness changes for flat formulas (i.e., formulas without nesting), or when Zeno interpretations are allowed.

@TechReport{FR07-TR2007-41,
  author = {Carlo A. Furia and Matteo Rossi},
  title = {On the Expressiveness of {MTL} Variants over Dense Time},
  institution = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2007},
  number = {2007.41},
  month = {May}
}

Carlo A. Furia, Matteo Pradella, and Matteo Rossi.
Dense-Time MTL Verification Through Sampling.
Technical Report 2007.37.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, April 2007.

Abstract
This paper presents a verification technique for dense-time MTL based on discretization. The technique reduces the validity problem of MTL formulas from dense to discrete time, through the notion of sampling invariance, introduced in previous work [FR06]. Since the reduction is from an undecidable problem to a decidable one, the technique is necessarily incomplete, so it fails to provide conclusive answers on some problem instances. The paper discusses this shortcoming and hints at how it can be mitigated in practice. The verification technique has been implemented on top of a tool for discrete-time bounded validity checking; the paper also reports on in-the-small experiments with the tool, which show some promising results.

@TechReport{FPR07-TR2007-37,
  author = {Carlo A. Furia and Matteo Pradella and Matteo Rossi},
  title = {Dense-Time {MTL} Verification Through Sampling},
  institution = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2007},
  number = {2007.37},
  month = {April}
}

Carlo A. Furia, Dino Mandrioli, Angelo Morzenti, and Matteo Rossi.
Modeling Time in Computing: a taxonomy and a comparative survey.
arXiv.org > cs > 0807.4132, July 2008.
(A preliminary version appeared as Technical Report 2007.22, Dipartimento di Elettronica e Informazione, Politecnico di Milano, January 2007).

Abstract
The increasing relevance of areas such as real-time and embedded systems, pervasive computing, hybrid systems control, and biological and social systems modeling is bringing a growing attention to the temporal aspects of computing, not only in the computer science domain, but also in more traditional fields of engineering.
This article surveys various approaches to the formal modeling and analysis of the temporal features of computer-based systems, with a level of detail that is suitable also for non-specialists. In doing so, it provides a unifying framework, rather than just a comprehensive list of formalisms.
The paper first lays out some key dimensions along which the various formalisms can be evaluated and compared. Then, a significant sample of formalisms for time modeling in computing are presented and discussed according to these dimensions. The adopted perspective is, to some extent, historical, going from "traditional" models and formalisms to more modern ones.

  • The report in PDF format (October 2010 version, with several typos fixed).
@Misc{FMMR08-TR-25072008,
  author = {Carlo A. Furia and Dino Mandrioli and Angelo Morzenti and Matteo Rossi},
  title = {Modeling Time in Computing: a taxonomy and a comparative survey},
  howpublished = {\url{http://arxiv.org/abs/0807.4132}},
  month = {July},
  year = {2008},
  note = {(A preliminary version appeared as Technical Report 2007.22, Dipartimento di Elettronica e Informazione, Politecnico di Milano, January 2007)}
}

Carlo Alberto Furia.
Discrete Meets Continuous, Again.
Technical Report 2006.77.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, December 2006.

Abstract
This report collects several contributions related to the notion of sampling invariance introduced in [FR05,FR06], of which it can therefore be considered a follow-up and a complement.
In particular: we compare the expressiveness of the language R-Z-TRIO against that of MTL (and MITL), also showing a result about the expressiveness of non-strict temporal operators; we characterize behaviors of bounded variability and formulas that preserve the bounded variability requirement; we compare the notion of sampling invariance with that of digitization [HMP92]; we discuss how to describe the runs of a timed automaton with R-Z-TRIO formulas; we provide an example of application of the sampling invariance techniques to a simple verification problem; and we summarize several related works on the expressiveness, decidability, and complexity of formalisms that are somewhat related to the R-Z-TRIO temporal logic.

@TechReport{Fur06-TR2006-77,
  author = {Carlo Alberto Furia},
  title = {Discrete Meets Continuous, Again},
  institution = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2006},
  number = {2006.77},
  month = {December}
}

Carlo Alberto Furia.
Compositionality Made Up.
Technical Report 2006.76.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, December 2006.

Abstract
This paper develops some methods and techniques for the practical use of compositionality to prove the correctness of modular systems, with reference to the metric temporal logic TRIO. It advocates a "lightweight" approach to compositionality, driven by the specifics of the system under development, where methodology is as important as technical solutions.

@TechReport{Fur06-TR2006-76,
  author = {Carlo Alberto Furia},
  title = {Compositionality Made Up},
  institution = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2006},
  number = {2006.76},
  month = {December}
}

Carlo A. Furia, Matteo Rossi, Elisabeth A. Strunk, Dino Mandrioli, and John C. Knight.
Raising Formal Methods To The Requirements Level.
Technical Report 2006.64.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, November 2006.
Also: Technical Report CS-2006-24, Department of Computer Science, University of Virginia.

Abstract
In contrast with a formal notion of specification, requirements are often considered an informal entity. In a companion paper [SFRKM06], we have proposed a reference model that provides a clear distinction between requirements and specification, a distinction not based on the degree of formality. Using that notion of requirements, this paper shows how requirements as well as specifications can be formalized.
The formalization of requirements allows one to "lift" the well-known practices of formal analysis and verification from the specification/implementation level up to the highest level of abstraction in the development of a software product. In particular, we show how a formal validity argument can be constructed, proving that the formal specification satisfies its formal requirements. These ideas are demonstrated in an illustrative example based on a runway incursion prevention system, which was also partially presented in [SFRKM06]. Although our results and methods are of general applicability, we focus especially on the real-time aspects of the example; in order to support real-time formalization and reasoning, we exploit the ArchiTRIO formal language in a UML-like environment.

@TechReport{FRSMK06-TR2006-64,
  author = {Carlo A. Furia and Matteo Rossi and Elisabeth A. Strunk and Dino Mandrioli and John C. Knight},
  title = {Raising Formal Methods To The Requirements Level},
  institution = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2006},
  number = {2006.64},
  month = {November},
  note = {Also: Technical Report CS-2006-24, Department of Computer Science, University of Virginia}
}

Elisabeth A. Strunk, Carlo A. Furia, Matteo Rossi, John C. Knight, and Dino Mandrioli.
The Engineering Roles of Requirements and Specification.
Technical Report CS-2006-21.
Department of Computer Science, University of Virginia, October 2006.
Also: Technical Report 2006.61, Dipartimento di Elettronica e Informazione, Politecnico di Milano

Abstract
The distinction between requirements and specification is often confused in practice. This obstructs the system validation process, because it is unclear what exactly should be validated, and against what it should be validated. The reference model of Gunter et al. addresses this difficulty by providing a framework within which requirements can be distinguished from specification. It separates world phenomena from machine phenomena. However, it does not explain how the characterization can be used to help assure system validity.
In this paper, we enhance the reference model to account for certain key elements that are necessary to expose and clarify the distinction and the link between requirements and specification. We use the enhanced version to present a more refined picture of validity, where validation has two steps that can be undertaken separately. We use this picture to question whether the "what the system will do, not how it will do it" paradigm is useful in describing how to construct a specification, and propose an alternative. Finally, we present the requirements and specification for an illustrative example based on a runway incursion prevention system, with the ArchiTRIO formal language in a UML-like environment, to show how this might be done in practice.

@TechReport{SFRKM06-TR2006-61,
  author = {Elisabeth A. Strunk and Carlo A. Furia and Matteo Rossi and John C. Knight and Dino Mandrioli},
  title = {The Engineering Roles of Requirements and Specification},
  institution = {Department of Computer Science, University of Virginia},
  year = {2006},
  number = {CS-2006-21},
  month = {October},
  note = {Also: Technical Report 2006.61, Dipartimento di Elettronica e Informazione, Politecnico di Milano}
}

Carlo Alberto Furia.
Quantum Informatics: A Survey.
Technical Report 2006.16.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, January 2006.

Abstract
We survey the emerging field of quantum computing by showing its motivations and founding principles, and by presenting the most significant results in some selected areas. In particular, we outline the topics of quantum information, quantum algorithms, quantum cryptography, quantum finite automata, quantum error correction, and the physical realizations of quantum computing systems.

@TechReport{Fur06-TR2006-16,
  author = {Carlo Alberto Furia},
  title = {Quantum Informatics: A Survey},
  institution = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2006},
  number = {2006.16},
  month = {January}
}

Carlo A. Furia and Matteo Rossi.
When Discrete Met Continuous: on the integration of discrete- and continuous-time metric temporal logics.
Technical Report 2005.44.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, October 2005.

Abstract
Real-time systems usually encompass parts that are best described by a continuous-time model, such as physical processes under control, together with other components that are more naturally formalized by a discrete-time model, such as digital computing modules. Describing such systems in a unified framework based on metric temporal logic requires integrating formulas that are interpreted over discrete and over continuous time.
In this paper, we tackle this problem with reference to the metric temporal logic TRIO, that admits both a discrete-time and a continuous-time semantics. We identify sufficient conditions for a TRIO specification to be invariant under change of time model from discrete to continuous and vice versa. These conditions basically involve the restriction to a proper subset of the TRIO language (which we call TRIOsi) and a requirement on the finite variability over time of the values of the basic items that constitute the formulas of the specification. A specification which is invariant can then be verified entirely under the simpler discrete-time model, with the results of the verification holding for the continuous-time model as well.
We believe that this approach is general enough to be easily extendible to other temporal logics of comparable expressive power.

@TechReport{FR05-TR2005-44,
  author = {Carlo A. Furia and Matteo Rossi},
  title = {When Discrete Met Continuous: on the integration of discrete- and continuous-time metric temporal logics},
  institution = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2005},
  number = {2005.44},
  month = {October}
}

Carlo Alberto Furia.
A Compositional World: a survey of recent works on compositionality in formal methods.
Technical Report 2005.22.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, March 2005.

Abstract
We survey the most significant literature on compositional techniques for concurrent and real-time system verification. We especially focus on abstract frameworks for rely/guarantee compositionality that handle circularity, but we also consider other developments.

@TechReport{Fur05-TR2005-22,
  author = {Carlo Alberto Furia},
  title = {A Compositional World: a survey of recent works on compositionality in formal methods},
  institution = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2005},
  number = {2005.22},
  month = {March}
}

Carlo A. Furia, Dino Mandrioli, Angelo Morzenti, Matteo Pradella, Matteo Rossi, and Pierluigi San Pietro.
Higher-Order TRIO.
Technical Report 2004.28.
Dipartimento di Elettronica e Informazione, Politecnico di Milano, September 2004.

Notes
This paper introduces the basics about a higher-order extension of the first-order metric temporal logic TRIO.

@TechReport{FMMPRS04-TR2004-28,
  author = {Carlo A. Furia and Dino Mandrioli and Angelo Morzenti and Matteo Pradella and Matteo Rossi and Pierluigi {San Pietro}},
  title = {Higher-Order {TRIO}},
  institution = {Dipartimento di Elettronica e Informazione, Politecnico di Milano},
  year = {2004},
  number = {2004.28},
  month = {September}
}