SAS99: Abstracts of Accepted Papers


A New Class of Functions for Abstract Interpretation
by Jörg Köller, Markus Mohnen

Abstract:
In the context of denotational-style abstract interpretation, it is crucial to have an efficient fixed point solver. In general, however, finding a fixed point requires exponential time. One approach to improving efficiency is the use of special classes of functions. A well-known example of such a class is that of additive functions, which allows a reduction to quadratic runtime.
In this paper, we demonstrate that additive functions are not suitable in a higher-order context. To overcome this deficiency, we introduce the class of component-wise additive functions, an extension of the class of additive functions.
Component-wise additive functions allow us to solve many higher-order equation systems in polynomial time. We underline the usefulness of our class by presenting a package for implementing abstract interpretations based on our theoretical results. Furthermore, we present experimental results from a case study on escape analysis that relate our approach to others.
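The cost issue becomes concrete with a naive solver: plain Kleene iteration applies the function until stabilization, which is exactly what the special function classes are meant to speed up. A minimal sketch (the example function and domain are illustrative, not from the paper):

```python
def lfp(f, bottom):
    """Least fixed point of a monotone f over a finite domain,
    computed by plain Kleene iteration from the bottom element."""
    x = bottom
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

# A monotone function on the chain 0 < 1 < ... < 5; iteration
# climbs until it stabilizes at 5.
assert lfp(lambda n: min(n + 1, 5), 0) == 5
```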


Translating Out of Static Single Assignment Form
by Vugranam C. Sreedhar, Roy Dz-ching Ju, David M. Gillies, Vatsa Santhanam

Abstract:
Programs represented in Static Single Assignment (SSA) form contain phi instructions (or functions) whose operational semantics is to merge values coming from distinct control flow paths. However, translating phi instructions into native instructions is non-trivial when transformations such as copy propagation and code motion have been performed. In this paper we present a new framework for translating out of SSA form. We introduce the notion of the phi congruence class to facilitate the elimination of phi instructions. By appropriately placing copy instructions, we ensure that none of the resources in a phi congruence class interfere. Within our framework, we propose three methods for copy placement. The first method pessimistically places copies for all operands of phi instructions. The second method uses an interference graph to guide copy placement. The third method uses both data flow liveness sets and an interference graph to guide copy placement. We also present a new SSA-based coalescing method that can selectively remove redundant copy instructions with interfering operands. Our experimental results indicate that the third method results in 35% fewer copy instructions than the second method. Compared to the first method, the third method on average inserts 89.88% fewer copies during copy placement and runs 15% faster, yielding significant reductions in compilation time and space.
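The first (pessimistic) method can be sketched directly: for every phi, introduce a fresh name, copy each operand into it at the end of the corresponding predecessor block, and copy it into the target after the merge, so phi resources never interfere. A toy version (the instruction encoding is mine, not the paper's):

```python
def pessimistic_copies(phis):
    """phis: list of (target, [(operand, pred_block), ...]).
    Returns (copies, merges): copies maps each predecessor block to
    the 'fresh = operand' copies appended to it; merges lists the
    'target = fresh' copies that replace each phi at the merge point."""
    copies, merges = {}, []
    for tgt, operands in phis:
        fresh = tgt + "'"                       # fresh resource per phi
        for op, blk in operands:
            copies.setdefault(blk, []).append((fresh, op))
        merges.append((tgt, fresh))
    return copies, merges

copies, merges = pessimistic_copies([("x", [("a", "B1"), ("b", "B2")])])
assert copies == {"B1": [("x'", "a")], "B2": [("x'", "b")]}
assert merges == [("x", "x'")]
```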


State Space Reduction based on Live Variables Analysis
by Marius Bozga, Jean-Claude Fernandez

Abstract:
The intrinsic complexity of most protocol specifications in particular, and of asynchronous systems in general, leads us to study combinations of static analysis with classical model-checking techniques as a way to enhance the performance of automated validation tools.
The goal of this paper is to point out that an equivalence on our model derived from live-variable information is stronger than strong bisimulation. This equivalence, hereafter called live bisimulation, exploits the unused dead values stored in variables or in queue contents and allows the state space to be simplified by a substantial factor. Furthermore, this reduction comes almost for free, and it is always possible to generate the quotient model directly without generating the initial one.
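The reduction can be pictured as follows: before storing a state, reset every variable that is dead at the current control point to a fixed canonical value, so states differing only in dead values collapse into one representative. A sketch under that reading (the state encoding is illustrative):

```python
def representative(control, state, live_at):
    """Collapse a state to its live-bisimulation representative:
    variables dead at this control point (not in live_at[control])
    are reset to the canonical value None."""
    live = live_at[control]
    return (control, tuple(sorted(
        (v, val if v in live else None) for v, val in state.items())))

live_at = {"loc1": {"x"}}
s1 = representative("loc1", {"x": 1, "y": 7}, live_at)
s2 = representative("loc1", {"x": 1, "y": 9}, live_at)
assert s1 == s2          # the two states merge in the quotient model
```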


Binding-Time Analysis for Both Static and Dynamic Expressions
by Kenichi Asai

Abstract:
This paper presents a specializer and a binding-time analyzer for a functional language in which expressions may be used as both static and dynamic. With such expressions, we can access data structures statically while residualizing them at the same time. Previously, such data structures were treated as completely dynamic, which prevented us from accessing their components statically. The technique presented in this paper effectively allows us to lift data structures, which was prohibited in conventional partial evaluators. The binding-time analysis is formalized as a type system, and the solution is obtained by solving the constraints generated by the type system. We prove the correctness of the constraint-solving algorithm and show that the algorithm runs efficiently, in almost linear time.
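The almost-linear bound typically comes from solving the generated constraints with union-find: equality constraints merge binding-time variables, and "dynamic" propagates along pending implications. A sketch of that standard technique (not the paper's exact algorithm; the constraint forms are my simplification):

```python
class BTASolver:
    """Equality and implication constraints over binding-time
    variables, solved with union-find; a variable stays static
    unless some constraint forces it dynamic."""
    def __init__(self):
        self.parent, self.dynamic, self.deps = {}, set(), {}

    def find(self, v):
        self.parent.setdefault(v, v)
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def equate(self, a, b):                 # constraint: a = b
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        self.parent[ra] = rb
        self.deps.setdefault(rb, []).extend(self.deps.pop(ra, []))
        if ra in self.dynamic or rb in self.dynamic:
            self.force_dynamic(rb)

    def implies(self, a, b):                # constraint: dyn(a) => dyn(b)
        ra = self.find(a)
        if ra in self.dynamic:
            self.force_dynamic(b)
        else:
            self.deps.setdefault(ra, []).append(b)

    def force_dynamic(self, v):
        r = self.find(v)
        self.dynamic.add(r)
        for w in self.deps.pop(r, []):      # fire pending implications
            self.force_dynamic(w)

    def is_dynamic(self, v):
        return self.find(v) in self.dynamic

s = BTASolver()
s.implies("x", "y")       # if x is dynamic, y must be too
s.equate("x", "input")
s.force_dynamic("input")  # program input is dynamic
assert s.is_dynamic("y") and not s.is_dynamic("z")
```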


Abstract Interpretation of Mobile Ambients
by Rene Rydhof Hansen, Jacob Grydholt Jensen, Flemming Nielson, Hanne Riis Nielson

Abstract:
We demonstrate that abstract interpretation is useful for analysing calculi of computation such as the Mobile Ambients calculus (which is based on the pi-calculus); more importantly, we show that the entire development can be expressed in a constraint-based formalism that is becoming increasingly popular for the analysis of functional and object-oriented languages.
The first step of the development is the collecting semantics (for which we show semantic correctness and that solutions constitute a Moore family); the second step is an analysis for counting occurrences of processes inside other processes and this analysis is induced from the collecting semantics (and its properties are derived from those of the collecting semantics using general results); the third step is a previously developed control flow analysis that we show how to induce from the counting analysis (thereby once more deriving its properties using general results).


Equational Semantics
by Loic Correnson, Etienne Duris, Didier Parigot, Gilles Roussel

Abstract:
In the context of functional programming, semantic methods are commonly used to drive program transformations. However, classical semantic domains often rely on recursive objects which embed the control flow of recursive functions. As a consequence, transformations which have to modify the control flow are difficult to define. We propose in this paper a new semantic domain where the control flow is defined implicitly, and thus can be modified. This new theoretical and practical framework allows us to define and extend, in a homogeneous way, powerful transformations related to partial evaluation and deforestation.
Keywords: semantics, program transformation, partial evaluation, deforestation.
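As a toy illustration of the kind of transformation targeted: deforestation fuses a producer with its consumer so that the intermediate structure never exists. In Python terms (this example is mine; the paper works in a functional setting):

```python
def map_map(f, g, xs):
    # naive composition: builds an intermediate list
    return [f(y) for y in [g(x) for x in xs]]

def fused(f, g, xs):
    # deforested: the intermediate list is gone
    return [f(g(x)) for x in xs]

xs = [1, 2, 3]
inc, dbl = (lambda n: n + 1), (lambda n: 2 * n)
assert map_map(inc, dbl, xs) == fused(inc, dbl, xs) == [3, 5, 7]
```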


Partial Dead Code Elimination using Extended Value Graph
by Munehiro Takimoto, Kenichi Harada

Abstract:
This paper presents an efficient and effective code-generation algorithm for eliminating partially dead assignments, i.e., assignments that become redundant on execution of specific program paths. This is one of the most aggressive compiling techniques, subsuming invariant code motion out of loop bodies. Since the traditional techniques proposed for this optimization produce second-order effects, such as sinking-sinking effects, they must be applied repeatedly to eliminate dead code completely, at a higher computation cost. Furthermore, they are restricted in that assignments sunk to a join point, i.e., a program point with more than one predecessor, must be lexically identical.
The technique proposed here can eliminate more dead assignments, without the restriction at join points, using an explicit representation of the data dependence relations within a program in SSA (Static Single Assignment) form. This representation, called the Extended Value Graph (EVG), exposes the computationally equivalent structure among assignments before and after moving them on the control flow graph. Because the EVG captures the second-order effects as main effects, a single application of the technique yields the final result directly.
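On straight-line code the baseline is a single backward liveness pass; the difficulties the paper addresses (join points, sinking, second-order effects) begin where this sketch stops. Assuming assignments are (target, used-variables) pairs:

```python
def eliminate_dead(block, live_out):
    """One backward liveness pass over a basic block of
    (target, used_vars) assignments: an assignment whose target is
    not live below it is dead and gets dropped; cascading deaths are
    caught because the block is scanned in reverse."""
    live = set(live_out)
    kept = []
    for tgt, uses in reversed(block):
        if tgt in live:
            live.discard(tgt)
            live.update(uses)
            kept.append((tgt, uses))
    kept.reverse()
    return kept

# Only x is demanded at block exit; t, u and d feed nothing live,
# so the whole chain disappears in one pass.
block = [("t", ["a"]), ("u", ["t"]), ("d", ["u"]), ("x", ["b"])]
assert eliminate_dead(block, {"x"}) == [("x", ["b"])]
```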


Abstracting Cryptographic Protocols with Tree Automata
by David Monniaux

Abstract:
Cryptographic protocols have so far been analyzed for the most part by means of testing (which yields no proof of secrecy) and theorem proving (which is costly). We propose a new approach, based on the abstract domain of regular tree languages. While the abstraction we use seems fine-grained enough to certify some protocols, there are still complexity issues to be solved. Both the formal semantics and implementation issues are discussed in the paper.
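The abstract domain can be pictured as a bottom-up tree automaton over message terms: running the automaton on a term tells the analysis which class the term falls into. A toy sketch (the states and rules are invented for illustration):

```python
def run(rules, tree):
    """Bottom-up tree automaton: rules map (symbol, child states)
    to a state; a term is evaluated children-first."""
    sym, children = tree
    return rules[(sym, tuple(run(rules, c) for c in children))]

# Tiny rule set: a nonce is secret, but encrypting it under a known
# key yields a message the attacker may see.
rules = {("nonce", ()): "secret",
         ("key", ()): "k",
         ("enc", ("k", "secret")): "public"}
term = ("enc", (("key", ()), ("nonce", ())))
assert run(rules, term) == "public"
```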


Freeness Analysis Through Linear Refinement
by Patricia Hill, Fausto Spoto

Abstract:
Linear refinement is a technique that allows the systematic construction of abstract domains for program analysis directly from a basic domain representing just the property of interest. This paper for the first time uses linear refinement to construct a new domain instead of just reconstructing an existing one. This domain is designed for definite freeness analysis and consists of sets of dependencies between sets of variables. We provide explicit definitions of the operations for this domain which we show to be safe with respect to the concrete operations. We illustrate how the domain may be used in a practical analysis by means of a small example.


Decomposing Non-Redundant Sharing by Complementation
by Enea Zaffanella, Patricia M. Hill, Roberto Bagnara

Abstract:
Complementation, the inverse of the reduced product operation, is a relatively new technique for systematically finding minimal decompositions of abstract domains. Filé and Ranzato advanced the state of the art by introducing a simple method for computing a complement. As an application, they considered the extraction by complementation of the pair-sharing domain PS from Jacobs and Langen's set-sharing domain SH. However, since the result of this operation was still SH, they concluded that PS was too abstract for this purpose. Here, we show that the source of this difficulty lies not with PS but with SH and, more precisely, with the redundant information contained in SH with respect to ground-dependencies and pair-sharing. In fact, the difficulties vanish if our non-redundant version of SH, SH^\rho, is substituted for SH. To establish the results for SH^\rho, we define a general schema for subdomains of SH that includes SH^\rho and Def as special cases. This sheds new light on the structure of SH^\rho and exposes a natural though unexpected connection between Def and SH^\rho. Moreover, we substantiate the claim that complementation alone is not sufficient to obtain truly minimal decompositions of domains. The right solution to this problem is to first remove redundancies by computing the quotient of the domain with respect to the observable behavior, and only then decompose it by complementation.
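For readers unfamiliar with the two domains: an SH element records which sets of variables may share a common variable, while PS keeps only the pairwise projection of that information. A sketch of the abstraction from SH to PS (the encodings are mine):

```python
def pairs_of(sh):
    """sh: a set-sharing element, i.e. a set of sharing groups
    (frozensets of variables). Returns its pair-sharing projection:
    the pairs of distinct variables occurring together in a group."""
    return {(a, b) for group in sh for a in group for b in group if a < b}

sh = {frozenset({"x", "y"}), frozenset({"x", "y", "z"}), frozenset({"w"})}
assert pairs_of(sh) == {("x", "y"), ("x", "z"), ("y", "z")}
```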


Binary Decision Graphs
by Laurent Mauborgne

Abstract:
Binary Decision Graphs are an extension of Binary Decision Diagrams that can represent some infinite boolean functions. Three classes of infinite functions of increasing complexity are presented. The first is closed under intersection and union, the second under intersection, and the last under all boolean operations. The first two classes give rise to a canonical representation which, when restricted to finite functions, coincides with classical BDDs. The paper also gives new insights into the notion of variable names, and into the possibility of sharing variable names, which can be of interest in the case of finite functions as well.
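Canonicity in the finite case rests on two mechanisms: hash-consing (structurally identical nodes are shared) and the redundant-test rule (a node whose branches coincide is collapsed), so equality of functions becomes identity of nodes. A minimal sketch of that mechanism for finite BDDs only, not the infinite classes of the paper:

```python
class Node:
    """A reduced, shared decision node: hash-consing plus the
    redundant-test rule yield canonical (finite) BDDs."""
    _table = {}

    def __new__(cls, var, low, high):
        if low is high:                    # test is redundant: skip it
            return low
        key = (var, id(low), id(high))
        node = cls._table.get(key)
        if node is None:
            node = super().__new__(cls)
            node.var, node.low, node.high = var, low, high
            cls._table[key] = node         # share structurally equal nodes
        return node

TRUE, FALSE = object(), object()           # terminal nodes
x = Node("x", FALSE, TRUE)                 # the function 'x'
assert Node("x", FALSE, TRUE) is x         # sharing: the very same node
assert Node("y", x, x) is x                # reduction: 'y' is irrelevant
```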


Eliminating Unnecessary Synchronization from Java Programs
by Jonathan Aldrich, Craig Chambers, Emin Gun Sirer, Susan Eggers

Abstract:
This paper presents and evaluates a set of analyses designed to reduce synchronization overhead in Java programs. Monitor-based synchronization in Java programs often incurs a large overhead, accounting for up to 39% of total execution time in our benchmark applications. To reduce this overhead, programmers often try to eliminate unnecessary lock operations by hand. Such manual optimizations are tedious, error-prone, and often result in poorly structured programs. In this paper, we describe a family of static analyses that automatically find and remove unnecessary synchronization from Java programs. Our analyses optimize cases where a monitor is entered multiple times by a single thread, where one monitor is nested within another, and where a monitor is accessible by only one thread. A prototype implementation of a subset of our analyses improves performance by up to 5% on a set of hand-optimized benchmarks. Thus, our automated analyses have the potential to significantly improve the performance of Java applications while enabling programmers to design simpler and more reusable multithreaded code.
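The first of the three cases (a monitor re-entered by the thread that already holds it) can be sketched as a rewrite over a well-nested trace of lock events for one thread; the paper's analyses establish such facts statically rather than on traces. A toy version (the event encoding is mine):

```python
def elide_reentrant(trace):
    """trace: well-nested list of ('enter', lock), ('exit', lock) or
    ('op', x) events for one thread. Enter/exit pairs on a lock the
    thread already holds are elided as redundant."""
    held, elided, out = [], [], []
    for kind, arg in trace:
        if kind == "enter" and arg in held:
            elided.append(arg)            # reentrant: drop the enter
            continue
        if kind == "exit" and elided and elided[-1] == arg:
            elided.pop()                  # drop the matching exit
            continue
        if kind == "enter":
            held.append(arg)
        elif kind == "exit":
            held.remove(arg)
        out.append((kind, arg))
    return out

trace = [("enter", "m"), ("enter", "m"), ("op", "x"),
         ("exit", "m"), ("exit", "m")]
assert elide_reentrant(trace) == [("enter", "m"), ("op", "x"), ("exit", "m")]
```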


Dynamic partitioning in numerical analysis
by Bertrand Jeannet, Nicolas Halbwachs, Pascal Raymond

Abstract:
We apply linear relation analysis to the verification of declarative synchronous programs. In this approach, state partitioning plays an important role: on the one hand, the precision of the results depends highly on the fineness of the partitioning; on the other hand, an overly detailed partitioning may cause an exponential explosion of the analysis. In this paper, we propose to dynamically select a suitable partitioning according to the property to be proved.


Timed polyhedra analysis for synchronous languages
by Frédéric Besson, Thomas Jensen, Jean-Pierre Talpin

Abstract:
We define an operational semantics for the Signal language and design an analysis that allows us to verify properties pertaining to the relation between the values of the numeric and boolean variables of a reactive system. A distinguishing feature of the analysis is that it works on the source program directly, without having to build an intermediate representation of the semantics of the program. The analysis calculates a safe approximation to the set of reachable states by a symbolic fixed-point computation in the domain of convex polyhedra, using a novel widening operator based on the convex hull representation of polyhedra.
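The role of widening is easiest to see in one dimension: a bound that grew since the previous iterate jumps straight to infinity, forcing the fixed-point iteration to terminate. The paper's operator works over convex polyhedra via their convex-hull representation; this interval version (mine) only shows the principle:

```python
def widen(a, b):
    """Standard interval widening: keep a bound that is stable,
    push a bound that grew to infinity."""
    (al, ah), (bl, bh) = a, b
    lo = al if al <= bl else float("-inf")
    hi = ah if ah >= bh else float("inf")
    return (lo, hi)

# Abstracting 'i = 0; while ...: i += 1': the loop-head interval
# stabilizes after one widening instead of growing forever.
x = (0, 0)
for _ in range(3):
    step = (0, x[1] + 1)                           # one more iteration
    joined = (min(x[0], step[0]), max(x[1], step[1]))
    x = widen(x, joined)
assert x == (0, float("inf"))
```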


Abstract animator for temporal specifications: Application to TLA
by Dominique Cansell, Dominique Méry

Abstract:
In this paper, we explain how we use abstract interpretation to analyse temporal specifications in TLA+. An analysis is obtained by building a predicate behavior that checks the specification. Abstract interpretation allows us to pass from a concrete world to an abstract, generally finite, one. Using abstract interpretation, we build abstract predicate behaviors and, if the abstract interpretation is sufficiently powerful and expressive, we can in general build a finite graph of abstract predicates with which to analyse a temporal specification. TLA/TLA+ is based on an untyped framework, namely ZF set theory, and we show how abstract interpretation fits this untyped setting and eases the analysis of temporal specifications.


A Formal Study of Slicing for Multi-threaded Programs with JVM Concurrency Primitives
by John Hatcliff, James Corbett, Matthew Dwyer, Stefan Sokolowski, Hongjun Zheng

Abstract:
Previous work has shown that program slicing can be a useful step in model-checking software systems. We are interested in applying these techniques to construct models of multi-threaded Java programs. The slicing literature contains some work on slicing multi-threaded programs, but this past work does not address the concurrency primitives found in Java, nor does it provide the rigorous notions of slice correctness that are necessary for reasoning about programs with non-deterministic behaviour and potentially infinite computation traces.
In this paper, we define the semantics of a simple multi-threaded language with concurrency primitives matching those found in the Java Virtual Machine. Using this semantics, we propose a bisimulation-based notion of correctness for slicing in this setting. Building on existing notions of dependency in the slicing literature, we identify additional dependencies that are relevant for slicing multi-threaded Java programs, and we use these dependencies to specify a program slicer for the language presented in the paper. Finally, we discuss how these dependencies can be refined to take into account common programming idioms of concurrent Java software.
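Once all the dependencies (data, control, and the thread-related ones the paper adds, such as interference and synchronization dependencies) are gathered into one graph, slicing itself is graph reachability from the criterion. A sketch:

```python
def backward_slice(deps, criterion):
    """deps maps each statement to the statements it depends on;
    the slice is everything the criterion transitively depends on,
    computed as reachability in the dependence graph."""
    seen, work = set(), [criterion]
    while work:
        s = work.pop()
        if s not in seen:
            seen.add(s)
            work.extend(deps.get(s, ()))
    return seen

# 1: x = input(); 2: y = x + 1; 3: print(y); 4: z = 0 (irrelevant)
deps = {2: [1], 3: [2]}
assert backward_slice(deps, 3) == {1, 2, 3}
```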


Detecting Equalities of Variables: Combining Efficiency with Precision
by Oliver Ruething, Jens Knoop, Bernhard Steffen

Abstract:
Detecting whether different variables produce the same value at a program point is in general undecidable. Therefore, the detection of equalities is usually restricted to those that are independent of the interpretation of operators (Herbrand equivalences). Furthermore, the most prominent technique, known as value numbering and implemented in many compilers, is restricted to basic blocks.
Globalizations of value numbering come in two flavours. Approaches based on the work of Kildall use abstract interpretation to gather information on value equalities among variables; these approaches are complete in detecting Herbrand equivalences but expensive in terms of computational complexity. On the other hand, there are algorithms, like that of Alpern, Wegman and Zadeck, that do not fully interpret the control flow; these are particularly efficient but significantly less precise than their Kildall-like counterparts. In this paper we discuss how to combine the best features of both algorithms by trading computational complexity against precision.
We propose an extension of the algorithm of Alpern, Wegman and Zadeck that is almost as efficient as the original while, for acyclic control flow, being as precise as Kildall's, i.e., detecting all Herbrand equivalences. We conjecture that this result also holds for arbitrary control flow. This would provide the first polynomial-time algorithm for the detection of Herbrand equivalences.
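Within a basic block, the classical technique looks like this: operands get value numbers, and an expression whose operator and operand numbers have been seen before is Herbrand-equivalent to the earlier result. A sketch of local value numbering (the global algorithms discussed above extend this idea across the control flow graph):

```python
def value_number(block):
    """Local value numbering for a basic block of
    (dest, op, arg1, arg2) tuples. Detects Herbrand equivalences:
    same operator applied to operands with equal value numbers."""
    vn = {}              # variable -> value number
    table = {}           # (op, vn1, vn2) -> value number
    canon = {}           # value number -> first variable holding it
    equal_to = {}        # dest -> earlier variable with the same value
    fresh = iter(range(10**9))
    for dest, op, a, b in block:
        key = (op, vn.setdefault(a, next(fresh)), vn.setdefault(b, next(fresh)))
        if key in table:
            vn[dest] = table[key]
            equal_to[dest] = canon[table[key]]
        else:
            n = next(fresh)
            table[key], vn[dest], canon[n] = n, n, dest
    return equal_to

# x = a + b; y = a + b  ==>  y is detected equal to x
assert value_number([("x", "+", "a", "b"),
                     ("y", "+", "a", "b")]) == {"y": "x"}
```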


Eliminating Dead Code on Recursive Data
by Yanhong A. Liu, Scott D. Stoller

Abstract:
This paper describes a general and powerful method for dead code analysis and elimination in the presence of recursive data constructions. We represent partially dead recursive data using liveness patterns based on general regular tree grammars extended with the notions of live and dead, and we formulate the analysis as computing liveness patterns at all program points based on program semantics. This analysis yields a most precise liveness pattern for the data at each program point, which is significantly more precise than the results of previous methods. The analysis algorithm takes time cubic in the size of the program in the worst case, but is very efficient in practice, as shown by our prototype implementation. The analysis results are used to identify and eliminate dead code. The general framework for representing and analyzing properties of recursive data structures using general regular tree grammars applies to other analyses as well.
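Liveness patterns can be pictured as trees over D (dead), L (fully live), and component-wise patterns, joined across all uses at each program point. A toy join for pair data (my simplification; the paper uses general regular tree grammars, which also cover recursive types):

```python
def join(p, q):
    """Join two liveness patterns over pairs: 'D' (dead), 'L' (fully
    live), or a (left, right) tuple of component patterns."""
    if p == 'D':
        return q
    if q == 'D':
        return p
    if p == 'L' or q == 'L':
        return 'L'
    return (join(p[0], q[0]), join(p[1], q[1]))

# One use demands only the first component; a later use demands the
# first part of the second. Joined, the data is still not fully live.
assert join(('L', 'D'), ('D', ('L', 'D'))) == ('L', ('L', 'D'))
```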