Journal Article

Escape Analysis for Java (TM): Theory and Practice

MPS-Authors

Blanchet, Bruno
Static Analysis, MPI for Informatics, Max Planck Society

Citation

Blanchet, B. (2003). Escape Analysis for Java (TM): Theory and Practice. ACM Transactions on Programming Languages and Systems, 25(6), 713-775.


Cite as: https://hdl.handle.net/11858/00-001M-0000-000F-2EA0-5
Abstract
Escape analysis is a static analysis that determines whether the lifetime of data may exceed its static scope. This paper first presents the design and correctness proof of an escape analysis for Java (TM). This analysis is interprocedural, context sensitive, and as flow sensitive as the static single assignment form: local variables are therefore handled flow sensitively, whereas assignments to object fields are analyzed in a flow-insensitive manner. Since Java is an imperative language, the effect of assignments must be precisely determined. This goal is achieved thanks to our technique using two interdependent analyses, one forward, one backward. We introduce a new method to prove the correctness of this analysis, using aliases as an intermediate step. We use integers to represent the escaping parts of values, which leads to a fast and precise analysis. Our implementation, which applies to the whole Java language, is then presented. Escape analysis is applied to stack allocation and synchronization elimination. In our benchmarks, we stack allocate 13% to 95% of data, eliminate more than 20% of synchronizations on most programs (94% and 99% on two examples), and get up to 43% runtime decrease (21% on average). Our detailed experimental study on large programs shows that the improvement comes more from the decrease of garbage collection and allocation times than from improvements in data locality, contrary to what happened for ML. This comes from the difference in the garbage collectors.
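
To illustrate the optimizations mentioned in the abstract, here is a minimal Java sketch (not taken from the paper) of an object whose lifetime provably does not exceed its static scope. Assuming the analysis proves the object non-escaping, a compiler may allocate the StringBuffer on the stack and remove the locking performed inside its synchronized methods; the class and method names below are hypothetical.

public class EscapeExample {
    static String format(int x, int y) {
        // 'sb' is reachable only through this local variable: it is never
        // stored in a field, passed to a method that lets it escape, or
        // returned, so it does not outlive format().
        StringBuffer sb = new StringBuffer();
        sb.append('(').append(x).append(", ").append(y).append(')');
        // Only the String produced by toString() is handed back to the
        // caller; the StringBuffer object itself stays local.
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(format(3, 4)); // prints "(3, 4)"
    }
}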