# Graal

How to write fast interpreters on the JVM?

---

# Resources

- Awesome summary: http://chrisseaton.com/truffleruby/jokerconf17/
- Publications: https://github.com/oracle/graal/blob/master/docs/Publications.md

---

# Initial objectives

- Java instead of C++ for the JIT compiler
- Self-optimizing code
- Not having to deal with memory management

---

# How does Graal work?

Interface

```java
interface JVMCICompiler {
    byte[] compileMethod(byte[] bytecode);
}
```

Implementation

```java
class GraalCompiler implements JVMCICompiler {
    public byte[] compileMethod(byte[] bytecode) {
        // compile the bytecode, then...
        HotSpot.installCode(...); // install the machine code
    }
}
```

---

# Modular JIT

```bash
$ java -XX:+EnableJVMCI -XX:+UseJVMCICompiler \
    --module-path=my-module-path my_program.jar
```

Domain-specific JIT optimizations can be loaded as modules.

---

# The Graal Graph

Graal is based on a "sea of nodes": a graph whose nodes are connected by data-flow (blue) and control-flow (red) edges.
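To make this concrete, here is a tiny method together with, in comments, the kind of nodes its Graal graph would contain. The node names follow Graal's IR vocabulary, but the exact graph shape depends on the compiler version; this is an illustration, not compiler output.

```java
// Illustration only: the comments sketch the sea-of-nodes graph Graal
// might build for this method (node names are approximate).
public class GraphExample {
    static int absPlusOne(int x) {
        // Control flow (red):  Start -> If -> Merge -> Return
        // Data flow (blue):    Parameter(x), Negate, Phi(x, -x), Add(Phi, 1)
        if (x < 0) {
            x = -x;
        }
        return x + 1;
    }

    public static void main(String[] args) {
        System.out.println(absPlusOne(-4)); // prints 5
        System.out.println(absPlusOne(4));  // prints 5
    }
}
```

Note that the `Phi` node is where the two control-flow paths merge back into a single data-flow value, which is what lets the compiler reason about data and control flow separately.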
---

# Truffle

## Self-Optimizing AST Interpreters

Classic JVM-based languages work by emitting JVM bytecode. Truffle instead provides a framework for writing interpreters. Such interpreters can be *partially evaluated* by Truffle and turned into a Graal graph.

---

# Graph properties

- Annotated with runtime information
- Specialized for JIT compilation
- **Tree rewriting**

---

# Truffle concepts

Every Truffle concept is defined in Java and can be manipulated with existing Java tools:

- nodes
- interpreter
- language metadata
- stack frames

---

# Truffle provides

.left-column[
## Operation specialization

A single operation can be specialized for its operand types.

## Type decision chain

Specialized operations are called optimistically and can be inlined.
]
.right-column[
## Other

- Boxing
- Specialization by return type
- Local variable specialization
- Field specialization
- Method inlining
]
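The self-optimizing idea above can be sketched without the real Truffle API. The following is a minimal hand-rolled version, assuming a hypothetical `add` operation: an uninitialized node rewrites itself into an int-specialized node on first execution, and falls back to a generic node if it later sees other operand types. In real Truffle, the `@Specialization` DSL generates this machinery for you; all class names here are illustrative.

```java
// Sketch of Truffle-style self-optimization, NOT the Truffle API.
public class SelfOptAdd {
    // The "tree" is a single mutable slot; replacing the node
    // in this slot is the tree rewriting the slides mention.
    static abstract class AddNode {
        abstract Object execute(SelfOptAdd tree, Object l, Object r);
    }

    AddNode impl = new Uninitialized();

    Object add(Object l, Object r) {
        return impl.execute(this, l, r);
    }

    static class Uninitialized extends AddNode {
        Object execute(SelfOptAdd tree, Object l, Object r) {
            // First execution: specialize on the observed operand types.
            tree.impl = (l instanceof Integer && r instanceof Integer)
                    ? new IntAdd()
                    : new GenericAdd();
            return tree.impl.execute(tree, l, r);
        }
    }

    static class IntAdd extends AddNode {
        Object execute(SelfOptAdd tree, Object l, Object r) {
            if (l instanceof Integer && r instanceof Integer) {
                return (Integer) l + (Integer) r;  // monomorphic fast path
            }
            tree.impl = new GenericAdd();          // rewrite to the generic node
            return tree.impl.execute(tree, l, r);
        }
    }

    static class GenericAdd extends AddNode {
        Object execute(SelfOptAdd tree, Object l, Object r) {
            if (l instanceof Integer && r instanceof Integer) {
                return (Integer) l + (Integer) r;
            }
            return String.valueOf(l) + r;          // e.g. string concatenation
        }
    }

    public static void main(String[] args) {
        SelfOptAdd tree = new SelfOptAdd();
        System.out.println(tree.add(1, 2));     // specializes to IntAdd, prints 3
        System.out.println(tree.add("a", "b")); // rewrites to GenericAdd, prints ab
    }
}
```

While `impl` holds an `IntAdd`, the fast path is a single type check plus an int addition, which the partial evaluator can inline and compile; the rewrite to `GenericAdd` is the interpreter-level analogue of deoptimizing a too-optimistic assumption.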