I created a tiered just-in-time (JIT) compiler that converts Modelverse bytecode graphs to Python code.
Some highlights:

- The JIT is implemented as a new Modelverse kernel that serves as a drop-in replacement for the reference Modelverse kernel.
- A classic whole-function compilation strategy is used.
- To execute mutable functions, the new kernel can fall back to the reference interpreter.
- The JIT consists of three tiers: a fast bytecode interpreter, a baseline JIT compiler, and an optimizing JIT compiler that generates faster code.
- The new kernel tries to pick tiers for functions in a way that minimizes the sum of function run-times and compile-times. Initially, a heuristic is used to pick a tier for each function. Repeated calls to a given function may then prompt the kernel to re-compile it with a tier that generates faster code.
- The JIT kernel is approximately 37 times faster than the reference kernel for typical Modelverse workloads.
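To make the tier-selection idea above concrete, here is a minimal sketch of call-count-driven tier promotion. The class name, tier names, and thresholds are all illustrative assumptions, not the Modelverse kernel's actual heuristic or values:

```python
# Hypothetical sketch of tiered recompilation driven by call counts.
# Thresholds and tier names are made up for illustration.
BASELINE_THRESHOLD = 10      # calls before promotion to the baseline JIT
OPTIMIZING_THRESHOLD = 100   # calls before promotion to the optimizing JIT

class FunctionState:
    """Tracks how often a function runs and which tier last compiled it."""
    def __init__(self):
        self.call_count = 0
        self.tier = "interpreter"  # start in the cheapest tier

    def on_call(self):
        """Record a call; promote to a faster tier once recompiling pays off."""
        self.call_count += 1
        if self.tier == "interpreter" and self.call_count >= BASELINE_THRESHOLD:
            self.tier = "baseline"    # re-compile with the baseline JIT
        elif self.tier == "baseline" and self.call_count >= OPTIMIZING_THRESHOLD:
            self.tier = "optimizing"  # re-compile with the optimizing JIT
```

Rarely-called functions stay in the cheap interpreter tier, so their compile-time cost is never paid; hot functions eventually justify the optimizing compiler's higher compile time.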
The JIT has been merged into the master branch of the Modelverse repository! I also wrote a report on the JIT. To inspect the version of the JIT on which my report is based, visit the 'jit' branch of my Modelverse fork.