Introduction
The C programming language draws a line between compile time and run time: you compile your program to convert it into machine code, which can then be run. C++ crosses this line via template metaprogramming techniques that use the compiler to “run” meta-programs. Dynamic programming languages (e.g., Python) cross the line in the other direction by allowing runtime generation of functions (“higher-level functions”).
The following is a proof-of-concept showing that higher-level functions can be implemented in C++ using the LLVM Compiler Infrastructure, and how they can be useful for runtime code optimization.
Background
Static Code Optimization
A C++ compiler can perform quite sophisticated code optimizations. However, how much optimization can be done depends on what the compiler knows about the code.
For example, one trivial optimization is “strength reduction”. Generally, when faced with code such as:
int divide(int x, int z) {
    return x / z;
}
the compiler has no choice, and will use an expensive division instruction. However, if more is known about x and z, there are many additional optimizations that can be performed. For example, if we know z at compile time, we can embed the known value in a function (or specialize a function template):
int divide8(int x) {
    return x / 8;
}
Now, the compiler can optimize the division away, replacing it with a cheap right shift:
int divide8_opt(int x) {
    return x >> 3;
}
(Strictly speaking, because C integer division rounds toward zero, the compiler emits the shift plus a small fix-up for negative values – still far cheaper than a division instruction.) This and similar optimizations can increase application performance significantly.
Target Specific Optimization
Additional optimizations depend on the target machine: if you know in advance that the application will run on a Core 2 machine, you can use SSE 4.1 instructions to implement additional optimizations. However, in most cases, applications are expected to run on a wide variety of machines, so a “least-common-denominator” approach is used, emitting Pentium-compatible executables and losing potential performance gains.
Dynamic Code Optimization
If we could defer part of the compilation process to runtime, more optimization opportunities would become available.
First, we could generate code optimized for the actual machine the application is running on – use extensions such as SSE4.1 only when running on a Core 2 machine.
Second, we could create code that depends on values not available at compile time. For example, a packet filter may scan network traffic for specific patterns which are only known at runtime. However, these patterns are relatively static, and change infrequently (see this for an example).
The LLVM Compiler Infrastructure
LLVM is a high-quality, platform-independent compilation infrastructure. It supports compilation to an intermediate “bitcode” representation, and efficient Just-In-Time (JIT) compilation and execution. For a good introduction to both LLVM and writing compilers, see the excellent LLVM tutorials. LLVM provides tools for generating bitcode, which it can then optimize, compile, and execute.
Using the Code
DynCompiler
DynCompiler is a proof-of-concept (read: don’t expect it to work for more than toy problems, have a nice syntax, or do anything useful) of a DSEL for dynamic code compilation, which uses LLVM to do the real work. With DynCompiler, you can create a “higher-level function” – that is, a function that creates other functions.
For example, the following function can create a specific quadratic polynomial (ax² + bx + c) for given coefficients a, b, and c:
typedef int (*FType)(int);

FType build_quad(int a, int b, int c) {
    DEF_FUNC(quad) RETURNS_INT
        ARG_INT(x);
    BEGIN
        DEF_INT(tmp);
        tmp = a*x*x + b*x + c;
        RETURN_INT(tmp);
    END
    return (FType)FPtr;
}
Note that build_quad() returns a function – it is not the quadratic function itself (in the same way that function templates are not “concrete” functions). To create an actual function:
FType f1 = build_quad(1, 2, 1);
This can now be used like any other function:
for(int x = 0; x < 10; x++) {
    std::cout << "f1(" << x << ") = " << f1(x) << std::endl;
}
Syntax
DynCompiler has an ugly syntax – the result of preprocessor limitations and laziness. A function generator has a name and a return type (only “int” and “double” are supported):
DEF_FUNC(name) RETURNS_INT
or:
DEF_FUNC(name) RETURNS_DOUBLE
for a function returning a double. Arguments for the resulting function are provided by:
ARG_INT(x);
or
ARG_DOUBLE(x);
The actual function code starts with a BEGIN:
BEGIN
Local variables can be defined with DEF_INT and DEF_DOUBLE:
DEF_INT(tmp);
You can then use these variables (almost) normally:
tmp = a*x+b;
Note that the code is not evaluated at this point, except for “normal” C++ variables such as a and b. Therefore, if a = 3 and b = 2 at the time this line executes, the code will be generated as:
tmp = 3*x+2;
Note that unused variables, or variables used before initialization, will generate an error. Returning a value from the function is done with:
RETURN_INT(expr);
or
RETURN_DOUBLE(expr);
Note that a function must return a value. The function block ends with:
END
Basic control flow is provided by IF and WHILE:
IF(x > 0)
    IF(y > 0)
        z = x*y;
    IFEND
ELSE
    z = 0;
IFEND
WHILE(z > 0)
    z -= x;
WHILEND
In addition, PRINT(expr) can be used to print to standard output:
PRINT(i);
Finally, after END, FPtr will point to the newly created function. You will need to cast the pointer to the actual function type, though:
f1 = (FType)FPtr;
Running the Code
You will need to download and build LLVM (check this link for Visual Studio-specific instructions). In addition, the DynCompiler code itself requires TR1 support – e.g., Visual Studio 2008 with SP1.
Status
DynCompiler is a proof-of-concept, and should not be taken seriously.