It would be useful if we could execute code on the CPU, both for testing and to extend the usability of this package. Regular execution should be pretty easy:
using GPUCompiler, LLVM
## runtime implementation
module NativeRuntime
# FIXME: actually implement these
signal_exception() = return
malloc(sz) = return
report_oom(sz) = return
report_exception(ex) = return
report_exception_name(ex) = return
report_exception_frame(idx, func, file, line) = return
end
## target
struct NativeCompilerTarget <: AbstractCompilerTarget
end
GPUCompiler.runtime_module(::NativeCompilerTarget) = NativeRuntime
GPUCompiler.llvm_triple(::NativeCompilerTarget) = Sys.MACHINE
## job
struct NativeCompilerJob <: AbstractCompilerJob
    target::NativeCompilerTarget
    source::FunctionSpec
end
Base.similar(job::NativeCompilerJob, source::FunctionSpec) =
    NativeCompilerJob(job.target, source)
GPUCompiler.target(job::NativeCompilerJob) = job.target
GPUCompiler.source(job::NativeCompilerJob) = job.source
GPUCompiler.runtime_slug(::NativeCompilerJob) = "native"
## main
# dummy kernel to compile and execute
function kernel()
end
function run(mod::LLVM.Module, entry::LLVM.Function)
    LLVM.JIT(mod) do engine
        # look up the compiled entry point in the execution engine and run it
        f = LLVM.functions(engine)[LLVM.name(entry)]
        res = LLVM.run(engine, f)
        LLVM.dispose(res)
    end
    return
end
function main()
    target = NativeCompilerTarget()
    source = FunctionSpec(kernel)
    job = NativeCompilerJob(target, source)
    mod, entry = GPUCompiler.compile(:llvm, job)
    run(mod, entry)
end
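With those definitions, executing a trivial kernel on the host should come down to calling main(), which compiles kernel() for Sys.MACHINE and runs it through the JIT.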
Left to implement is the runtime; we could print by e.g. linking the C runtime and calling printf, as sketched below.
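For example, the FIXME stubs above could simply forward to libc (a rough sketch; whether these ccalls and string literals make it through GPUCompiler's codegen unmodified is untested):
# hypothetical runtime hooks that forward to the host C library
malloc(sz) = ccall(:malloc, Ptr{Cvoid}, (Csize_t,), sz)
report_oom(sz) = ccall(:printf, Cint, (Cstring,), "ERROR: out of memory\n")
report_exception(ex) = ccall(:printf, Cint, (Cstring,), "ERROR: an exception was thrown\n")
report_exception_name(ex) = ccall(:printf, Cint, (Cstring,), "ERROR: an exception was thrown:\n")
report_exception_frame(idx, func, file, line) = ccall(:printf, Cint, (Cstring,), "[unknown stack frame]\n")
signal_exception() = return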
. However, it would be vastly more useful if we could actually reuse the full Julia runtime. This should be possible with the LLVM ORC JIT, which supports looking external functions and globals. https://www.doof.me.uk/2017/05/11/using-orc-with-llvms-c-api/
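As a starting point, the Julia runtime's symbols are already loaded into the current process, so an ORC-style resolver callback could hand out their in-process addresses (sketch only; the table and resolve helper below are hypothetical, and hooking this into LLVM.jl's execution engine is the open part):
# addresses of a few known libjulia entry points in the running process
const known_runtime_symbols = Dict{String,Ptr{Cvoid}}(
    "jl_throw"  => cglobal(:jl_throw),
    "jl_error"  => cglobal(:jl_error),
    "jl_malloc" => cglobal(:jl_malloc),
)
# what an ORC symbol-resolution callback would boil down to
resolve(name::String) = get(known_runtime_symbols, name, C_NULL)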