Description
Abstract
While the IR code generator does reuse the IR of the contracts that the current contract depends on (via `new`, `.runtimeCode` or `.creationCode`), it's only the unoptimized IR that is reused. There is no reuse of optimized IR, which means that we're reoptimizing dependencies (and their dependencies, recursively) from scratch every time.
Motivation
This is likely a significant slowdown for contracts that contain a lot of bytecode dependencies.
Benchmarking is needed to determine how much of an effect this has on compilation times in practice, but we have already established that some popular projects do have a lot of bytecode dependencies (especially when using the Foundry test framework), so some attempt at code reuse here is expected to be beneficial.
Details
As can be seen in `CompilerStack`, the compiler takes the unoptimized IR of all contracts and later passes it into `IRGenerator`:

`solidity/libsolidity/interface/CompilerStack.cpp`, lines 1519 to 1520 in `8a97fa7`
`IRGenerator` selects the sources of the contract's bytecode dependencies and embeds their unoptimized IR as subobjects of the contract being compiled.
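For illustration, here is a minimal hand-written sketch (not actual compiler output; `C` and `D` are hypothetical contracts) of the resulting Yul object tree. The dependency's unoptimized IR ends up as a subobject, and the parent's code refers to it only through the data builtins:

```yul
// Sketch only: creation object of a hypothetical contract C that deploys
// a hypothetical dependency D (e.g. via `new D()`).
object "C" {
    code {
        // Deploy the dependency. The only coupling to D is through the data
        // builtins, which are resolved against the embedded subobject "D".
        datacopy(0, dataoffset("D"), datasize("D"))
        sstore(0, create(0, 0, datasize("D")))

        // Return C's own runtime code.
        datacopy(0, dataoffset("C_deployed"), datasize("C_deployed"))
        return(0, datasize("C_deployed"))
    }
    object "C_deployed" {
        // Runtime code of C (elided).
        code { stop() }
    }
    // Unoptimized IR of the dependency, embedded verbatim as a subobject.
    // This whole subtree is re-optimized every time C is compiled.
    object "D" {
        code {
            datacopy(0, dataoffset("D_deployed"), datasize("D_deployed"))
            return(0, datasize("D_deployed"))
        }
        object "D_deployed" {
            // Runtime code of D (elided).
            code { stop() }
        }
    }
}
```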
The combined object is then optimized as a whole, without an attempt at reusing the already optimized IR of other contracts:

`solidity/libsolidity/interface/CompilerStack.cpp`, line 1573 in `8a97fa7`
Possible solutions
A quick and easy way to address this at the Yul IR level would be to modify `YulStack::optimize()` to receive the optimized IR of other contracts and substitute it whenever it encounters a corresponding subobject.

The downside of this, however, is that it would only address IR reuse; we'd still be doing the Yul->EVM transform separately for each subobject. It also breaks encapsulation by having `YulStack` assume that the unoptimized subobject really comes from the bytecode dependency and has not been modified between code generation and optimization.
A better approach might be to defer subobject embedding and introduce a linking stage. We could have `IRGenerator` generate code only for the current contract and insert the dependency's code later. The optimizer already works on each assembly separately, and knowledge about other assemblies is abstracted away using builtins like `datasize()`/`dataoffset()`/`datacopy()`, so this should be feasible.
The upside of this solution is that linking could be done even at the bytecode level, reusing the results of the Yul->EVM transform and maybe even EVM assembly optimization. The downside is that the change is more invasive and we also have to prepare the compiler for dealing with unlinked (i.e. incomplete) bytecode in most of the pipeline. To avoid showing such incomplete artifacts to the user we'd also need to be prepared to do some rudimentary linking at any stage where output can be requested.
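One way to picture the bytecode-level variant at the Yul object level is a minimal sketch along these lines (hypothetical names; the hex literal is just a placeholder for the dependency's real, already-assembled creation bytecode):

```yul
// Sketch only: the dependency is attached as raw, finished bytecode
// instead of as a Yul subobject containing its IR.
object "C" {
    code {
        // Deploying the dependency works exactly as before: datasize(),
        // dataoffset() and datacopy() resolve against the data item "D".
        datacopy(0, dataoffset("D"), datasize("D"))
        sstore(0, create(0, 0, datasize("D")))

        datacopy(0, dataoffset("C_deployed"), datasize("C_deployed"))
        return(0, datasize("C_deployed"))
    }
    object "C_deployed" {
        code { stop() }
    }
    // Placeholder for D's assembled creation bytecode, inserted by the
    // proposed linking stage.
    data "D" hex"fe"
}
```

Before linking, the `"D"` payload simply does not exist yet, which is exactly the kind of incomplete artifact most of the pipeline would have to tolerate.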
Backwards Compatibility
This should be completely transparent to the users, unless we decide to cut corners, e.g. by outputting unlinked artifacts at intermediate stages.