Description
I'm on vacation till Wednesday, but I figured I should ask this question now while it's fresh on my mind:
I'm interested in getting Rust support as soon as possible for a few architectures with the new `gcc` codegen; SuperH would be the highest priority for me. I'd like to try cross-compiling to bare-metal `sh2`/`j2`, report back any issues I find, and maybe even fix them if I can get a grasp on RTL. What obstacles exist for attempting to test `sh2` codegen using `rustc`? I can think of a few, but would love to know more:
1. I understand that distributing `libgccjit.so` is more complicated than the LLVM backend because each `libgccjit.so` can only target one arch. Has there been movement on how all the different `libgccjit.so`s should coexist?
2. I don't remember where, but I heard `libgccjit.so` is difficult to use in a cross-compile setting; how true is that?
3. I'm not sure if `libcore` is tested for any arch where `libstd` doesn't exist in Rust proper, let alone for `gcc` codegen backends. Is there a way to run the `libcore` tests in `qemu` or the `gdb` simulators (see the runner sketch after this list)? Testing `libcore` would be a good litmus test for bare-metal `sh2`/`j2`.[^1]
4. Upstream `rustc` would have to have a target registered for `sh2`/`j2`, and would somehow need to know that only the `gcc` codegen backend is supported for now (LLVM backend might come later, but it's not a high priority). Does this "choose a backend" logic already exist in upstream Rust (see the sketches after this list)? Is there additional ABI work that needs to be done to add `sh2` upstream?
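For context on the "choose a backend" question: as far as I understand it, the GCC backend is currently opted into per invocation with an unstable flag rather than being tied to a target. A minimal sketch of what I mean, where the `.so` paths and the `sh2-none-elf.json` target name are placeholders and not something that works today:

```sh
# Sketch only: point nightly rustc at a locally built rustc_codegen_gcc
# through the unstable -Zcodegen-backend flag (the path is a placeholder).
export RUSTFLAGS="-Zcodegen-backend=/path/to/librustc_codegen_gcc.so"

# The matching cross-targeting libgccjit.so also has to be findable at
# runtime; how that's supposed to be distributed is part of question 1.
export LD_LIBRARY_PATH="/path/to/cross-libgccjit:$LD_LIBRARY_PATH"

cargo +nightly build -Z build-std=core --target sh2-none-elf.json
```

What I'm really asking in 4. is whether anything smarter than this per-invocation flag exists (or is planned) for targets that only the `gcc` backend can serve.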
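On the target-registration side, I know `--target` also accepts a path to a JSON target spec, so a stop-gap before anything lands upstream might be a hand-written spec file. Something like the following, using an existing bare-metal target purely as a template; the output name is made up, and the real `sh2`/`j2` values (endianness, data layout, ABI) would still have to be worked out:

```sh
# Sketch: dump an existing bare-metal target spec as a starting point;
# armv7a-none-eabi is an arbitrary template, not an SH-related target.
rustc +nightly -Z unstable-options --print target-spec-json \
    --target armv7a-none-eabi > sh2-none-elf.json

# After editing the spec by hand, the file path itself is passed as the
# target (example.rs is a placeholder source file).
rustc +nightly -Zcodegen-backend=/path/to/librustc_codegen_gcc.so \
    --target sh2-none-elf.json --emit=obj example.rs
```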
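And for running `libcore` tests off-host, the only generic plumbing I'm aware of is cargo's per-target runner, which hands each built binary to an emulator or simulator. A sketch with made-up names; the triple, the spec file, the runner binary, and whether qemu or the GDB SH simulator can actually execute `sh2`/`j2` code at all are exactly the open parts of question 3.:

```sh
# Sketch only: cargo appends the built binary to whatever "runner" command is
# configured for the target. Both candidates below are assumptions on my part:
#   - sh-elf-run: the standalone GDB simulator frontend, if built for sh-elf
#   - qemu-system-sh4: mainline qemu's SuperH system emulation (SH-4 only)
export CARGO_TARGET_SH2_NONE_ELF_RUNNER="sh-elf-run"

cargo +nightly run -Z build-std=core --target sh2-none-elf.json
```

Whether the actual `libcore` test suite (rather than an arbitrary `no_std` binary) can be driven through this path is part of what I'd like to find out.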
I assume 2. and 3. need to be handled first before I can start actual codegen for "archs which only have a gcc port". I'd be interested in getting the ball rolling there (at the very least testing, or contributing code depending on how much time I can allocate to this).
[^1]: Right now the common Linux ABI seems to be MMU-less `fdpic`, and I have no idea whether `rustc` meaningfully supports anything like it.