As someone who is very interested in SDL_gpu, both as a user and a contributor, I am concerned that writing an entirely custom shader language spec, compiler, bytecode format, and translation layer is more work than can be accomplished in a reasonable amount of time. Let alone developing the ecosystem around it — IDE plugins, RenderDoc support, tutorials, etc.
Additionally, the whole WebGPU/WGSL debacle pretty clearly demonstrated that most developers do not want Yet Another Shader Language, no matter how good it may be. This seems to be the main point of contention for many developers with regard to the SDL_gpu project.
In light of these problems, I have a somewhat drastic proposal that still keeps the “ship one shader binary, run everywhere” / “no big ugly c++ dependencies” spirit of SDL_gpu intact, while heavily reducing the maintenance workload and allowing us to take advantage of existing infrastructure.
I propose that instead of using a custom shader stack, we do the following:
- Use HLSL as the source language
- Compile to DXBC shader model 5/5.1
- At runtime, parse the DXBC and translate it to the target shader language
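To make the shape of that runtime step concrete, here is a rough sketch of how a backend dispatch could look. Every name in it (GpuBackend, GpuCreateShaderFromDXBC, the per-backend functions) is invented for illustration; nothing here is an existing SDL API.

```c
#include <stddef.h>
#include <stdint.h>

typedef enum {
    GPU_BACKEND_D3D11,   /* pass DXBC straight through */
    GPU_BACKEND_VULKAN,  /* translate DXBC -> SPIR-V */
    GPU_BACKEND_METAL    /* translate DXBC -> MSL */
} GpuBackend;

/* Placeholder backend entry points, invented purely for this sketch. */
int CreateD3D11Shader(const uint8_t *dxbc, size_t len);
int CreateVulkanShaderFromDXBC(const uint8_t *dxbc, size_t len);
int CreateMetalShaderFromDXBC(const uint8_t *dxbc, size_t len);

/* Offline: HLSL is compiled to DXBC once, by FXC or vkd3d's d3dcompiler.
 * Runtime: the application hands the shipped DXBC blob to the library. */
int GpuCreateShaderFromDXBC(GpuBackend backend,
                            const uint8_t *dxbc, size_t dxbc_len)
{
    switch (backend) {
    case GPU_BACKEND_D3D11:
        /* No translation needed; D3D11/D3D12 consume DXBC natively. */
        return CreateD3D11Shader(dxbc, dxbc_len);
    case GPU_BACKEND_VULKAN:
        /* Walk the DXBC token stream and emit SPIR-V, MojoShader-style. */
        return CreateVulkanShaderFromDXBC(dxbc, dxbc_len);
    case GPU_BACKEND_METAL:
        return CreateMetalShaderFromDXBC(dxbc, dxbc_len);
    }
    return -1;
}
```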
I’m sure you have a bunch of questions, so I’ve prepped some answers here:
Would we have to use FXC to compile our shaders? That’s not great for non-Windows users, and we can’t bundle that with SDL for dynamic source code compilation.
Not necessarily! As part of the VKD3D project, the Wine folks have written a FOSS d3dcompiler that we can use instead. The project is still relatively young, but it’s at least good enough for FNA’s purposes.
EDIT: Since writing this post, I learned that Clang is adding HLSL support with the intention of adding a DXBC backend in the future!
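For reference, the offline compile itself is just the familiar D3DCompile call. This is a minimal sketch assuming the standard d3dcompiler signature; my understanding is that the vkd3d-based replacement aims to be drop-in compatible with this entry point, though I haven't verified every flag against it.

```c
#define COBJMACROS
#include <d3dcompiler.h>
#include <stdio.h>

/* Compile an HLSL pixel shader entry point "main" to SM5 DXBC.
 * Returns the DXBC blob we'd ship with the game, or NULL on failure. */
static ID3DBlob *compile_ps(const char *hlsl_source, size_t source_len)
{
    ID3DBlob *code = NULL;
    ID3DBlob *errors = NULL;
    HRESULT hr = D3DCompile(hlsl_source, source_len, "shader.hlsl",
                            NULL, NULL,        /* no macros, no include handler */
                            "main", "ps_5_0",  /* entry point, SM5 pixel shader */
                            0, 0, &code, &errors);
    if (FAILED(hr)) {
        if (errors) {
            fprintf(stderr, "%s\n",
                    (const char *)ID3D10Blob_GetBufferPointer(errors));
            ID3D10Blob_Release(errors);
        }
        return NULL;
    }
    return code;
}
```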
Why DXBC and not DXIL/SPIRV?
Unlike those intermediate formats, the DXBC spec is not ungodly huge, and it’s no longer changing. If we can translate the finite set of opcodes, we’re good to go.
In fact, there’s already a library that does that exact sort of thing — MojoShader! We can use it as a foundation (or at least, an inspiration) and build on its ideas rather than building something from first principles.
DXBC is officially deprecated. Is that going to be a problem?
Newer HLSL shader models (6.0+) are locked behind DXIL, but that’s totally fine for our purposes. SM5 contains everything we would realistically need (including compute shaders!). Unless we decide we need mesh shaders, or raytracing, or wave intrinsics, I don’t see anything we’d be missing out on.
The tooling for DXBC is definitely not going away either. Even though DX12 has been around for almost 10 years at this point, pretty much every PC game still ships with DX11. Especially since we have VKD3D to protect us from the threat of FXC bitrot, I think we will be in good shape for the foreseeable future.
Does DXBC provide any other advantages beyond reducing the development cost for SDL_gpu, allowing developers to use a familiar shader language, and leveraging the existing HLSL shader ecosystem?
Why yes, I’m glad you asked! There’s actually another huge advantage: DXBC is a real shader format that D3D11 and D3D12 can actually ingest! Meaning, we have a definitive ground truth to test from as we develop new graphics backends! If we’re ever in doubt about whether some shader behavior on Metal/Vulkan/whatever is a bug, we can check against the D3D11/12 implementation and verify. (If we want to be 100% sure we’re not witnessing driver bugs, we can check the software WARP implementation!)
Additionally, this means we could just not translate shaders on Windows! SDL can consume the shader binary and pass it through directly to D3D. Niiice! Of course, there’s nothing stopping us from translating back to HLSL/DXIL if we need to.
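To illustrate just how small the passthrough case is on D3D11 (assuming an already-created ID3D11Device; this is illustrative, not proposed API):

```c
#define COBJMACROS
#include <d3d11.h>

/* On the D3D11 backend the shipped DXBC blob is handed to the driver as-is. */
static ID3D11PixelShader *create_ps_passthrough(ID3D11Device *device,
                                                const void *dxbc,
                                                SIZE_T dxbc_len)
{
    ID3D11PixelShader *ps = NULL;
    HRESULT hr = ID3D11Device_CreatePixelShader(device, dxbc, dxbc_len,
                                                NULL, &ps);
    return SUCCEEDED(hr) ? ps : NULL;
}
```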
The Shader Model 5.0 ISA contains 203 instructions. That’s still awfully complex, isn’t it?
It’s nothing to sneeze at for sure, but a lot of the instructions are variants of each other, or only used by hull/domain/geometry shaders, which I highly doubt we are going to support. I think it’s totally reasonable to start with the SM4 instructions (of which there are 102) since those are more broadly applicable, and then add SM5 instructions as needed.
We also have this parser from VKD3D that we can use as a reference if needed.
What happens if a developer tries to use a shader with opcodes that we don’t currently translate?
To ensure that developers write and ship shaders that are compatible with the subset of the ISA that we support, we can easily write up a runtime validation checker for DEBUG builds, which would scan shader bytecode input for any unrecognized opcodes and spit out error information.
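Here is a rough sketch of what that debug-only scan could look like. It assumes the token-stream layout described in Microsoft's d3d11TokenizedProgramFormat.h (and mirrored in MojoShader and vkd3d): each instruction begins with an opcode token whose low 11 bits hold the opcode type and whose bits [30:24] hold the instruction length in DWORDs. The caller would first locate the SHDR/SHEX chunk inside the DXBC container; custom-data blocks (e.g. immediate constant buffers) encode their length differently and are not handled here, so treat this as an outline rather than a finished validator.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Debug-only scan of a DXBC token stream for opcodes we don't translate yet.
 * 'supported' is an allowlist indexed by opcode type (11 bits -> 2048 slots).
 * NOTE: custom-data blocks store their length in the token that follows the
 * opcode token; a real validator must special-case them. */
static bool validate_opcodes(const uint32_t *tokens, uint32_t token_count,
                             const bool supported[2048])
{
    /* tokens[0] is the version token, tokens[1] the total token count. */
    uint32_t pos = 2;
    bool ok = true;

    while (pos < token_count) {
        uint32_t opcode_token = tokens[pos];
        uint32_t opcode = opcode_token & 0x7FF;         /* bits [10:0]  */
        uint32_t length = (opcode_token >> 24) & 0x7F;  /* bits [30:24] */

        if (length == 0) {
            fprintf(stderr, "Malformed instruction at token %u\n", pos);
            return false;
        }
        if (!supported[opcode]) {
            fprintf(stderr, "Unsupported opcode %u at token %u\n", opcode, pos);
            ok = false;
        }
        pos += length;  /* length includes the opcode token itself */
    }
    return ok;
}
```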
Are there legal issues afoot with using a proprietary bytecode?
I sure hope not, because we’ve been shipping DXBC translators in games for many years!
So are you saying DXBC is perfect?
Nope! There are some clear drawbacks with this approach:
- We don’t control the spec, so we are at the mercy of however DXBC happens to work. If the bytecode does something inefficient/awkward/painful, there’s nothing we can do about it.
- FXC is a famously temperamental beast, and VKD3D’s HLSL frontend is still pretty immature. It’s more than likely we’d need to contribute patches to the compiler (which isn’t necessarily a negative, given that it would help improve the general gaming ecosystem :)).
- It’s a non-trivial spec to work with, with a couple hundred opcodes. Miles better than SPIRV and friends, but still…
- It is a dead end for future shader features. Imagine D3D13 comes out with a brand new kind of shader that everyone wants to use. We will never be able to support it with vanilla DXBC. (Theoretically we could fork VKD3D and add in our own custom bytecode, but that’s probably not a good idea.)
However, despite these issues, I still think DXBC is the best existing option we have, and it's worth considering before we dive full-force into writing our own entire stack.