# Notes on testing and simulating rendering limits #18973

greeble-dev started this conversation in General
These are some notes on how Bevy tests and simulates rendering limits in partnership with `wgpu`, and why the current behaviour can be problematic. I don't have any good solutions, so I'm writing up what I know in case someone else can move it forward.

### Background
All GPUs have limits, and these limits vary across GPU hardware and also OS, API and driver versions.
Exceeding these limits will cause bugs ranging from visual glitches to panics. Users expect Bevy to either stay within the limits automatically, or gracefully report what limit has been exceeded and why.
There are many combinations of hardware and software, so it's hard to predict when an app or a new engine feature will exceed rendering limits. And testing real combinations is expensive in all kinds of ways. So, users and contributors also want to simulate rendering limits that their GPU doesn't actually have.
### Problems

`wgpu` exposes rendering limits in two ways:

- `Adapter`: the limits of the real GPU/driver/etc.
- `Device`: the limits that `wgpu` will actually enforce. Exceeding device limits will trigger panics or warnings, even if the real GPU's limits are higher. These limits are set by Bevy when it creates the device.

Adapters and devices share the same structs - `Limits` and `Features`. These align with the WebGPU spec.

So, Bevy checks limits by looking at the `Device`. And it optionally simulates lower limits by requesting a `Device` with whatever it wants (see env var `WGPU_SETTINGS_PRIO` and `bevy_render::WgpuSettings::limits`).
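For example, here's a minimal sketch of simulating lower limits through `WgpuSettings`. This assumes a recent Bevy release; the exact plugin wiring and re-export paths have shifted between versions, so treat the field names as approximate.

```rust
use bevy::prelude::*;
use bevy::render::{
    settings::{RenderCreation, WgpuLimits, WgpuSettings},
    RenderPlugin,
};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(RenderPlugin {
            render_creation: RenderCreation::Automatic(WgpuSettings {
                // Request a device with deliberately low limits, even if the
                // real adapter supports much more. `WgpuLimits` is Bevy's
                // re-export of `wgpu::Limits`.
                limits: WgpuLimits {
                    max_texture_dimension_2d: 2048,
                    ..WgpuLimits::downlevel_defaults()
                },
                ..Default::default()
            }),
            ..Default::default()
        }))
        .run();
}
```

Setting `WGPU_SETTINGS_PRIO` instead selects a preset profile of limits at startup, which is handy for quick tests without code changes.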
But there's a catch! `Adapter` exposes another struct called `DownlevelCapabilities`. This has limits that are not part of the WebGPU spec, and `wgpu` doesn't give Bevy a way to simulate different limits.

This mismatch has caused bugs in the past. In one example, compute shaders were effectively disabled by setting `Limits::max_compute_workgroup_storage_size = 0`, but `DownlevelFlags::COMPUTE_SHADERS` was still true. It's likely that more bugs are lurking.

So, what to do?
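Before getting to the options, here's a minimal sketch of how that disagreement could be observed at runtime. This is hypothetical diagnostic code assuming direct access to the `wgpu` `Adapter` and `Device`; it's not something Bevy currently does.

```rust
use wgpu::{Adapter, Device, DownlevelFlags};

// Hypothetical diagnostic: compare what the adapter's downlevel flags claim
// about compute support against what the device's (possibly simulated)
// limits actually allow.
fn check_compute_mismatch(adapter: &Adapter, device: &Device) {
    let downlevel_says_compute = adapter
        .get_downlevel_capabilities()
        .flags
        .contains(DownlevelFlags::COMPUTE_SHADERS);
    let limits_say_compute = device.limits().max_compute_workgroup_storage_size > 0;
    if downlevel_says_compute != limits_say_compute {
        // Code that gates features on one source will disagree with code
        // that gates on the other.
        eprintln!(
            "mismatch: downlevel flags say compute = {downlevel_says_compute}, \
             limits say compute = {limits_say_compute}"
        );
    }
}
```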
### Option 1: Make it a `wgpu` problem?

What if Bevy could tell `wgpu` to use a particular `DownlevelCapabilities`, just like `Limits` and `Features`? I don't know enough about `wgpu` to say if this is possible. But it seems like it could be a big job given that there are currently 22 separate flags.
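Purely to illustrate the shape of the idea, device creation might accept a downlevel override alongside the existing fields. The extra field below is imagined and does not exist in `wgpu`; the sketch assumes a recent `wgpu` where `DeviceDescriptor` has `required_features`/`required_limits`.

```rust
// Hypothetical sketch: `required_downlevel_capabilities` does NOT exist in
// wgpu today. The idea is that device creation would clamp downlevel flags
// the same way `required_limits` clamps `Limits`.
fn simulated_device_descriptor() -> wgpu::DeviceDescriptor<'static> {
    wgpu::DeviceDescriptor {
        label: Some("device with simulated downlevel capabilities"),
        required_features: wgpu::Features::empty(),
        required_limits: wgpu::Limits::downlevel_webgl2_defaults(),
        // Imagined addition for this option:
        // required_downlevel_capabilities: simulated_caps,
        ..Default::default()
    }
}
```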
### Option 2: Don't use `DownlevelCapabilities`?

At least some checks against `DownlevelCapabilities` can be done by looking at the other structs. Maybe there are equivalents for just the features Bevy is using?

Here are the flags Bevy currently uses and possible alternatives (see the sketch after this list):

- `DownlevelFlags::COMPUTE_SHADERS` -> `Limits::max_compute_workgroup_storage_size == 0`.
- `DownlevelFlags::FRAGMENT_WRITABLE_STORAGE` -> maybe `Limits::max_storage_buffers_per_shader_stage == 0`?
- `DownlevelFlags::VERTEX_AND_INSTANCE_INDEX_RESPECTS_RESPECTIVE_FIRST_VALUE_IN_INDIRECT_DRAW` -> ???
- `DownlevelFlags::BASE_VERTEX` -> ???
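As a sketch of how the first two mappings could be wrapped up (a hypothetical helper, not existing Bevy code; the mappings are the guesses from the list above):

```rust
use wgpu::Limits;

/// Hypothetical helper: derive the downlevel-ish capabilities Bevy cares
/// about from device `Limits`, so that simulated limits are respected.
struct DerivedCapabilities {
    compute_shaders: bool,
    fragment_writable_storage: bool,
    // No known `Limits` equivalents; these would still have to come from
    // the adapter's `DownlevelCapabilities`:
    // - VERTEX_AND_INSTANCE_INDEX_RESPECTS_RESPECTIVE_FIRST_VALUE_IN_INDIRECT_DRAW
    // - BASE_VERTEX
}

fn derive_capabilities(limits: &Limits) -> DerivedCapabilities {
    DerivedCapabilities {
        // Mirrors `DownlevelFlags::COMPUTE_SHADERS`.
        compute_shaders: limits.max_compute_workgroup_storage_size > 0,
        // Guess from the list above; may not hold on every backend.
        fragment_writable_storage: limits.max_storage_buffers_per_shader_stage > 0,
    }
}
```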