Description
This may be a bit much to ask (in which case no worries, I totally understand), but I figure I'll ask anyway: I think it would be quite useful to have an example of rendering text, specifically one that first draws the text into a 2D canvas and then uploads that canvas as a texture to be drawn with WebGPU.
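For reference, here is a minimal sketch of that first approach, assuming an already-initialized `GPUDevice` named `device` and WebGPU typings (e.g. from `@webgpu/types`); the helper name `createTextTexture` is just for illustration, not an existing API:

```ts
// Sketch: draw a string with Canvas 2D, then upload the canvas as a GPUTexture.
// Assumes `device: GPUDevice` has already been obtained via navigator.gpu.
function createTextTexture(device: GPUDevice, text: string): GPUTexture {
  // Draw the text into an offscreen 2D canvas.
  const canvas = document.createElement('canvas');
  canvas.width = 256;
  canvas.height = 64;
  const ctx = canvas.getContext('2d')!;
  ctx.font = '32px sans-serif';
  ctx.fillStyle = 'white';
  ctx.textBaseline = 'top';
  ctx.fillText(text, 0, 0);

  // Create a texture and copy the canvas contents into it.
  // copyExternalImageToTexture requires COPY_DST and RENDER_ATTACHMENT usage.
  const texture = device.createTexture({
    size: [canvas.width, canvas.height],
    format: 'rgba8unorm',
    usage:
      GPUTextureUsage.TEXTURE_BINDING |
      GPUTextureUsage.COPY_DST |
      GPUTextureUsage.RENDER_ATTACHMENT,
  });
  device.queue.copyExternalImageToTexture(
    { source: canvas },
    { texture },
    [canvas.width, canvas.height],
  );
  return texture;
}
```

The resulting texture can then be sampled onto a full-screen or billboarded quad in a normal render pass.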
A more complicated but perhaps more useful example would render a set of characters to a canvas, create a texture from it, and then use that texture to construct arbitrary strings on the fly by sampling a different part of the texture for each letter (see the sketch below). (Note that a WebGL version of this is described in https://webglfundamentals.org/webgl/lessons/webgl-text-glyphs.html, which might be helpful.)
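A rough sketch of that glyph-atlas idea, under the same assumptions as above; the names (`GlyphInfo`, `buildGlyphAtlas`, `layoutString`) and the fixed-size cell layout are hypothetical choices for illustration:

```ts
// Each glyph remembers its UV rectangle in the atlas and its advance width.
interface GlyphInfo {
  u0: number; v0: number; u1: number; v1: number;
  width: number; // advance width in pixels
}

// Draw a fixed character set into one canvas, one glyph per grid cell,
// recording where each glyph landed, then upload the canvas as a texture.
function buildGlyphAtlas(device: GPUDevice, chars: string) {
  const cellW = 32, cellH = 40, cols = 16;
  const rows = Math.ceil(chars.length / cols);
  const canvas = document.createElement('canvas');
  canvas.width = cols * cellW;
  canvas.height = rows * cellH;
  const ctx = canvas.getContext('2d')!;
  ctx.font = '32px monospace';
  ctx.fillStyle = 'white';
  ctx.textBaseline = 'top';

  const glyphs = new Map<string, GlyphInfo>();
  for (let i = 0; i < chars.length; i++) {
    const x = (i % cols) * cellW;
    const y = Math.floor(i / cols) * cellH;
    ctx.fillText(chars[i], x, y);
    glyphs.set(chars[i], {
      u0: x / canvas.width, v0: y / canvas.height,
      u1: (x + cellW) / canvas.width, v1: (y + cellH) / canvas.height,
      width: ctx.measureText(chars[i]).width,
    });
  }

  const texture = device.createTexture({
    size: [canvas.width, canvas.height],
    format: 'rgba8unorm',
    usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST |
           GPUTextureUsage.RENDER_ATTACHMENT,
  });
  device.queue.copyExternalImageToTexture({ source: canvas }, { texture },
                                          [canvas.width, canvas.height]);
  return { texture, glyphs, cellW, cellH };
}

// Build interleaved [x, y, u, v] vertices (two triangles per character)
// for an arbitrary string, to be written into a vertex buffer.
function layoutString(text: string, glyphs: Map<string, GlyphInfo>,
                      cellW: number, cellH: number): Float32Array {
  const verts: number[] = [];
  let penX = 0;
  for (const ch of text) {
    const g = glyphs.get(ch);
    if (!g) continue;
    const x0 = penX, x1 = penX + cellW, y0 = 0, y1 = cellH;
    verts.push(
      x0, y0, g.u0, g.v0,  x1, y0, g.u1, g.v0,  x0, y1, g.u0, g.v1,
      x0, y1, g.u0, g.v1,  x1, y0, g.u1, g.v0,  x1, y1, g.u1, g.v1,
    );
    penX += g.width;
  }
  return new Float32Array(verts);
}
```

The WebGL lesson linked above follows essentially the same structure, so a WebGPU example could mirror it fairly closely.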
I'm happy to help with the implementation if that would be useful and as I'm able, but unfortunately I think my knowledge of both WebGPU and graphics programming in general is too lacking to do the full PR myself.