Concepts I'm developing into a custom implementation #483
FishOfTheNorthStar started this conversation in Extras
-
Hi, thanks for the update. These sound like interesting projects. Maybe we can mine some of them for inclusion in the upstream. We recently enabled shader includes. I'll be merging a restructuring of the monolithic storage into separate region files tomorrow.
-
Hi all, it's been a while; I just wanted to say hello and outline some of what I've been working on here. Maybe some of it will be worth a look. It's a continuation / hyper-customization of my PR1 branch.
It has some external dependencies, the first being AsyncLiteDB for storing the relevant data. There are also two main groups of additional classes I'm calling Blobs and MaryNodes for now. Mary is a bit of a misnomer; technically they'd be M-ary nodes, something like a spatial database or a geo-cache. Blobs are what you'd expect: image data in blob format. They're compressed and decompressed when stored, and the three main images used to generate the final terrain textures take about 20 MB for a 16-region terrain.

Those images are a bit unusual. I generate three kinds of noise: the first is very broad, gradual shifts, and represents the overall world elevation at that point; the second is finer-grained, full-spectrum noise that represents smaller adjustments to the world elevation; the third is a biome map, very broad and smooth and also full spectrum, where the biome a position falls within is determined by taking the floor of the texture value times max_biomes. Each biome can then specify its own auto-texturing base/over materials, as well as a Curve2D texture that controls how the second noise texture is blended with the first. This is so things like desert steppes can have a stair-cased look, or mountains can have areas of gradual slope versus sharper peaks, etc. A rough sketch of that lookup is below.
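To make the lookup concrete, here's a minimal sketch of how I'd describe the height/biome combination above. The names (`broad_elevation`, `detail_noise`, `biome_value`, `biome_curves`) are just placeholders for this post, not identifiers from the actual branch, and the curves stand in for the per-biome Curve2D textures:

```python
import math

def sample_height_and_biome(broad_elevation, detail_noise, biome_value,
                            max_biomes, biome_curves):
    """Illustrative only: combine the three stored noise maps into a
    final height and biome index, roughly as described above."""
    # Biome index: floor of the smooth biome-map value times max_biomes,
    # clamped so a value of exactly 1.0 doesn't overflow the last biome.
    biome = min(int(biome_value * max_biomes), max_biomes - 1)

    # Each biome supplies a curve that reshapes how the fine-grained
    # detail noise is blended onto the broad elevation (stair-cased
    # steppes vs. smooth slopes). Here the "curve" is just a callable.
    blend = biome_curves[biome](detail_noise)

    return broad_elevation + blend, biome

# Example: two fake biomes, one smooth and one terraced.
biome_curves = [
    lambda x: x * 0.1,                           # gentle, smooth adjustment
    lambda x: math.floor(x * 8.0) / 8.0 * 0.3,   # stepped / terraced look
]
print(sample_height_and_biome(0.42, 0.77, 0.6, 2, biome_curves))
```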
The noise in these textures is generated through a sort of FastNoiseLite markup language I've implemented, a bit like HTML for noise-gen, which allows various factors to be adjusted in a form that can still be hard-coded as a constant. Currently it runs CPU side and it's a bit slow, taking about 15 seconds to generate 16 regions at full resolution.
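Just to give a feel for the idea, here's a made-up example of markup in that spirit being flattened into parameter dicts. This is not the real syntax, and the tag/attribute names are purely hypothetical:

```python
# Hypothetical example only -- not the actual markup syntax, just the idea
# of declaring FastNoiseLite-style parameters as nested markup and
# flattening them into parameter dicts a generator could consume.
import xml.etree.ElementTree as ET

MARKUP = """
<noise name="world_elevation" type="simplex" frequency="0.0005" octaves="3">
    <noise name="detail" type="simplex" frequency="0.01" octaves="5"/>
    <noise name="biome" type="cellular" frequency="0.0002" octaves="1"/>
</noise>
"""

def parse_noise_markup(text):
    """Flatten the markup into {name: params} dictionaries."""
    layers = {}
    def walk(node):
        params = dict(node.attrib)
        layers[params.pop("name")] = params
        for child in node:
            walk(child)
    walk(ET.fromstring(text))
    return layers

print(parse_noise_markup(MARKUP))
```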
Notably, blobs don't need to be at full resolution; some of them work better at some fraction of it. All of them are usable at LOD increments, and they render in those same increments: a pass at 1/8th resolution can update the terrain from that, then the next pass runs at 1/4 resolution, and so on (see the ladder sketch below).
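The progressive rendering is basically a resolution ladder. A rough illustration of the idea, with made-up numbers:

```python
def lod_resolution_ladder(full_resolution, coarsest_divisor=8):
    """Yield render resolutions from coarse to full (e.g. 1/8 -> 1/4 -> 1/2 -> 1/1),
    so the terrain can be updated after every pass instead of waiting
    for the full-resolution result."""
    divisor = coarsest_divisor
    while divisor >= 1:
        yield full_resolution // divisor
        divisor //= 2

for res in lod_resolution_ladder(2048):
    # In the real pipeline each pass would re-render the blob textures at
    # this resolution and push the result into the terrain before the next pass.
    print("render pass at", res, "x", res)
```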
Those three images are the only ones stored to create the terrain. They're then fed into a series of shaders that render derived textures off-screen, which ultimately become the textures Terrain3D uses.
The MaryNodes are for world stuff like trees, shrubs, smaller plants, and eventually things like buildings and other notable objects. They're used in various stages of the Blob rendering process to generate things like an elevation adjustment layer and an explicit splat-map override layer.
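If it helps picture the MaryNodes, the core idea is just an M-ary spatial subdivision. Here's a toy, quadtree-flavoured version; it's not the actual implementation, and the class/field names are only illustrative:

```python
class MaryNode:
    """Toy M-ary spatial node: each node covers an axis-aligned square
    region and splits into m*m children once it holds too many items."""
    def __init__(self, x, y, size, m=2, capacity=8):
        self.x, self.y, self.size = x, y, size
        self.m, self.capacity = m, capacity
        self.items = []       # (x, y, payload) tuples, e.g. trees/shrubs
        self.children = None

    def insert(self, ix, iy, payload):
        if self.children is None:
            self.items.append((ix, iy, payload))
            if len(self.items) > self.capacity:
                self._split()
            return
        self._child_for(ix, iy).insert(ix, iy, payload)

    def _split(self):
        step = self.size / self.m
        self.children = [
            MaryNode(self.x + cx * step, self.y + cy * step, step,
                     self.m, self.capacity)
            for cy in range(self.m) for cx in range(self.m)
        ]
        items, self.items = self.items, []
        for ix, iy, payload in items:
            self._child_for(ix, iy).insert(ix, iy, payload)

    def _child_for(self, ix, iy):
        cx = min(int((ix - self.x) / (self.size / self.m)), self.m - 1)
        cy = min(int((iy - self.y) / (self.size / self.m)), self.m - 1)
        return self.children[cy * self.m + cx]

# Usage: one node covering a 1024x1024 region, with a tree dropped into it.
root = MaryNode(0, 0, 1024)
root.insert(100, 200, "tree")
```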
I've customized my control map data structure a fair bit; it's substantially different now. Notably, blend resolution has been dropped from 8 bits to 5, which worked out fine for my purposes. I also have 3 bits dedicated to how much rain can affect the terrain at that point, so areas under trees can be a bit drier than fully exposed ground, and interior locations can have rain omitted entirely.
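Packing and unpacking that kind of layout looks roughly like this. The only widths taken from the description above are the 5-bit blend and 3-bit rain fields; the other fields, their widths, and all bit positions are assumptions for the sake of the example:

```python
# Illustrative bit packing only -- field positions are made up apart from
# the 5-bit blend and 3-bit rain fields mentioned above.
def pack_control(base_id, over_id, blend_5bit, rain_3bit):
    assert 0 <= blend_5bit < 32 and 0 <= rain_3bit < 8
    value = (base_id & 0x1F)             # bits 0-4: base texture id (assumed width)
    value |= (over_id & 0x1F) << 5       # bits 5-9: overlay texture id (assumed width)
    value |= (blend_5bit & 0x1F) << 10   # bits 10-14: blend, dropped from 8 to 5 bits
    value |= (rain_3bit & 0x07) << 15    # bits 15-17: how much rain affects this cell
    return value

def unpack_control(value):
    return {
        "base_id": value & 0x1F,
        "over_id": (value >> 5) & 0x1F,
        "blend":   (value >> 10) & 0x1F,
        "rain":    (value >> 15) & 0x07,
    }

packed = pack_control(base_id=3, over_id=7, blend_5bit=20, rain_3bit=1)
print(hex(packed), unpack_control(packed))
```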
Transferring 32-bit floats from the rendering phase I described back into the terrain engine has unfortunately proven rather clunky, requiring me to send the data as RGBA8 and reconstruct a 32-bit float from it per pixel on the CPU. That's unfortunate because the rest happens GPU side, considerably faster, but SubViewports flat-out refuse to provide 32-bit output at present. For this reason I'm considering other control map formats, maybe RgF, which the SubViewport can provide at 16 bits per channel, so conceivably I could generate it entirely GPU side. It would require portioning certain control map parameters into one channel or the other, though.
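For reference, the CPU-side reconstruction is just reinterpreting the four 8-bit channels as the raw bytes of an IEEE-754 float. A minimal sketch, assuming the shader writes the float's bytes into RGBA in little-endian order (the byte order is an assumption here):

```python
import struct

def rgba8_to_float32(r, g, b, a):
    """Reconstruct one pixel on the CPU: reinterpret four 8-bit channel
    values as the raw bytes of a 32-bit float."""
    return struct.unpack("<f", bytes((r, g, b, a)))[0]

def float32_to_rgba8(value):
    """The matching packing step, shown in Python for symmetry."""
    return tuple(struct.pack("<f", value))

rgba = float32_to_rgba8(123.456)
print(rgba, rgba8_to_float32(*rgba))  # round-trips to ~123.456
```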
One phase of the blob rendering generates a nice Sobel normal map of each region at control-map resolution, which is then used by other phases of the generation. I'll be doing some testing soon to see whether loading this generated normal map into the shader and forgoing all other GPU-side normal calculation improves performance. I expect it would, but the question is whether the additional RAM and texture lookups involved negate those gains. Testing will tell.
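The Sobel step is the standard heightfield-to-normal trick. Here's a CPU-side numpy equivalent for illustration only; the actual pass runs in a shader, and `strength` is just a placeholder scale factor:

```python
import numpy as np

def sobel_normal_map(height, strength=1.0):
    """Build a normal map from a 2D heightfield using Sobel gradients."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    pad = np.pad(height, 1, mode="edge")
    h, w = height.shape
    dx = np.zeros_like(height, dtype=np.float32)
    dy = np.zeros_like(height, dtype=np.float32)
    for j in range(3):
        for i in range(3):
            window = pad[j:j + h, i:i + w]
            dx += kx[j, i] * window
            dy += ky[j, i] * window
    normals = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals  # XYZ in [-1, 1]; remap to [0, 1] before storing as a texture

normals = sobel_normal_map(np.random.rand(64, 64).astype(np.float32))
print(normals.shape)
```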
Anyway, I just wanted to say hi and give a little update. None of the above is reflected in my PR1 branch yet, by the way; it's still how it was when I last mentioned it here. I'll try to push newer changes to it soon for people to try out, with my custom classes linked in somehow, perhaps as a separate project.