-
As an example of pixel normals being broken, doing this:

vec2 offset = vec2(0.0, _region_texel_size);
// Height at this vertex and one texel along X and Z.
float h = get_height(uv2);
float u = get_height(uv2 + offset.yx);
float v = get_height(uv2 + offset.xy);
// Build the world-space normal from the height differences.
vec3 normal = vec3(h - u, _mesh_vertex_spacing, h - v);
vec3 w_normal = normalize(normal);
vec3 w_tangent = normalize(cross(w_normal, vec3(0, 0, 1)));
vec3 w_binormal = normalize(cross(w_normal, w_tangent));
// Transform the basis into view space for the shader built-ins.
NORMAL = mat3(VIEW_MATRIX) * w_normal;
TANGENT = mat3(VIEW_MATRIX) * w_tangent;
BINORMAL = mat3(VIEW_MATRIX) * w_binormal;
-
// Mock textureGather for renderers without it: fetch the 2x2 texel
// neighbourhood with texelFetch, matching the built-in's component
// order (x = (0,1), y = (1,1), z = (1,0), w = (0,0)).
#define texelOffset(b, c) ivec3(ivec2(b.xy * _region_size + c - 0.4979), int(b.z))
#define textureGather(a, b) vec4( \
    texelFetch(a, texelOffset(b, vec2(0,1)), 0).r, \
    texelFetch(a, texelOffset(b, vec2(1,1)), 0).r, \
    texelFetch(a, texelOffset(b, vec2(1,0)), 0).r, \
    texelFetch(a, texelOffset(b, vec2(0,0)), 0).r \
)

This allows mock use of "textureGather" in Compatibility.
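For instance (a minimal sketch; the sampler name and layer index are assumptions), the mock drops in wherever the Forward+ path would call the built-in:

// Gather the four height texels around this point; the call is identical
// whether textureGather is the real built-in or the texelFetch mock.
vec4 h = textureGather(_height_maps, vec3(uv2, float(layer)));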
-
When in Compatibility:
Having too large an array of samplers results in major errors, or even crashes.
Attempting to sample the height map multiple times in fragment also causes similar results; unsure why.
Large functions seem to break, outputting anomalous values?
Compatibility will require core changes to use array[sampler] rather than samplerArray objects (sketched just below), which may have many benefits. I've yet to attempt implementing that in the extension itself; for now, textures are manually assigned for testing. That's for another topic really, upon which this one is dependent.
Everything else works fine, though I had to re-write the entire material function as it was just not working correctly (potentially related to the use of structs, or too many in/outs in the function? Unsure. It was an excuse to attempt a different approach anyway!)
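To illustrate the difference (a minimal sketch; the uniform names and array size are hypothetical):

// samplerArray object: a single bound array texture, layer picked at sample time.
uniform sampler2DArray _texture_array;

// array[sampler]: individually bound 2D textures, one uniform slot each.
uniform sampler2D _textures[32];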
Swapping to array[sampler]:
The vertex function is straightforward.
The start of fragment needs almost no changes besides directly reading v_normal:
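Presumably something like this (a sketch; v_normal is the varying implied above, the rest is assumed):

// World-space normal computed once in vertex() and interpolated.
varying vec3 v_normal;

void fragment() {
    // Read the interpolated normal directly rather than re-deriving it
    // from the height map per fragment.
    vec3 w_normal = normalize(v_normal);
    // ... rest of the fragment function ...
}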
Skip a ton of math by calculating the auto shader only once, and also interpolating the blend value for even smoother blending!
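Perhaps along these lines (a sketch; the slope thresholds and uniform names are hypothetical):

// Evaluate the slope-based auto shader once per fragment, keeping a
// smooth 0..1 factor rather than a hard per-corner cutoff.
float slope = clamp(dot(w_normal, vec3(0.0, 1.0, 0.0)), 0.0, 1.0);
float auto_blend = smoothstep(_auto_slope_min, _auto_slope_max, slope);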
Next, rather than sampling base, over, albedo & normal for all 4 corners separately, process the 4 control values and generate an index for all 4 corners containing the texture ID integers; include the autoshader override at this step.
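Something like this, perhaps (a sketch; the bit layout shown is hypothetical, not the actual control encoding):

// Decode the 4 gathered control values into per-corner texture IDs,
// applying the autoshader override in the same pass.
void decode_ids(uvec4 control, bool auto_on, int auto_base, int auto_over,
        out ivec4 base_id, out ivec4 over_id) {
    for (int i = 0; i < 4; i++) {
        bool is_auto = auto_on && bool(control[i] & 1u); // hypothetical flag bit
        base_id[i] = is_auto ? auto_base : int((control[i] >> 27u) & 0x1Fu);
        over_id[i] = is_auto ? auto_over : int((control[i] >> 22u) & 0x1Fu);
    }
}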
Then iterate over the index, and sample each texture into an array of vec4s. If the same ID is encountered again, it is skipped, saving a texture read, and any duplicate texture uv_scale & texture_color array multiplies along with it.
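A minimal sketch of that loop (names are hypothetical; ids[0..7] holds the four base then four over IDs, and uv is the fragment's base coordinate):

uniform sampler2D _albedo_tex[32];
uniform float _uv_scale[32];
uniform vec4 _texture_color[32];

// Each unique ID is sampled once; duplicates reuse the earlier result,
// skipping the read and its uv_scale / texture_color multiplies.
vec4 alb[8];
for (int i = 0; i < 8; i++) {
    int reuse = -1;
    for (int j = 0; j < i; j++) {
        if (ids[j] == ids[i]) { reuse = j; break; }
    }
    if (reuse >= 0) {
        alb[i] = alb[reuse];
        continue;
    }
    alb[i] = texture(_albedo_tex[ids[i]], uv * _uv_scale[ids[i]])
            * _texture_color[ids[i]];
}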
For dual scaling, whilst going through all 8 values, if the dual scale texture is seen we check the blend factor to potentially skip the near texture read, and also update the bool to show that dual scaling is needed for this fragment. We don't update the sampled array unless the read actually occurs. Finally during this step, read the dual scaled texture: +2 reads only during blend.
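Roughly like this, inside the same loop (hypothetical names; dual_blend is assumed to be the near/far distance factor):

bool dual_scaling = false;
bool far_sampled = false;
vec4 far_alb = vec4(0.0);
// Inside the sampling loop above:
if (ids[i] == _dual_scale_id) {
    dual_scaling = true; // flag for the blend stage
    if (dual_blend < 1.0) {
        // Near read only matters while blending; alb[i] is left
        // untouched when the read is skipped.
        alb[i] = texture(_albedo_tex[ids[i]], uv * _uv_scale[ids[i]]);
    }
    if (dual_blend > 0.0 && !far_sampled) {
        // Far (dual-scaled) read: at most +2 reads (albedo here,
        // normal analogous), and only while the two scales blend.
        far_alb = texture(_albedo_tex[ids[i]], uv * _dual_scale_mult);
        far_sampled = true;
    }
}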
Next we bilerp all base and over values, using the index for each corner to access the albedo/normal arrays that were filled earlier.
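For the base corners, that might look like this (a sketch; the weight derivation is assumed, and the corner order matches the gather mock above):

// Bilinear weights within the 2x2 control-texel neighbourhood.
vec2 w = fract(uv2 * _region_size - 0.5);
// alb[0..3] hold the base corners in gather order:
// 0=(0,1), 1=(1,1), 2=(1,0), 3=(0,0).
vec4 base_alb = mix(mix(alb[3], alb[2], w.x),
        mix(alb[0], alb[1], w.x), w.y);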
The final step is to height blend base and over for the final output.
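For example, with the common height-based blend formulation (an assumption; not necessarily the exact function used here):

// Whichever layer is "taller" near the boundary wins within a small
// transition band, which reads much crisper than a plain mix().
float height_blend_factor(float h_base, float h_over, float blend) {
    float ma = max(h_base + (1.0 - blend), h_over + blend) - 0.1;
    float b1 = max(h_base + (1.0 - blend) - ma, 0.0);
    float b2 = max(h_over + blend - ma, 0.0);
    return b2 / (b1 + b2);
}

vec4 final_alb = mix(base_alb, over_alb, height_blend_factor(base_h, over_h, blend));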
Then just the rest of the fragment function as normal:
While this method is actually the fastest (overall) material method so far, unfortunately it makes individual scale/angle per domain unworkable.
World noise is unchanged, besides separating it from get_height() like this:
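I.e. something along these lines (a sketch; function names and the lookup are assumed):

float get_height(vec2 uv) {
    // Pure height-map read; world noise is no longer folded in here.
    return texture(_height_maps, vec3(uv, 0.0)).r;
}

float get_world_height(vec2 uv) {
    // World noise applied separately, on top of the raw height.
    return get_height(uv) + world_noise(uv);
}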
Misc functions, if someone wants to put all this together: