
Initial implementation for the new planar reflection filtering #337


Merged: 21 commits from HDRP/PlanarReflectionFilter into HDRP/staging on Jun 9, 2020

Commits
418c0e1
Initial implementation for the new planar reflection filtering
anisunity Apr 30, 2020
6983a4c
Small improvement and "proper" support of oblique projection
anisunity May 4, 2020
2e10eee
quality improvement to the filtering.
anisunity May 15, 2020
d002f11
Merge branch 'HDRP/staging' into HDRP/PlanarReflectionFilter
sebastienlagarde May 26, 2020
ef0dcf1
Update PlanarReflectionFiltering.compute
anisunity May 27, 2020
d586534
review corrections
anisunity Jun 5, 2020
af5f9fb
Merge branch 'HDRP/staging' into HDRP/PlanarReflectionFilter
sebastienlagarde Jun 6, 2020
e35672e
Merge branch 'HDRP/staging' into HDRP/PlanarReflectionFilter
sebastienlagarde Jun 6, 2020
ba941a4
Update planar filter for all material (was not replaced) + update scr…
sebastienlagarde Jun 6, 2020
36b3f92
Update IBLFilterGGX.cs
sebastienlagarde Jun 6, 2020
6acb8b8
Merge branch 'HDRP/staging' into HDRP/PlanarReflectionFilter
sebastienlagarde Jun 6, 2020
0fe2845
fix shader warning on vulkan
sebastienlagarde Jun 6, 2020
9bd5357
Merge branch 'HDRP/staging' into HDRP/PlanarReflectionFilter
sebastienlagarde Jun 7, 2020
b69db03
update references screenshots
sebastienlagarde Jun 7, 2020
45c7ba5
Fixes for the plane normal and number of mips to be computed
anisunity Jun 8, 2020
004ec94
Fix shift that was to the right in the blurred version
anisunity Jun 8, 2020
4ba8c69
update references screenshots
sebastienlagarde Jun 8, 2020
ac2b0df
fix shader warning
sebastienlagarde Jun 8, 2020
9b23459
Some cleanup
sebastienlagarde Jun 8, 2020
3182935
change to fast Atan
sebastienlagarde Jun 9, 2020
235a70c
Merge branch 'HDRP/staging' into HDRP/PlanarReflectionFilter
sebastienlagarde Jun 9, 2020
17 changes: 17 additions & 0 deletions com.unity.render-pipelines.core/ShaderLibrary/GeometricTools.hlsl
@@ -132,6 +132,23 @@ float3 IntersectRayPlane(float3 rayOrigin, float3 rayDirection, float3 planeOrig
    return rayOrigin + rayDirection * dist;
}

// Same as above, but also returns the intersection distance and true/false for ray hit/miss
bool IntersectRayPlane(float3 rayOrigin, float3 rayDirection, float3 planePosition, float3 planeNormal, out float t)
{
    bool res = false;
    t = -1.0;

    float denom = dot(planeNormal, rayDirection);
    if (abs(denom) > 1e-5)
    {
        float3 d = planePosition - rayOrigin;
        t = dot(d, planeNormal) / denom;
        res = (t >= 0);
    }

    return res;
}

// Can support cones with an elliptic base: pre-scale 'coneAxisX' and 'coneAxisY' by (h/r_x) and (h/r_y).
// Returns parametric distances 'tEntr' and 'tExit' along the ray,
// subject to constraints 'tMin' and 'tMax'.
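A minimal usage sketch for the new IntersectRayPlane overload added above (illustrative only, not part of the diff; rayOriginWS and rayDirWS are assumed inputs): the boolean form lets callers branch on a miss instead of consuming a possibly meaningless point.

float t;
if (IntersectRayPlane(rayOriginWS, rayDirWS, float3(0.0, 0.0, 0.0), float3(0.0, 1.0, 0.0), t))
{
    // Hit: reconstruct the point on the plane from the returned distance.
    float3 hitPositionWS = rayOriginWS + rayDirWS * t;
}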
@@ -32,12 +32,6 @@ real PerceptualRoughnessToMipmapLevel(real perceptualRoughness)
    return PerceptualRoughnessToMipmapLevel(perceptualRoughness, UNITY_SPECCUBE_LOD_STEPS);
}

- // Mapping for convolved Texture2D, this is an empirical remapping to match GGX version of cubemap convolution
- real PlanarPerceptualRoughnessToMipmapLevel(real perceptualRoughness, uint mipMapcount)
- {
-     return PositivePow(perceptualRoughness, 0.8) * uint(max(mipMapcount - 1, 0));
- }

// The *accurate* version of the non-linear remapping. It works by
// approximating the cone of the specular lobe, and then computing the MIP map level
// which (approximately) covers the footprint of the lobe with a single texel.
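For reference, the removed PlanarPerceptualRoughnessToMipmapLevel curve was purely empirical; a worked sketch of what it produced for a 7-mip planar chain (illustrative numbers only):

// mip = PositivePow(r, 0.8) * (mipMapcount - 1), with mipMapcount = 7:
//   r = 0.25 -> 0.25^0.8 * 6 ≈ 1.98
//   r = 0.5  -> 0.5^0.8  * 6 ≈ 3.45
//   r = 1.0  -> 1.0^0.8  * 6 =  6.0
// The compute-based filtering added below makes each planar mip match the
// equivalent reflection probe mip instead, so no planar-specific remap is needed.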
1 change: 1 addition & 0 deletions com.unity.render-pipelines.high-definition/CHANGELOG.md
@@ -788,6 +788,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
- DXR: Only read the geometric attributes that are required using the share pass info and shader graph defines.
- DXR: Dispatch binned rays in 1D instead of 2D.
- Lit and LayeredLit tessellation cross-LOD fade no longer uses dithering between LODs but fades the tessellation height instead, allowing a smoother transition
- Changed the way planar reflections are filtered in order to be a bit more "physically based".

## [7.1.1] - 2019-09-05

@@ -1788,7 +1788,44 @@ internal bool GetEnvLightData(CommandBuffer cmd, HDCamera hdCamera, in Processed
&& !hdCamera.frameSettings.IsEnabled(FrameSettingsField.PlanarProbe))
break;

-    var scaleOffset = m_TextureCaches.reflectionPlanarProbeCache.FetchSlice(cmd, probe.texture, out int fetchIndex);
// Grab the render data that was used to render the probe
var renderData = planarProbe.renderData;
// Grab the world to camera matrix of the capture camera
var worldToCameraRHSMatrix = renderData.worldToCameraRHS;
// Grab the projection matrix that was used to render
var projectionMatrix = renderData.projectionMatrix;
// Build an alternative matrix for projection that is not oblique
var projectionMatrixNonOblique = Matrix4x4.Perspective(renderData.fieldOfView, (float)probe.texture.width / probe.texture.height, probe.settings.cameraSettings.frustum.nearClipPlaneRaw, probe.settings.cameraSettings.frustum.farClipPlane);

// Convert the projection matrices to their GPU version
var gpuProj = GL.GetGPUProjectionMatrix(projectionMatrix, true);
var gpuProjNonOblique = GL.GetGPUProjectionMatrix(projectionMatrixNonOblique, true);

// Build the oblique and non oblique view projection matrices
var vp = gpuProj * worldToCameraRHSMatrix;
var vpNonOblique = gpuProjNonOblique * worldToCameraRHSMatrix;

// We need to collect the set of parameters required for the filtering
IBLFilterBSDF.PlanarTextureFilteringParameters planarTextureFilteringParameters = new IBLFilterBSDF.PlanarTextureFilteringParameters();
planarTextureFilteringParameters.probeNormal = Vector3.Normalize(hdCamera.camera.transform.position - renderData.capturePosition);
planarTextureFilteringParameters.probePosition = probe.gameObject.transform.position;
planarTextureFilteringParameters.captureCameraDepthBuffer = planarProbe.realtimeDepthTexture;
planarTextureFilteringParameters.captureCameraScreenSize = new Vector4(probe.texture.width, probe.texture.height, 1.0f / probe.texture.width, 1.0f / probe.texture.height);
planarTextureFilteringParameters.captureCameraIVP = vp.inverse;
planarTextureFilteringParameters.captureCameraIVP_NonOblique = vpNonOblique.inverse;
planarTextureFilteringParameters.captureCameraVP_NonOblique = vpNonOblique;
planarTextureFilteringParameters.captureCameraPosition = renderData.capturePosition;
planarTextureFilteringParameters.captureFOV = renderData.fieldOfView;
planarTextureFilteringParameters.captureNearPlane = probe.settings.cameraSettings.frustum.nearClipPlaneRaw;
planarTextureFilteringParameters.captureFarPlane = probe.settings.cameraSettings.frustum.farClipPlane;

// Fetch the slice and do the filtering
var scaleOffset = m_TextureCaches.reflectionPlanarProbeCache.FetchSlice(cmd, probe.texture, ref planarTextureFilteringParameters, out int fetchIndex);

// We don't need to provide the capture position
// It is already encoded in the 'worldToCameraRHSMatrix'
capturePosition = Vector3.zero;

// Indices start at 1, because -0 == 0, we can know from the bit sign which cache to use
envIndex = scaleOffset == Vector4.zero ? int.MinValue : -(fetchIndex + 1);

@@ -1800,19 +1837,7 @@ internal bool GetEnvLightData(CommandBuffer cmd, HDCamera hdCamera, in Processed
}

atlasScaleOffset = scaleOffset;

-    var renderData = planarProbe.renderData;
-    var worldToCameraRHSMatrix = renderData.worldToCameraRHS;
-    var projectionMatrix = renderData.projectionMatrix;

-    // We don't need to provide the capture position
-    // It is already encoded in the 'worldToCameraRHSMatrix'
-    capturePosition = Vector3.zero;

-    // get the device dependent projection matrix
-    var gpuProj = GL.GetGPUProjectionMatrix(projectionMatrix, true);
-    var gpuView = worldToCameraRHSMatrix;
-    var vp = gpuProj * gpuView;

m_TextureCaches.env2DAtlasScaleOffset[fetchIndex] = scaleOffset;
m_TextureCaches.env2DCaptureVP[fetchIndex] = vp;

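Shader-side, the sign encoding above (envIndex = -(fetchIndex + 1)) is undone by inspecting the sign; a minimal decode sketch in HLSL (hypothetical names, not from this PR):

// envIndex < 0 flags the planar (2D) cache; the +1 offset keeps slice 0
// distinguishable, since -0 == 0 for plain ints.
bool isPlanarProbe = envIndex < 0;
int sliceIndex = isPlanarProbe ? (-envIndex - 1) : envIndex;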
194 changes: 194 additions & 0 deletions PlanarReflectionFiltering.compute (new file)
@@ -0,0 +1,194 @@
#pragma kernel FilterPlanarReflection
#pragma kernel DownScale
#pragma kernel DepthConversion

#pragma only_renderers d3d11 playstation xboxone vulkan metal switch
// #pragma enable_d3d11_debug_symbols

// The process is done in 3 steps. We start by converting the depth from oblique to regular frustum depth.
// Then we build a mip chain of both the depth and the color. The depth is averaged in 2x2 and the color
// is filtered over a wider neighborhood (otherwise we get too many artifacts). The actual filtering then
// estimates the pixel footprint of the blur from the distance to the occluder, the roughness of the
// current mip and the distance to the pixel, and selects the input from the matching mip; the idea
// is to avoid a 128x128 blur for the rougher values.
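// Illustrative pass ordering as seen from the dispatching side (a sketch, not
// part of this file; the kernel names are the three declared above):
//   1. DepthConversion        -- oblique capture depth -> regular frustum depth
//   2. DownScale (per mip)    -- builds the color and depth mip chains
//   3. FilterPlanarReflection -- (per mip) produces the final pre-filtered chain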

// HDRP generic includes
#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/GeometricTools.hlsl"
#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/ImageBasedLighting.hlsl"
#include "Packages/com.unity.render-pipelines.high-definition/Runtime/ShaderLibrary/ShaderVariables.hlsl"
#include "Packages/com.unity.render-pipelines.high-definition/Runtime/Material/Material.hlsl"

// Tile size of this compute
#define PLANAR_REFLECTION_TILE_SIZE 8

// Mip chain of depth and color
TEXTURE2D(_DepthTextureMipChain);
TEXTURE2D(_ReflectionColorMipChain);

CBUFFER_START(ShaderVariablesPlanarReflectionFiltering)
// The screen size (width, height, 1.0 / width, 1.0 / height) that is produced by the capture
float4 _CaptureBaseScreenSize;
// The screen size (width, height, 1.0 / width, 1.0 / height) of the current level processed
float4 _CaptureCurrentScreenSize;
// Normal of the planar reflection plane
float3 _ReflectionPlaneNormal;
// World space position of the planar reflection (non camera relative)
float3 _ReflectionPlanePosition;
// FOV of the capture camera
float _CaptureCameraFOV;
// World space position of the capture camera (non camera relative)
float3 _CaptureCameraPositon;
// The mip index of the source data
uint _SourceMipIndex;
// Inverse view projection of the capture camera (oblique)
float4x4 _CaptureCameraIVP;
// Inverse view projection of the capture camera (non oblique)
float4x4 _CaptureCameraIVP_NO;
// View projection of the capture camera (non oblique)
float4x4 _CaptureCameraVP_NO;
// Given that the texture we write to can sometimes be bigger than the current target, we need to apply a scale factor before using the sampling intrinsic
float _RTScaleFactor;
// Far plane of the capture camera
float _CaptureCameraFarPlane;
// The number of valid mips in the mip chain
uint _MaxMipLevels;
CBUFFER_END

// Output buffer of our filtering code
RW_TEXTURE2D(float4, _FilteredPlanarReflectionBuffer);

// These angles have been experimentally computed to match the result of reflection probes. Initially this was a table dependent on angle and roughness,
// but given that every planar has a finite number of LODs, that those LODs have fixed roughness, and that the angle changes the result only slightly,
// it was changed to a per-LOD LUT.
static const float reflectionProbeEquivalentAngles[UNITY_SPECCUBE_LOD_STEPS + 1] = {0.0, 0.04, 0.12, 0.4, 0.9, 1.2, 1.2};

[numthreads(PLANAR_REFLECTION_TILE_SIZE, PLANAR_REFLECTION_TILE_SIZE, 1)]
void FilterPlanarReflection(uint3 dispatchThreadId : SV_DispatchThreadID, uint2 groupThreadId : SV_GroupThreadID, uint2 groupId : SV_GroupID)
{
    UNITY_XR_ASSIGN_VIEW_INDEX(dispatchThreadId.z);

    // Compute the pixel position to process
    uint2 currentCoord = (uint2)(groupId * PLANAR_REFLECTION_TILE_SIZE + groupThreadId);

    // Compute the coordinates that shall be used for sampling
    float2 sampleCoords = (currentCoord << (int)(_SourceMipIndex)) * _CaptureBaseScreenSize.zw * _RTScaleFactor;

    // Fetch the depth value for the current pixel.
    float centerDepthValue = SAMPLE_TEXTURE2D_LOD(_DepthTextureMipChain, s_trilinear_clamp_sampler, sampleCoords, _SourceMipIndex).x;

    // Compute the world position of the tapped pixel
    PositionInputs centralPosInput = GetPositionInput(currentCoord, _CaptureCurrentScreenSize.zw, centerDepthValue, _CaptureCameraIVP_NO, 0, 0);

    // Compute the direction to the reflection pixel
    const float3 rayDirection = normalize(centralPosInput.positionWS - _CaptureCameraPositon);

    // Compute the position on the plane we shall be integrating from
    float t = -1.0;
    if (!IntersectRayPlane(_CaptureCameraPositon, rayDirection, _ReflectionPlanePosition, _ReflectionPlaneNormal, t))
    {
        // If there is no plane intersection, there is nothing to filter (this is a position that cannot be reflected)
        _FilteredPlanarReflectionBuffer[currentCoord] = float4(0.0, 0.0, 0.0, 1.0);
        return;
    }

    // Compute the integration position (position on the plane)
    const float3 integrationPositionRWS = _CaptureCameraPositon + rayDirection * t;

    // Evaluate the cone half angle for the filtering
    const float halfAngle = reflectionProbeEquivalentAngles[_SourceMipIndex];

    // Compute the distances we need for our filtering
    const float distanceCameraToPlane = length(integrationPositionRWS - _CaptureCameraPositon);
    const float distancePlaneToObject = length(centralPosInput.positionWS - integrationPositionRWS);

    // Compute the cone footprint on the image reflection plane for this configuration
    const float brdfConeRadius = tan(halfAngle) * distancePlaneToObject;

    // We need to compute the view cone radius
    const float viewConeRadius = brdfConeRadius * distanceCameraToPlane / (distancePlaneToObject + distanceCameraToPlane);

    // Compute the view cone's half angle. This matches the FOV angle needed to see exactly half of the cone (the tangent could be precomputed in the table)
    const float viewConeHalfAngle = FastATanPos(viewConeRadius / distanceCameraToPlane);
    // Given the camera's FOV and pixel resolution, convert the viewConeHalfAngle to a number of pixels
    const float pixelDistance = viewConeHalfAngle / _CaptureCameraFOV * _CaptureCurrentScreenSize.x;

    // Convert this to a mip level shift starting from mip 0
    const float miplevel = log2(pixelDistance / 2);

    // Because of the high level of aliasing that this algorithm causes, especially on the higher mips, we apply a mip bias during the sampling to try to hide it
    const float mipBias = _SourceMipIndex > 3 ? lerp(0.0, 2.0, (_MaxMipLevels - _SourceMipIndex) / (float)_MaxMipLevels) : 0.0;

    // Read the integration color that we should take
    const float3 integrationColor = SAMPLE_TEXTURE2D_LOD(_ReflectionColorMipChain, s_trilinear_clamp_sampler, sampleCoords, clamp(miplevel + _SourceMipIndex + mipBias, 0, _MaxMipLevels)).xyz;

    // Write the output ray data
    _FilteredPlanarReflectionBuffer[currentCoord] = float4(integrationColor, 1.0);
}
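
// Illustrative only (not part of this file): the footprint math above gathered
// into one hypothetical helper, using the cbuffer values declared earlier.
float EstimateFootprintMipShift(float halfAngle, float distanceCameraToPlane, float distancePlaneToObject)
{
    // Footprint of the BRDF cone where it crosses the reflection plane
    float brdfConeRadius = tan(halfAngle) * distancePlaneToObject;
    // Radius of that footprint as seen from the capture camera (similar triangles)
    float viewConeRadius = brdfConeRadius * distanceCameraToPlane / (distancePlaneToObject + distanceCameraToPlane);
    // Angle subtended by the footprint, then converted to a pixel count via the capture FOV
    float viewConeHalfAngle = FastATanPos(viewConeRadius / distanceCameraToPlane);
    float pixelDistance = viewConeHalfAngle / _CaptureCameraFOV * _CaptureCurrentScreenSize.x;
    // Each mip halves the resolution, hence the log2 of the blur radius in pixels
    return log2(pixelDistance / 2.0);
}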

// Half resolution output texture for our mip chain build.
RW_TEXTURE2D(float4, _HalfResReflectionBuffer);
RW_TEXTURE2D(float, _HalfResDepthBuffer);

[numthreads(PLANAR_REFLECTION_TILE_SIZE, PLANAR_REFLECTION_TILE_SIZE, 1)]
void DownScale(uint3 dispatchThreadId : SV_DispatchThreadID, uint2 groupThreadId : SV_GroupThreadID, uint2 groupId : SV_GroupID)
{
    UNITY_XR_ASSIGN_VIEW_INDEX(dispatchThreadId.z);

    // Compute the pixel position to process
    int2 currentCoord = (int2)(groupId * PLANAR_REFLECTION_TILE_SIZE + groupThreadId);

    // Unfortunately, we have to go wider than the simple 2x2 neighborhood or there is too much aliasing
    float3 averageColor = 0.0;
    float sumW = 0.0;
    // In order to avoid a one-pixel shift to the right, we need to center our down sample: the -1..2 taps
    // cover input texels 2p-1..2p+2, whose center lines up with the 2x2 quad that output texel p represents.
    for (int y = -1; y <= 2; ++y)
    {
        for (int x = -1; x <= 2; ++x)
        {
            const int2 tapCoord = currentCoord * 2 + int2(x, y);
            // If the pixel is outside the current screen size, its weight becomes zero
            float weight = tapCoord.x >= _CaptureCurrentScreenSize.x || tapCoord.x < 0
                || tapCoord.y >= _CaptureCurrentScreenSize.y || tapCoord.y < 0 ? 0.0 : 1.0;
            averageColor += LOAD_TEXTURE2D_LOD(_ReflectionColorMipChain, tapCoord, _SourceMipIndex).xyz * weight;
            sumW += weight;
        }
    }
    // Normalize and output
    _HalfResReflectionBuffer[currentCoord] = float4(averageColor / sumW, 1.0);

    // We average the 4 depths and move on
    _HalfResDepthBuffer[currentCoord] = (LOAD_TEXTURE2D_LOD(_DepthTextureMipChain, currentCoord * 2, _SourceMipIndex).x
        + LOAD_TEXTURE2D_LOD(_DepthTextureMipChain, currentCoord * 2 + uint2(0, 1), _SourceMipIndex).x
        + LOAD_TEXTURE2D_LOD(_DepthTextureMipChain, currentCoord * 2 + uint2(1, 0), _SourceMipIndex).x
        + LOAD_TEXTURE2D_LOD(_DepthTextureMipChain, currentCoord * 2 + uint2(1, 1), _SourceMipIndex).x) * 0.25;
}

// Initial depth buffer (oblique)
TEXTURE2D(_DepthTextureOblique);
// Converted depth values (non oblique)
RW_TEXTURE2D(float, _DepthTextureNonOblique);

[numthreads(PLANAR_REFLECTION_TILE_SIZE, PLANAR_REFLECTION_TILE_SIZE, 1)]
void DepthConversion(uint3 dispatchThreadId : SV_DispatchThreadID, uint2 groupThreadId : SV_GroupThreadID, uint2 groupId : SV_GroupID)
{
    UNITY_XR_ASSIGN_VIEW_INDEX(dispatchThreadId.z);

    // Compute the pixel position to process
    int2 currentCoord = (int2)(groupId * PLANAR_REFLECTION_TILE_SIZE + groupThreadId);

    // Fetch the depth value for the current pixel. It would be great to use a sample instead, but oblique matrices prevent us from doing it.
    float centerDepthValue = LOAD_TEXTURE2D_LOD(_DepthTextureOblique, currentCoord, 0).x;

    // Compute the world position of the tapped pixel
    PositionInputs centralPosInput = GetPositionInput(currentCoord, _CaptureCurrentScreenSize.zw, centerDepthValue, _CaptureCameraIVP, 0, 0);

    // For some reason, with oblique matrices, when the point is on the background the reconstructed position ends up behind the camera and at the wrong position
    float3 rayDirection = normalize(_CaptureCameraPositon - centralPosInput.positionWS);
    rayDirection = centerDepthValue == 0.0 ? -rayDirection : rayDirection;
    // Adjust the position
    centralPosInput.positionWS = centerDepthValue == 0.0 ? _CaptureCameraPositon + rayDirection * _CaptureCameraFarPlane : centralPosInput.positionWS;

    // Redo the projection, but this time without the oblique part, and export it
    float4 hClip = mul(_CaptureCameraVP_NO, float4(centralPosInput.positionWS, 1.0));
    _DepthTextureNonOblique[currentCoord] = saturate(hClip.z / hClip.w);
}
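
With the chain now pre-filtered to match the reflection probe convolution (see the per-LOD angle LUT above and the removal of PlanarPerceptualRoughnessToMipmapLevel), the sampling side can reuse the cubemap roughness-to-mip remap. A minimal sketch, with hypothetical texture and variable names:

// Hypothetical consumer of the filtered chain (names are illustrative, not from this PR):
float mipLevel = PerceptualRoughnessToMipmapLevel(perceptualRoughness);
float3 planarColor = SAMPLE_TEXTURE2D_ARRAY_LOD(_PlanarReflectionAtlas, s_trilinear_clamp_sampler,
                                                reflectionUV, sliceIndex, mipLevel).xyz;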
