Initial implementation for the new planar reflection filtering #337
Merged
Changes from all commits (21 commits)
418c0e1 Initial implementation for the new planar reflection filtering (anisunity)
6983a4c Small improvement and "proper" support of oblique projection (anisunity)
2e10eee quality improvement to the filtering. (anisunity)
d002f11 Merge branch 'HDRP/staging' into HDRP/PlanarReflectionFilter (sebastienlagarde)
ef0dcf1 Update PlanarReflectionFiltering.compute (anisunity)
d586534 review corrections (anisunity)
af5f9fb Merge branch 'HDRP/staging' into HDRP/PlanarReflectionFilter (sebastienlagarde)
e35672e Merge branch 'HDRP/staging' into HDRP/PlanarReflectionFilter (sebastienlagarde)
ba941a4 Update planar filter for all material (was not replaced) + update scr… (sebastienlagarde)
36b3f92 Update IBLFilterGGX.cs (sebastienlagarde)
6acb8b8 Merge branch 'HDRP/staging' into HDRP/PlanarReflectionFilter (sebastienlagarde)
0fe2845 fix shader warning on vulkan (sebastienlagarde)
9bd5357 Merge branch 'HDRP/staging' into HDRP/PlanarReflectionFilter (sebastienlagarde)
b69db03 update references screenshots (sebastienlagarde)
45c7ba5 Fixes for the plane normal and number of mips to be computed (anisunity)
004ec94 Fix shift that was to the right in the blurred version (anisunity)
4ba8c69 update references screenshots (sebastienlagarde)
ac2b0df fix shader warning (sebastienlagarde)
9b23459 Some cleanup (sebastienlagarde)
3182935 change to fast Atan (sebastienlagarde)
235a70c Merge branch 'HDRP/staging' into HDRP/PlanarReflectionFilter (sebastienlagarde)
Changed reference images (binary, 4 changes each: 2 additions & 2 deletions):
...sts/Assets/ReferenceImages/Linear/LinuxEditor/Vulkan/None/2203_PlanarProbes.png
..._Tests/Assets/ReferenceImages/Linear/OSXEditor/Metal/None/2203_PlanarProbes.png
...sets/ReferenceImages/Linear/WindowsEditor/Direct3D11/None/2203_PlanarProbes.png
...ssets/ReferenceImages/Linear/WindowsEditor/Direct3D11/None/2501_LightLayers.png
...sets/ReferenceImages/Linear/WindowsEditor/Direct3D12/None/2203_PlanarProbes.png
...s/Assets/ReferenceImages/Linear/WindowsEditor/Vulkan/None/2203_PlanarProbes.png
194 changes: 194 additions & 0 deletions
...unity.render-pipelines.high-definition/Runtime/Lighting/PlanarReflectionFiltering.compute
@@ -0,0 +1,194 @@
#pragma kernel FilterPlanarReflection
#pragma kernel DownScale
#pragma kernel DepthConversion

#pragma only_renderers d3d11 playstation xboxone vulkan metal switch

// #pragma enable_d3d11_debug_symbols

// The process is done in 3 steps. We start by converting the depth from an oblique to a regular frustum depth.
// Then we build a mip chain of both the depth and the color. The depth is averaged in 2x2 and the color
// is filtered over a wider neighborhood (otherwise we get too many artifacts) when doing the actual filtering.
// The filtering estimates the pixel footprint of the blur based on the distance to the occluder, the roughness
// of the current mip, and the distance to the pixel. We then select the input from the right mip; the idea
// is to avoid a 128x128 blur for the rougher values.

// HDRP generic includes
#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/GeometricTools.hlsl"
#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/ImageBasedLighting.hlsl"
#include "Packages/com.unity.render-pipelines.high-definition/Runtime/ShaderLibrary/ShaderVariables.hlsl"
#include "Packages/com.unity.render-pipelines.high-definition/Runtime/Material/Material.hlsl"

// Tile size of this compute shader
#define PLANAR_REFLECTION_TILE_SIZE 8

// Mip chain of depth and color
TEXTURE2D(_DepthTextureMipChain);
TEXTURE2D(_ReflectionColorMipChain);

CBUFFER_START(ShaderVariablesPlanarReflectionFiltering)
// The screen size (width, height, 1.0 / width, 1.0 / height) that is produced by the capture
float4 _CaptureBaseScreenSize;
// The screen size (width, height, 1.0 / width, 1.0 / height) of the current level processed
float4 _CaptureCurrentScreenSize;
// Normal of the planar reflection plane
float3 _ReflectionPlaneNormal;
// World space position of the planar reflection (non camera relative)
float3 _ReflectionPlanePosition;
// FOV of the capture camera
float _CaptureCameraFOV;
// World space position of the capture camera (non camera relative)
float3 _CaptureCameraPositon;
// The mip index of the source data
uint _SourceMipIndex;
// Inverse view projection of the capture camera (oblique)
float4x4 _CaptureCameraIVP;
// Inverse view projection of the capture camera (non oblique)
float4x4 _CaptureCameraIVP_NO;
// View projection of the capture camera (non oblique)
float4x4 _CaptureCameraVP_NO;
// Given that the texture we write to can sometimes be bigger than the current target, we need to apply a scale factor before using a sampling intrinsic
float _RTScaleFactor;
// Far plane of the capture camera
float _CaptureCameraFarPlane;
// The number of valid mips in the mip chain
uint _MaxMipLevels;
CBUFFER_END

// Output buffer of our filtering code
RW_TEXTURE2D(float4, _FilteredPlanarReflectionBuffer);

// These angles have been experimentally computed to match the result of reflection probes. Initially this was a table dependent on both angle and roughness,
// but given that every planar reflection has a finite number of LODs, that those LODs have fixed roughness, and that the angle changes the result only slightly,
// it was changed to a per-LOD LUT.
static const float reflectionProbeEquivalentAngles[UNITY_SPECCUBE_LOD_STEPS + 1] = {0.0, 0.04, 0.12, 0.4, 0.9, 1.2, 1.2};

[numthreads(PLANAR_REFLECTION_TILE_SIZE, PLANAR_REFLECTION_TILE_SIZE, 1)]
void FilterPlanarReflection(uint3 dispatchThreadId : SV_DispatchThreadID, uint2 groupThreadId : SV_GroupThreadID, uint2 groupId : SV_GroupID)
{
    UNITY_XR_ASSIGN_VIEW_INDEX(dispatchThreadId.z);

    // Compute the pixel position to process
    uint2 currentCoord = (uint2)(groupId * PLANAR_REFLECTION_TILE_SIZE + groupThreadId);

    // Compute the coordinates that shall be used for sampling
    float2 sampleCoords = (currentCoord << (int)(_SourceMipIndex)) * _CaptureBaseScreenSize.zw * _RTScaleFactor;

    // Fetch the depth value for the current pixel.
    float centerDepthValue = SAMPLE_TEXTURE2D_LOD(_DepthTextureMipChain, s_trilinear_clamp_sampler, sampleCoords, _SourceMipIndex).x;

    // Compute the world position of the tapped pixel
    PositionInputs centralPosInput = GetPositionInput(currentCoord, _CaptureCurrentScreenSize.zw, centerDepthValue, _CaptureCameraIVP_NO, 0, 0);

    // Compute the direction to the reflection pixel
    const float3 rayDirection = normalize(centralPosInput.positionWS - _CaptureCameraPositon);

    // Compute the position on the plane we shall be integrating from
    float t = -1.0;
    if (!IntersectRayPlane(_CaptureCameraPositon, rayDirection, _ReflectionPlanePosition, _ReflectionPlaneNormal, t))
    {
        // If there is no plane intersection, there is nothing to filter (this is a position that cannot be reflected)
        _FilteredPlanarReflectionBuffer[currentCoord] = float4(0.0, 0.0, 0.0, 1.0);
        return;
    }

    // Compute the integration position (position on the plane)
    const float3 integrationPositionRWS = _CaptureCameraPositon + rayDirection * t;

    // Evaluate the cone half angle for the filtering
    const float halfAngle = reflectionProbeEquivalentAngles[_SourceMipIndex];

    // Compute the distances we need for our filtering
    const float distanceCameraToPlane = length(integrationPositionRWS - _CaptureCameraPositon);
    const float distancePlaneToObject = length(centralPosInput.positionWS - integrationPositionRWS);

    // Compute the cone footprint on the image reflection plane for this configuration
    const float brdfConeRadius = tan(halfAngle) * distancePlaneToObject;

    // We need to compute the view cone radius
    const float viewConeRadius = brdfConeRadius * distanceCameraToPlane / (distancePlaneToObject + distanceCameraToPlane);

    // Compute the view cone's half angle. This matches the FOV angle needed to see exactly half of the cone (the tangent could be precomputed in the table)
    const float viewConeHalfAngle = FastATanPos(viewConeRadius / distanceCameraToPlane);
    // Given the camera's FOV and pixel resolution, convert the viewConeHalfAngle to a number of pixels
    const float pixelDistance = viewConeHalfAngle / _CaptureCameraFOV * _CaptureCurrentScreenSize.x;

    // Convert this to a mip level shift starting from mip 0
    const float miplevel = log2(pixelDistance / 2);

    // Because of the high level of aliasing that this algorithm causes, especially on the higher mips, we apply a mip bias during the sampling to try to hide it
    const float mipBias = _SourceMipIndex > 3 ? lerp(0.0, 2.0, (_MaxMipLevels - _SourceMipIndex) / _MaxMipLevels) : 0.0;

    // Read the integration color that we should take
    const float3 integrationColor = SAMPLE_TEXTURE2D_LOD(_ReflectionColorMipChain, s_trilinear_clamp_sampler, sampleCoords, clamp(miplevel + _SourceMipIndex + mipBias, 0, _MaxMipLevels)).xyz;

    // Write the output ray data
    _FilteredPlanarReflectionBuffer[currentCoord] = float4(integrationColor, 1.0);
}
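The mip-selection math in FilterPlanarReflection can be checked on the CPU. Below is a minimal Python sketch of that same chain of formulas (cone footprint on the plane, projection back toward the camera, angle-to-pixel conversion, log2 mip shift); the function name and parameters are illustrative, not part of HDRP, and angles are assumed to be in radians.

```python
import math

def select_mip(half_angle, dist_camera_to_plane, dist_plane_to_object,
               capture_fov, screen_width):
    # Cone footprint on the reflection plane for this roughness level
    brdf_cone_radius = math.tan(half_angle) * dist_plane_to_object
    # Shrink the footprint toward the camera (similar triangles)
    view_cone_radius = brdf_cone_radius * dist_camera_to_plane / (
        dist_plane_to_object + dist_camera_to_plane)
    # Half angle of the view cone as seen from the capture camera
    view_cone_half_angle = math.atan(view_cone_radius / dist_camera_to_plane)
    # Convert the angle to a pixel count, then to a mip shift from mip 0
    pixel_distance = view_cone_half_angle / capture_fov * screen_width
    return math.log2(pixel_distance / 2)
```

As expected, a larger cone half angle (rougher mip) or a larger plane-to-object distance yields a higher mip, i.e. a wider effective blur.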

// Half resolution output texture for our mip chain build.
RW_TEXTURE2D(float4, _HalfResReflectionBuffer);
RW_TEXTURE2D(float, _HalfResDepthBuffer);

[numthreads(PLANAR_REFLECTION_TILE_SIZE, PLANAR_REFLECTION_TILE_SIZE, 1)]
void DownScale(uint3 dispatchThreadId : SV_DispatchThreadID, uint2 groupThreadId : SV_GroupThreadID, uint2 groupId : SV_GroupID)
{
    UNITY_XR_ASSIGN_VIEW_INDEX(dispatchThreadId.z);

    // Compute the pixel position to process
    int2 currentCoord = (int2)(groupId * PLANAR_REFLECTION_TILE_SIZE + groupThreadId);

    // Unfortunately, we have to go wider than the simple 2x2 neighborhood or there is too much aliasing
    float3 averageColor = 0.0;
    float sumW = 0.0;
    // In order to avoid a one-pixel shift to the right, we need to center our downsample.
    for (int y = -1; y <= 2; ++y)
    {
        for (int x = -1; x <= 2; ++x)
        {
            const int2 tapCoord = currentCoord * 2 + int2(x, y);
            // If the pixel is outside the current screen size, its weight becomes zero
            float weight = tapCoord.x > _CaptureCurrentScreenSize.x || tapCoord.x < 0
                || tapCoord.y > _CaptureCurrentScreenSize.y || tapCoord.y < 0 ? 0.0 : 1.0;
            averageColor += LOAD_TEXTURE2D_LOD(_ReflectionColorMipChain, tapCoord, _SourceMipIndex).xyz * weight;
            sumW += weight;
        }
    }
    // Normalize and output
    _HalfResReflectionBuffer[currentCoord] = float4(averageColor / sumW, 1.0);

    // We average the 4 depths and move on
    _HalfResDepthBuffer[currentCoord] = (LOAD_TEXTURE2D_LOD(_DepthTextureMipChain, currentCoord * 2, _SourceMipIndex).x
        + LOAD_TEXTURE2D_LOD(_DepthTextureMipChain, currentCoord * 2 + uint2(0, 1), _SourceMipIndex).x
        + LOAD_TEXTURE2D_LOD(_DepthTextureMipChain, currentCoord * 2 + uint2(1, 0), _SourceMipIndex).x
        + LOAD_TEXTURE2D_LOD(_DepthTextureMipChain, currentCoord * 2 + uint2(1, 1), _SourceMipIndex).x) * 0.25;
}
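The centered 4x4 footprint with zero-weighted out-of-bounds taps can be sketched in Python for a single scalar channel. This is a hypothetical standalone model of the DownScale color path (the function name and the half-open bounds check are illustrative; the shader itself uses a weight ternary against the capture screen size).

```python
def downscale(color, width, height, coord):
    """Average a centered 4x4 footprint at full resolution, dropping
    out-of-bounds taps, mirroring the DownScale color path."""
    x0, y0 = coord[0] * 2, coord[1] * 2
    total, sum_w = 0.0, 0.0
    # Taps run from -1 to +2 so the footprint is centered on the 2x2 block
    for dy in range(-1, 3):
        for dx in range(-1, 3):
            tx, ty = x0 + dx, y0 + dy
            if 0 <= tx < width and 0 <= ty < height:
                total += color[ty][tx]
                sum_w += 1.0
    return total / sum_w
```

Because the weights renormalize by `sum_w`, a border pixel averages only its valid taps instead of darkening toward the edge.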

// Initial depth buffer (oblique)
TEXTURE2D(_DepthTextureOblique);
// Converted depth values (non oblique)
RW_TEXTURE2D(float, _DepthTextureNonOblique);

[numthreads(PLANAR_REFLECTION_TILE_SIZE, PLANAR_REFLECTION_TILE_SIZE, 1)]
void DepthConversion(uint3 dispatchThreadId : SV_DispatchThreadID, uint2 groupThreadId : SV_GroupThreadID, uint2 groupId : SV_GroupID)
{
    UNITY_XR_ASSIGN_VIEW_INDEX(dispatchThreadId.z);

    // Compute the pixel position to process
    int2 currentCoord = (int2)(groupId * PLANAR_REFLECTION_TILE_SIZE + groupThreadId);

    // Fetch the depth value for the current pixel. It would be great to use a sample instead, but oblique matrices prevent us from doing it.
    float centerDepthValue = LOAD_TEXTURE2D_LOD(_DepthTextureOblique, currentCoord, 0).x;

    // Compute the world position of the tapped pixel
    PositionInputs centralPosInput = GetPositionInput(currentCoord, _CaptureCurrentScreenSize.zw, centerDepthValue, _CaptureCameraIVP, 0, 0);

    // For some reason, with oblique matrices, when the point is on the background the reconstructed position ends up behind the camera and at the wrong position
    float3 rayDirection = normalize(_CaptureCameraPositon - centralPosInput.positionWS);
    rayDirection = centerDepthValue == 0.0 ? -rayDirection : rayDirection;
    // Adjust the position
    centralPosInput.positionWS = centerDepthValue == 0.0 ? _CaptureCameraPositon + rayDirection * _CaptureCameraFarPlane : centralPosInput.positionWS;

    // Redo the projection, but this time without the oblique part, and export it
    float4 hClip = mul(_CaptureCameraVP_NO, float4(centralPosInput.positionWS, 1.0));
    _DepthTextureNonOblique[currentCoord] = saturate(hClip.z / hClip.w);
}
8 changes: 8 additions & 0 deletions
....render-pipelines.high-definition/Runtime/Lighting/PlanarReflectionFiltering.compute.meta (generated file, not rendered)