* Update Light Anchor page (#6955)
* Updated Light Anchor to match Core RP page
* added light anchor page to ToC
* Update light-anchor.md
* Update Environment Lighting doc (#7008)
The Static Lighting Volumetric Clouds option is missing from the Environment Lighting page. In this PR, I add information about that option. I also added a note about the impact of the dominant directional light on the scene based on a tooltip Julien Ignace added to the UI in #6059. Based on official docs guidance regarding maintainability (https://unity-docs.gitbook.io/style-guide/format/images/screenshots), I have also removed the screenshot of the menu.
* Disable main camera in compositor scene (#7038)
* Fixed various issues with render graph viewer when entering playmode. #7040
* Fix for Final Image Histogram debug mode not working on some hardware #7041
* Fixed HDRP camera debug panel rendering foldout. (#7043)
* What's new light loop information (#7047)
* Grammar
* Formatting
* Correct version
* [HDRP] Update what's new and upgrade guide about decals and custom passes (#7052)
* update what's new and upgrade guide
* move upgrade guide
* Update Upgrading-from-2021.1-to-2021.2.md
* Update Upgrading-from-2022.1-to-2022.2.md
Co-authored-by: sebastienlagarde <sebastien@unity3d.com>
* removed duplicated sentence (#6999)
* removed italics, fixed notes (#7021)
* removed italics, fixed notes
* Apply formatting changes
Co-authored-by: noreply@unity3d.com <noreply@unity3d.com>
* recommended import settings (#7023)
* api formatting (#7027)
* Updated Light Anchor page in line with updated HDRP page (#6956)
* Updated Light Anchor page in line with updated HDRP page
* Update View-Lighting-Tool.md
* Update View-Lighting-Tool.md
* Update View-Lighting-Tool.md
* Update View-Lighting-Tool.md
* Update View-Lighting-Tool.md
* Update View-Lighting-Tool.md
* Apply formatting changes
Co-authored-by: noreply@unity3d.com <noreply@unity3d.com>
Co-authored-by: emilybrown1 <88374601+emilybrown1@users.noreply.github.com>
Co-authored-by: Val Grimm <72067840+ValGrimm-U3D@users.noreply.github.com>
Co-authored-by: TomasKiniulis <50582134+TomasKiniulis@users.noreply.github.com>
Co-authored-by: JulienIgnace-Unity <julien@unity3d.com>
Co-authored-by: FrancescoC-unity <43168857+FrancescoC-unity@users.noreply.github.com>
Co-authored-by: Kleber Garcia <kleber.garcia@unity3d.com>
Co-authored-by: Antoine Lelievre <antoinel@unity3d.com>
Co-authored-by: noreply@unity3d.com <noreply@unity3d.com>
com.unity.render-pipelines.core/Documentation~/View-Lighting-Tool.md (27 additions, 3 deletions)
@@ -4,14 +4,38 @@ The Light Anchor can help to place light sources around subjects, in relation to

## Using the Light Anchor Component

- To use the Light Anchor, you must set the Tag of at least one Camera to "MainCamera". By default, the Anchor's position will be the same as the position of the GameObject the Light Anchor Component is attached to.
+ To add a Light Anchor component to a GameObject in your Scene:
+
+ 1. Select a Light GameObject in the hierarchy to open its Inspector window.
+ 2. Go to **Add Component** > **Rendering** > **Light Anchor**.
+
+ By default, the Anchor's position is the same as the position of the GameObject the Light Anchor Component is attached to.
+
+ **Note**: To use the Light Anchor, you must set the Tag of at least one Camera to "MainCamera".

Use the **Orbit** and **Elevation** to control the orientation of the light, in degrees, relative to the main Camera's and Anchor's positions. If the Light has a Cookie or an IES Profile, use the **Roll** to change their orientation. Use the **Distance** to control how far from the anchor, in meters, you want to place the Light.

- Using the **Anchor Position Override**, you can provide a custom GameObject as an anchor point for the light. This is useful if you want the light to follow a specific GameObject in the Scene.
+ You can use the **Anchor Position Override** to provide a GameObject's [Transform](https://docs.unity3d.com/ScriptReference/Transform.html) as an anchor point for the Light. This is useful if you want the Light to follow a specific GameObject in the Scene.
+
+ 
+
+ **Note**: The above example uses the Main Camera as the reference Camera that adjusts the light rotation. The Common presets might create a different result in the Scene View if your view isn't aligned with the Main Camera.

You can set a **Position Offset** for this custom Anchor. This is useful if the Transform position of the custom Anchor isn't centered appropriately for the light to orbit correctly around the custom Anchor.

- 
+ 

The Light Anchor component also includes a list of **Presets** that you can use to set the Light's orientation relative to the main Camera.

+ |**Orbit**| Use the left icon to control the Orbit of the light. This tool becomes green when you move the icon. |
+ |**Elevation**| Use the middle icon to control the Elevation of the light. This tool becomes blue when you move the icon. |
+ |**Roll**| Use the right icon to control the Roll of the light. This tool becomes gray when you move the icon. This is useful if the light has an IES or a Cookie. |
+ |**Distance**| Controls the distance between the light and its anchor in world space. |
+ |**Up Direction**| Defines the space of the up direction of the anchor. When you set this value to Local, the Up Direction is relative to the Camera. |
+ |**Anchor Position Override**| Allows you to use a GameObject's [Transform](https://docs.unity3d.com/ScriptReference/Transform.html) as the anchor position instead of the LightAnchor's Transform. When the Transform of the GameObject you assigned to this property changes, the Light Anchor's Transform also changes. |
+ |**Common**| Assigns a preset to the light component based on the behavior of studio lights. |
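The setup steps above can also be scripted. The following is a minimal, illustrative sketch only: the `LightAnchor` component class and its namespace are assumptions based on the documentation above, not a confirmed API reference.

```C#
// Illustrative sketch: adds a Light Anchor from a script.
// Assumption: the component class is LightAnchor, shipped with the
// SRP Core package (namespace assumed to be UnityEngine.Rendering).
using UnityEngine;
using UnityEngine.Rendering;

public class LightAnchorSetup : MonoBehaviour
{
    void Start()
    {
        // The Light Anchor requires at least one Camera tagged "MainCamera".
        if (Camera.main == null)
            Debug.LogWarning("Tag a Camera as MainCamera for the Light Anchor to work.");

        // Attach the component to this Light GameObject. By default, the
        // anchor position matches this GameObject's position.
        gameObject.AddComponent<LightAnchor>();
    }
}
```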
com.unity.render-pipelines.core/Documentation~/whats-new-12.md (14 additions)
@@ -13,3 +13,17 @@ In practice, this means that the initialization APIs no longer require MSAA rela

### New API to disable runtime Rendering Debugger UI

It is now possible to disable the Rendering Debugger UI at runtime by using [DebugManager.enableRuntimeUI](https://docs.unity3d.com/Packages/com.unity.render-pipelines.core@latest/api/UnityEngine.Rendering.DebugManager.html#UnityEngine_Rendering_DebugManager_enableRuntimeUI).
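For example, a script can disable the runtime UI through the documented `DebugManager.enableRuntimeUI` property:

```C#
// Disables the runtime Rendering Debugger UI via the documented
// DebugManager.enableRuntimeUI property on the DebugManager singleton.
using UnityEngine;
using UnityEngine.Rendering;

public class DisableDebugUI : MonoBehaviour
{
    void Start()
    {
        // Prevents the Rendering Debugger window from being opened at runtime.
        DebugManager.instance.enableRuntimeUI = false;
    }
}
```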
+ ## Added
+
+ ### High performance sorting algorithms in CoreUnsafeUtils
+
+ New high-performance sorting algorithms are available as CoreUnsafeUtils helper methods. The new sorting algorithms include:
+
+ * RadixSort - ideal for very large lists, more than 512 elements.
+ * MergeSort (non-recursive) - ideal for mid-size lists, fewer than 512 elements.
+ * InsertionSort - ideal for very small lists, fewer than 32 elements.
+
+ The sorting algorithms only work on uint elements. They include methods that support standard C# arrays, NativeArray objects, or raw pointers.
+ RadixSort and MergeSort require a support array, which can be allocated by the user or allocated automatically via ref parameter passing. InsertionSort is in-place and doesn't require support data.
+ These algorithms are compatible with Burst kernels when using raw pointers or NativeArray. Currently, HDRP uses them to sort lights in the CPU light loop.
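A minimal sketch of how these helpers might be called, based only on the description above; the exact `CoreUnsafeUtils` method signatures shown here are assumptions, not a confirmed API reference.

```C#
// Illustrative sketch only. Assumption: CoreUnsafeUtils exposes
// InsertionSort(uint[], int) and RadixSort(uint[], int, ref uint[]),
// matching the behavior described in the what's-new entry.
using UnityEngine;
using UnityEngine.Rendering;

public class SortExample : MonoBehaviour
{
    void Start()
    {
        uint[] keys = { 42u, 7u, 19u, 3u };

        // Very small list: InsertionSort is in-place and needs no support data.
        CoreUnsafeUtils.InsertionSort(keys, keys.Length);

        // Very large lists: RadixSort needs a support array. Passing it by ref
        // lets the helper allocate it automatically when it is null.
        uint[] support = null;
        CoreUnsafeUtils.RadixSort(keys, keys.Length, ref support);
    }
}
```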
com.unity.render-pipelines.high-definition/CHANGELOG.md (3 additions)
@@ -45,6 +45,9 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.

- Fixed performance penalty when hardware DRS was used between multiple views like editor or other gameviews (case 1354382)
- Fixed PBR Dof using the wrong resolution for COC min/max filter, and also using the wrong parameters when running post TAAU stabilization. (case 1388961)
- Fixed RTGI potentially reading from outside the half res pixels due to missing last pixel during the upscale pass (case 1400310).
+ - Fixed various issues with render graph viewer when entering playmode.
+ - Fixed issue with Final Image Histogram displaying a flat histogram on certain GPUs and APIs.
+ - Fixed HDRP camera debug panel rendering foldout.

### Changed

- Disabled the "Reflect Sky" feature in the case of transparent screen space reflections for the water system.
com.unity.render-pipelines.high-definition/Documentation~/Accumulation.md (9 additions, 9 deletions)
@@ -1,20 +1,20 @@
1
1
## Multiframe rendering and accumulation
2
2
3
-
Some rendering techniques, such as [path tracing](Ray-Tracing-Path-Tracing.md) and accumulation motion blur, combine information from multiple intermediate sub-frames to create a final "converged" frame. Each intermediate sub-frame can correspond to a slightly different point in time, effectively computing physically-based accumulation motion blur, which properly takes into account object rotations, deformations, material or lighting changes.
3
+
Some rendering techniques, such as [path tracing](Ray-Tracing-Path-Tracing.md) and accumulation motion blur, combine information from multiple intermediate sub-frames to create a final "converged" frame. Each intermediate sub-frame can correspond to a slightly different point in time, effectively computing physicallybased accumulation motion blur, which properly takes into account object rotations, deformations, material or lighting changes.
4
4
5
5
The High Definition Render Pipeline (HDRP) provides a scripting API that allows you to control the creation of sub-frames and the convergence of multi-frame rendering effects. In particular, the API allows you to control the number of intermediate sub-frames (samples) and the points in time that correspond to each one of them. Furthermore, you can use a shutter profile to control the weights of each sub-frame. A shutter profile describes how fast the physical camera opens and closes its shutter.
6
6
7
-
This API is particularly useful when recording path-traced movies. Normally, when editing a Scene, the convergence of path tracing restarts every time the Scene changes, to provide artists an interactive editing workflow that allows them to quickly visualize their changes. However such behavior is not desirable during recording.
7
+
This API is particularly useful when recording path-traced movies. Normally, when editing a Scene, the convergence of path tracing restarts every time the Scene changes, to provide artists an interactive editing workflow that allows them to quickly visualize their changes. However such behavior isn't desirable during recording.
8
8
9
9
The following image shows a rotating GameObject with path tracing and accumulation motion blur, recorded using the multi-frame recording API.
10
10
11
11

12
12
13
13
## API overview
14
14
The recording API is available in HDRP and has three calls:
15
-
-**BeginRecording**: Call this when you want to start a multi-frame render.
16
-
-**PrepareNewSubFrame**: Call this before rendering a new subframe.
17
-
-**EndRecording**: Call this when you want to stop the multi-frame render.
15
+
-`BeginRecording`: Call this when you want to start a multi-frame render.
16
+
-`PrepareNewSubFrame`: Call this before rendering a new subframe.
17
+
-`EndRecording`: Call this when you want to stop the multi-frame render.
18
18
19
19
The only call that takes any parameters is **BeginRecording**. Here is an explanation of the parameters:
20
20
@@ -27,7 +27,7 @@ The only call that takes any parameters is **BeginRecording**. Here is an explan
27
27
Before calling the accumulation API, the application should also set the desired Time.captureDeltaTime. The example script below demonstrates how to use these API calls.
28
28
29
29
## Scripting API example
30
-
The following example demonstrates how to use the multi-frame rendering API in your scripts to properly record converged animation sequences with path tracing and/or accumulation motion blur. To use it, attach the script to a Camera in your Scene and, in the component's context menu, click the “Start Recording” and “Stop Recording” actions.
30
+
The following example demonstrates how to use the multi-frame rendering API in your scripts to properly record converged animation sequences with path tracing or accumulation motion blur. To use it, attach the script to a Camera in your Scene and, in the component's context menu, click the “Start Recording” and “Stop Recording” actions.
31
31
32
32
```C#
33
33
usingUnityEngine;
@@ -123,7 +123,7 @@ In all cases, the speed of the sphere is the same. The only change is the shutte
123
123
124
124
You can easily define the first three profiles without using an animation curve by setting the open, close parameters to (0,1), (1,1), and (0.25, 0.75) respectively. The last profile requires the use of an animation curve.
125
125
126
-
In this example, you can see that the slow open profile creates a motion trail appearance for the motion blur, which might be more desired for artists. On the other hand, the smooth open and close profile creates smoother animations than the slow open or uniform profiles.
126
+
In this example, you can see that the slow open profile creates a motion trail appearance for the motion blur, which might be more desired for artists. Although, the smooth open and close profile creates smoother animations than the slow open or uniform profiles.
127
127
128
128
129
129
## High Quality Anti-aliasing with Accumulation
@@ -243,5 +243,5 @@ public class SuperSampling : MonoBehaviour
243
243
244
244
## Limitations
245
245
The multi-frame rendering API internally changes the `Time.timeScale` of the Scene. This means that:
246
-
- You cannot have different accumulation motion blur parameters per camera.
247
-
- Projects that already modify this parameter per frame are not be compatible with this feature.
246
+
- You can't have different accumulation motion blur parameters per camera.
247
+
- Projects that already modify this parameter per frame aren't be compatible with this feature.
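A minimal sketch of the recording loop built from the three calls described above. The `BeginRecording` parameter list here is inferred from the parameter descriptions in the documentation and may differ from the shipped signature; the capture step is left as a placeholder.

```C#
// Minimal sketch of the multi-frame recording API, not the full example
// script from the documentation. Assumption: the three calls live on
// HDRenderPipeline and BeginRecording takes the sample count plus shutter
// parameters, as the parameter descriptions above suggest.
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class SimpleRecorder : MonoBehaviour
{
    void RecordConvergedFrame(int samples)
    {
        var hdrp = RenderPipelineManager.currentPipeline as HDRenderPipeline;
        if (hdrp == null)
            return;

        // Set the desired sub-frame delta time before starting the recording.
        Time.captureDeltaTime = 1.0f / 30.0f;

        // samples: number of sub-frames; 1.0f: shutter interval (assumed).
        hdrp.BeginRecording(samples, 1.0f);
        for (int i = 0; i < samples; ++i)
        {
            hdrp.PrepareNewSubFrame();
            // Render or capture the sub-frame here (e.g. with the Unity Recorder).
        }
        hdrp.EndRecording();
    }
}
```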
- To maximize performance and minimize bandwidth usage, HDRP by default renders image frames in the **R11G11B10** format. However, this format does not include an alpha channel, which might be required for applications that want to composite HDRP's output over other images.
+ To maximize performance and minimize bandwidth usage, HDRP by default renders image frames in the **R11G11B10** format. However, this format doesn't include an alpha channel, which might be required for applications that want to composite HDRP's output over other images.

- To configure HDRP to output an alpha channel, open your [HDRP Asset](HDRP-Asset.md) (menu: **Edit > Project Settings > Graphics > Scriptable Render Pipeline Asset**), go to the **Rendering** section, and set the **Color Buffer Format** to **R16G16B16A16**. However, note that enabling this option incurs a performance overhead. In HDRP, opaque materials always output 1 in the alpha channel, unless you enable [Alpha Clipping](Alpha-Clipping.md). If you want to export the alpha of an opaque material, one solution is to enable **Alpha Clipping** and set the Threshold to 0.
- Furthermore, when post-processing is enabled, the *Buffer Format* for post-processing operations should also be set to *R16G16B16A16* in order to apply post-processing operations in the alpha channel. This can be selected from the post-processing section of the HDRP asset. If the post-processing format is set to **R11G11B10**, then HDRP will output a copy of the alpha channel without any post-processing on it.
+ To configure HDRP to output an alpha channel:
+
+ 1. Open your [HDRP Asset](HDRP-Asset.md) in the Inspector window.
+ 2. Go to **Rendering** > **Color Buffer Format**.
+ 3. Select **R16G16B16A16**.
+
+ **Note**: Enabling this option incurs a performance overhead.
+
+ In HDRP, opaque materials always output 1 in the alpha channel, unless you enable [Alpha Clipping](Alpha-Clipping.md). If you want to export the alpha of an opaque material, one solution is to enable **Alpha Clipping** and set the Threshold to 0.
+
+ Furthermore, when you enable post-processing, also set the Buffer Format for post-processing operations to **R16G16B16A16** to apply post-processing operations in the alpha channel. You can select this from the **Post-Processing** section of the HDRP asset. If you set the post-processing format to **R11G11B10**, HDRP outputs a copy of the alpha channel without any post-processing on it.

The following table summarizes the behavior of HDRP regarding the alpha channel of the output frames.

@@ -14,27 +22,27 @@ Rendering Buffer Format | Post-processing Buffer Format | Alpha Output

**R16G16B16A16** | **R11G11B10** | Alpha channel without post-processing (AlphaCopy)
**R16G16B16A16** | **R16G16B16A16** | Alpha channel with post-processing

- Note that alpha output is also supported in [Path Tracing](Ray-Tracing-Path-Tracing.md).
+ **Note**: Alpha output is also supported in [Path Tracing](Ray-Tracing-Path-Tracing.md).

## DoF and Alpha Output

- Another case which might require post-processing of the alpha channel is for scenes that use Depth Of Field. In this case, if the alpha is not processed, compositing will result in a sharp cut-off of an object that should appear blurred. This is illustrated in the images below:
+ Another case that might require post-processing of the alpha channel is for scenes that use Depth Of Field. In this case, if the alpha isn't processed, compositing results in a sharp cut-off of an object that should appear blurred. This is illustrated in the images below:



- An out-of-focus sphere composited over a solid blue background using a *R16G16B16A16* buffer format for both rendering and post-processing. In this case, DoF is applied in the alpha channel, resulting in a proper composition (the output alpha used in the composition is shown in the image inset).
+ An out-of-focus sphere composited over a solid blue background using a R16G16B16A16 buffer format for both rendering and post-processing. In this case, DoF is applied in the alpha channel, resulting in a proper composition (the output alpha used in the composition is shown in the image inset).



- An out-of-focus sphere composited over a solid blue background using *AlphaCopy*. In this case, DoF is NOT applied in the alpha channel, resulting in a sharp outline around the composited sphere (the output alpha used in the composition is shown in the image inset).
+ An out-of-focus sphere composited over a solid blue background using AlphaCopy. In this case, DoF isn't applied in the alpha channel, resulting in a sharp outline around the composited sphere (the output alpha used in the composition is shown in the image inset).

## Temporal Anti-Aliasing and Alpha Output

- When Temporal Anti-Aliasing (TAA) is enabled, it is highly recommended to enable post-processing for the alpha channel (*R16G16B16A16* format for both rendering and post-processing). If the alpha channel is not post-processed, then the alpha mask will be jittered, as shown in the images below:
+ When you enable Temporal Anti-Aliasing (TAA), it's highly recommended to enable post-processing for the alpha channel (R16G16B16A16 format for both rendering and post-processing). If the alpha channel isn't post-processed, then the alpha mask is jittered, as shown in the images below:



- A sphere rendered with TAA using *AlphaCopy*, composited over a solid blue background using the alpha channel. The alpha channel is not temporally stabilized by TAA, resulting in jittering on the final image.
+ A sphere rendered with TAA using AlphaCopy, composited over a solid blue background using the alpha channel. The alpha channel isn't temporally stabilized by TAA, resulting in jittering on the final image.



- A sphere rendered with TAA (*R16G16B16A16* for both rendering and post-processing), composited over a solid blue background using the alpha channel. TAA is also applied in the alpha channel, resulting in a stable composition.
+ A sphere rendered with TAA (R16G16B16A16 for both rendering and post-processing), composited over a solid blue background using the alpha channel. TAA is also applied in the alpha channel, resulting in a stable composition.