Rendering Pipeline Improvements #82

@vanruesc

Introduction

The purpose of the postprocessing library is to provide a package of well-maintained and up-to-date filter effects for three.js. While there are already many improvements in place, the current code base of this library still roughly mirrors the postprocessing examples from three.js and operates according to the following principles:

  • The EffectComposer maintains a list of passes.
  • The user adds various kinds of passes that modify the rendered scene colors consecutively.
  • Any pass may render to screen.

This processing flow is quite simple and easy to use, but it's not very efficient.
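
For reference, this is roughly what a setup looks like today. The concrete effect passes below are just placeholders, and an existing three.js renderer, scene and camera are assumed:

```js
import { EffectComposer, RenderPass, BloomPass, FilmPass } from "postprocessing";

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));

// Every additional pass performs at least one fullscreen render operation.
composer.addPass(new BloomPass());

const filmPass = new FilmPass();
filmPass.renderToScreen = true;
composer.addPass(filmPass);

// Each pass reads the previous result and writes a new one.
composer.render(deltaTime);
```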

Pipeline Problem

The postprocessing library currently operates in a wasteful manner when multiple passes are being used. The biggest issue that prevents optimal performance is that almost every pass performs at least one expensive fullscreen render operation. Using a variety of interesting filter effects comes at a price that doesn't justify the results.

For the next major release, v5.0.0, I'd like to replace the current naive approach with a more sophisticated one by introducing the concept of effects to improve performance and tie up some loose ends.

Merging Passes

The goal of most passes is to modify the input pixel data in some way. Instead of having each pass write its result to a framebuffer only for the next pass to read, modify and write those pixels all over again, all of the work could be done in one go using a single fullscreen render operation. In an attempt to improve performance, some passes already combine multiple effects. When a pass is designed that way, it only needs to perform a single render operation, but it also becomes bloated and rigid. In terms of maintainability, cramming all the existing shaders into one is not a viable option.

Still, there should be a mechanism that merges passes to minimize the amount of render operations globally.
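
As a purely illustrative sketch of the idea, two hypothetical effect snippets could be concatenated into a single fragment shader so that both run in one fullscreen operation instead of two (the corresponding vertex shader is omitted, and none of this is actual library code):

```js
const noiseChunk = `
	vec4 applyNoise(const in vec4 color, const in vec2 uv) {
		float n = fract(sin(dot(uv, vec2(12.9898, 78.233))) * 43758.5453);
		return vec4(color.rgb + (n - 0.5) * 0.1, color.a);
	}
`;

const vignetteChunk = `
	vec4 applyVignette(const in vec4 color, const in vec2 uv) {
		float d = distance(uv, vec2(0.5));
		return vec4(color.rgb * (1.0 - smoothstep(0.4, 0.8, d)), color.a);
	}
`;

// Both snippets end up in one program; the input buffer is sampled only once.
const mergedFragmentShader = `
	uniform sampler2D inputBuffer;
	varying vec2 vUv;

	${noiseChunk}
	${vignetteChunk}

	void main() {
		vec4 color = texture2D(inputBuffer, vUv);
		color = applyNoise(color, vUv);
		color = applyVignette(color, vUv);
		gl_FragColor = color;
	}
`;
```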

Of course, there's more than one way to implement this feature, but passes should be merged whenever possible. This means that the user shouldn't be able to accidentally degrade performance. With this in mind, the merge process could either be automatic or explicit. In my opinion, it should be explicit to allow for more control. The convenience of a fully automatic merge mechanism is questionable because it could easily become unpredictable and difficult to handle due to its implicit nature. It goes without saying that the merge process should be as simple as possible and crystal clear.

EffectPass

At a closer look, passes can be divided into four groups. The first group consists of passes that render normal scenes, like the RenderPass and MaskPass. The second group doesn't render anything but performs supporting operations; the ClearMaskPass, for example, belongs to that group. Passes that render special textures make up the third group, with GPGPU passes being a good example. The fourth and most prominent group contains the fullscreen effect passes. Only this last group of passes can be merged.

To devise a clean and efficient merge strategy, be it automatic or manual, these effect passes must be treated as a special case. Such passes actually contain effects and act only as carriers that eventually render them. Merging would then focus only on the effects within the passes. Hence, it makes sense to separate effects from passes. To further strengthen the concept of effects, there should be only one pass that can handle them.

In other words, there should be a monolithic EffectPass that is similar to the RenderPass. While the RenderPass can render any Scene, the EffectPass would be able to render any Effect and it would take care of merging multiple effects. This not only improves performance substantially but also reduces boilerplate code and guarantees an optimal processing order. For example, cinematic noise should not be applied before antialiasing. This approach is explicit in that the user adds effects to a specialized pass and expects it to organize them efficiently. It's also automatic in that the merging process itself remains hidden.
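
Usage could look something like this; none of the class names, constructor signatures or option names below are final:

```js
// The EffectPass internally merges both effects into a single shader program.
const effectPass = new EffectPass(camera, new NoiseEffect(), new VignetteEffect());
effectPass.renderToScreen = true;

composer.addPass(new RenderPass(scene, camera));
composer.addPass(effectPass);
```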

Note that once a merge mechanism is in place, it will no longer be necessary to maintain bloated shaders. Complex passes can then safely be split into more fine-grained effects without losing the performance benefits of compound shaders.

Effects

The new Effect class will be very similar to the Pass class. An Effect must specify a fragment shader and a vertex shader according to a simple standard that will closely follow the one that Shadertoy uses. It may also declare custom defines and uniforms. Just like passes, effects may perform initialization tasks, react to render size changes and execute supporting render operations if needed, but they don't have access to the output buffer and are not supposed to render to screen by themselves. Instead of a render method, they would have something like an onBeforeRender method.
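
A hypothetical effect might then look like the sketch below. The base class doesn't exist yet, the constructor signature, the mainImage convention and the hook names are placeholders, and the vertex shader is omitted for brevity:

```js
import { Uniform } from "three";
import { Effect } from "postprocessing"; // proposed base class, not yet implemented

const fragmentShader = `
	uniform float intensity;

	// Shadertoy-like convention: read the input color, write an output color.
	void mainImage(const in vec4 inputColor, const in vec2 uv, out vec4 outputColor) {
		float d = distance(uv, vec2(0.5));
		outputColor = vec4(inputColor.rgb * (1.0 - d * intensity), inputColor.a);
	}
`;

class VignetteEffect extends Effect {

	constructor({ intensity = 1.0 } = {}) {
		super("VignetteEffect", fragmentShader, {
			uniforms: new Map([["intensity", new Uniform(intensity)]])
		});
	}

	// Supporting hooks instead of a render method.
	setSize(width, height) { /* react to render size changes if needed */ }

}
```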

Consequences

Since the shaders from the chosen effects will be baked into a single shader, enabling or disabling one effect at runtime would require a slow recompilation of the shader program. As an alternative, the user can prepare multiple EffectPass instances with the desired effect combinations and enable or disable these passes as needed. This is also a reason why a purely automatic merge system would cause trouble.
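
For example, a user could prepare both combinations ahead of time and simply toggle the passes. The effect names are placeholders, and the enabled/renderToScreen flags reflect the current Pass API and may change:

```js
const simplePass = new EffectPass(camera, new VignetteEffect());
const detailedPass = new EffectPass(camera, new VignetteEffect(), new NoiseEffect());

simplePass.renderToScreen = true;
detailedPass.enabled = false;

composer.addPass(simplePass);
composer.addPass(detailedPass);

// Switching only toggles passes; it never triggers a shader recompilation.
function useDetailedEffects(flag) {
	simplePass.enabled = !flag;
	simplePass.renderToScreen = !flag;
	detailedPass.enabled = flag;
	detailedPass.renderToScreen = flag;
}
```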

Besides the performance and coordination advantages, this new approach would allow every effect to choose its own blend mode from a list of built-in functions. An option to disable blending altogether would also be available to handle advanced compositing use cases in a very natural fashion.
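
Hypothetically, this could be exposed as a constructor option; the option and the list of blend functions below are placeholders:

```js
// Each effect picks one of the built-in blend functions.
const noise = new NoiseEffect({ blendFunction: BlendFunction.OVERLAY });

// Disabling blending entirely leaves compositing to the effect itself.
const overlay = new TextureEffect({ blendFunction: BlendFunction.SKIP });
```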

Another interesting feature would be inter-effect communication. Since effects will ultimately reside in the same shader program, they could easily declare output variables for other effects. For example, a chromatic aberration effect could optionally be influenced by a preceding bokeh effect.

One restriction that rarely becomes an issue is the limit on the number of uniforms that can be used in a single shader program. With the monolithic EffectPass approach, this limit may be reached more quickly than usual. If that happens, the EffectPass will inform you about it and you'll either have to reduce the number of effects or use multiple EffectPass instances. Most effects don't require additional varyings, so that limit shouldn't be a problem either. However, some effects may use multiple varyings for UV coordinates to optimize texel fetches. This case will be treated similarly and limitations will be reported to the user accordingly.
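
The relevant hardware limits can be queried via WebGL, so an EffectPass could compare its merged shader against them and warn the user instead of failing silently:

```js
// Query the limits a merged shader ultimately has to respect.
const gl = renderer.getContext();
const maxFragmentUniforms = gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS);
const maxVaryings = gl.getParameter(gl.MAX_VARYING_VECTORS);

console.log(`uniform vectors: ${maxFragmentUniforms}, varying vectors: ${maxVaryings}`);
```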

Conclusion

At this point, it's hard to tell whether the proposed changes can live up to expectations. I did consider other options, but none of them looked as promising as the EffectPass approach.

I'll try it out first to see if there are any hidden problems. In the meantime, feel free to share your thoughts.

✌️
