How to Render Multiple Shaders in Parallel Non-Destructively

:information_source: Attention Topic was automatically imported from the old Question2Answer platform.
:bust_in_silhouette: Asked By PoisonIvy

I’ve been trying to get into shaders for a little bit now, but I keep running into what feels like a fundamental roadblock. I looked around online and it doesn’t look like this question has been asked before, so it’s entirely possible that I’m coming at this wrong.

My understanding is that, broadly speaking, there are two ways to use shaders in a 3D Godot project.

  1. Apply a spatial shader to some geometry. For example, you could color the faces of a mesh based on its normal vector (sketched below) or displace a plane according to a heightmap texture.
  2. Throw your entire scene into a Viewport and apply a canvas shader to that texture.
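
For the first approach, a minimal spatial shader along the lines of that normal-color example might look something like this (a sketch in Godot 3.x shader language, not a finished material):

```
shader_type spatial;
render_mode unshaded; // skip lighting so the raw normal color is visible

void fragment() {
	// NORMAL is in view space in the -1..1 range; remap to 0..1 for display.
	ALBEDO = NORMAL * 0.5 + 0.5;
}
```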

And it’s my understanding that post-processing is intended to be done with multiple nested Viewports, which allows for multiple shader passes. This is all well and good for some projects (if I wanted to apply some chromatic aberration and then a vignette, for example), but I’m struggling to get it to work with what I’m trying to do.
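
As a rough sketch of that nesting (node names here are placeholders, and the exact setup depends on the Godot version), each pass is a container with a ShaderMaterial that displays the Viewport one level further down:

```
Root
└── ViewportContainer         (ShaderMaterial: vignette pass)
    └── Viewport
        └── ViewportContainer     (ShaderMaterial: chromatic aberration pass)
            └── Viewport
                └── Camera + the actual 3D scene
```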

For example, I was trying to recreate the visual effect from Return of the Obra Dinn. The two primary effects are an edge detection pass and a dithering pass.

I can do the edge detection pass by rendering the depth texture to a quad and positioning that quad in front of the camera. Works great.
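
For reference, the kind of quad shader I mean is roughly the following (Godot 3.x; the threshold is a made-up value, and a real version should linearize the depth with INV_PROJECTION_MATRIX as in the docs' advanced post-processing example):

```
shader_type spatial;
render_mode unshaded;

uniform float edge_threshold = 0.05; // illustrative value, tune per scene

void fragment() {
	vec2 px = 1.0 / VIEWPORT_SIZE; // one screen pixel in UV space
	// Raw (non-linear) depth at this pixel and its right/top neighbors.
	float d  = texture(DEPTH_TEXTURE, SCREEN_UV).x;
	float dx = texture(DEPTH_TEXTURE, SCREEN_UV + vec2(px.x, 0.0)).x;
	float dy = texture(DEPTH_TEXTURE, SCREEN_UV + vec2(0.0, px.y)).x;
	// Crude edge measure from depth discontinuities.
	float edge = step(edge_threshold, abs(dx - d) + abs(dy - d));
	ALBEDO = vec3(1.0 - edge); // black edges on white
}
```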

And I can do the dithering by throwing the entire scene into a Viewport and performing a dithering pass on the available texture. Also works great.
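
The dithering pass on the Viewport texture is then something along these lines (a canvas_item shader on the TextureRect/Sprite that shows the Viewport; the 2×2 pattern and thresholds are only illustrative, a real version would use a larger Bayer or blue-noise matrix):

```
shader_type canvas_item;

void fragment() {
	vec4 scene = texture(TEXTURE, UV); // the Viewport texture assigned to this node
	float lum = dot(scene.rgb, vec3(0.299, 0.587, 0.114)); // rough luminance
	// 2x2 ordered dither: each pixel in a 2x2 block gets a different threshold.
	vec2 p = mod(floor(FRAGCOORD.xy), 2.0); // 0 or 1 on each axis
	float threshold = 0.2 + 0.2 * p.x + 0.4 * p.y; // 0.2, 0.4, 0.6, 0.8
	COLOR = vec4(vec3(step(threshold, lum)), 1.0); // pure black/white output
}
```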

But if I try to do them both, the edge detection quad destroys all non-edge data before it gets to the dithering step. The dithering pass no longer has access to the camera’s view. It only sees the edge detection displayed by the quad.

What I want to do is run the edge detection and the dithering “separately” somehow, and then combine them.

The only thing I can think of is having two entirely separate scenes, one for dithering, one for edge detection, that pass their textures to a parent Viewport which then combines them. But that feels so incredibly dumb that it can’t be the intended solution, right? I feel like that opens the door to endless bugs and issues with keeping the two scenes in parity.

I can’t do the edge detection in the Viewport shader since you don’t have access to the depth texture from a canvas shader. And I can’t do the dithering from the edge detection shader because the quad blocks the camera’s view, making it impossible to see the actual scene from within the quad.

Any help would be appreciated. Thank you in advance.

I am not sure if I understood correctly - is your final problem about the interaction of canvas shaders only? If so, you can use a BackBufferCopy node. Place it in between your canvas objects in the scene tree hierarchy. It stores the state of the viewport before the next material is applied, so the material after it can use data from the original viewport for its render.
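
A rough sketch of that setup, assuming the edge pass and the dither pass are both full-screen canvas items with ShaderMaterials (node names are placeholders):

```
EdgeDetectRect    (draws the edge-detected image)
BackBufferCopy    (copy_mode/rect set to cover the screen area you need)
DitherRect        (ShaderMaterial that reads SCREEN_TEXTURE)
```

The shader on the node after the BackBufferCopy can then sample what was already drawn (Godot 3.x):

```
shader_type canvas_item;

void fragment() {
	// SCREEN_TEXTURE holds what was on screen when the BackBufferCopy ran,
	// i.e. the scene with the edge pass already composited.
	vec4 behind = texture(SCREEN_TEXTURE, SCREEN_UV);
	COLOR = behind; // do the dither/combine here instead of passing through
}
```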

Inces | 2022-07-30 14:35