+1 vote

I'm making a 2D first person dungeon crawler, and while the images in game have been hand drawn so far, I'm experimenting with drawing the walls of the dungeon in code via draw_polygon.

My problem is that polygons drawn at particular angles shear the images in such a way as to make them unusable to me if the walls have any textures on them: https://imgur.com/a/JYT5lUy

My best guess is that Godot fills a polygon with a texture by triangulating the polygon and fitting the texture to the resulting triangles, with each of these wall polygons being split into two triangles.

My first instinct is to use multiple polygons per wall piece instead of one, to reduce the appearance of the shearing, but that would take a very large amount of work and would probably also be fairly resource-intensive. I'm wondering if there's a better way I'm just not thinking of?

in Engine by (15 points)

You are correct that this happens due to the triangulation: the UV is linearly interpolated across each triangle. Instead, you need to carry a third value (you can imagine it as a depth factor) and linearly interpolate over this vec3 instead of the vec2 UV. The fragment shader can then do xy / z to get a perspective-correct UV value.
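To see the difference numerically, here is a small sketch (assumed numbers, not taken from the engine) of what the GPU does: linear interpolation of the raw UV in screen space shears the texture, while interpolating (u · f, f) and dividing in the fragment stage recovers the perspective-correct coordinate.

```python
def lerp(a, b, t):
    """Linear interpolation, as the rasterizer does in screen space."""
    return a + (b - a) * t

# Two vertices of a wall edge: texture coordinate u and depth factor f
# (f = camera_height / (camera_height - vertex_height), see below).
u0, f0 = 0.0, 1.0        # lower edge, on the ground
u1, f1 = 1.0, 4.0 / 3.0  # upper edge, lifted towards the camera

t = 0.5  # halfway across the triangle in *screen* space

# Naive linear interpolation of the UV -> sheared texture:
u_linear = lerp(u0, u1, t)  # 0.5

# Perspective-correct: interpolate (u * f, f) linearly,
# then divide in the "fragment" stage:
u_correct = lerp(u0 * f0, u1 * f1, t) / lerp(f0, f1, t)  # 4/7, ~0.571
```

The two results differ (0.5 vs. ~0.571), which is exactly the mismatch you see as shearing along the triangle diagonal.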

One way to do this would be to use quadrilateral interpolation: https://www.reedbeta.com/blog/quadrilateral-interpolation-part-1/

Though in your use case that isn't even needed, since you have a few helpful constraints (the top edge is parallel to the bottom edge, and height can more or less be emulated). So instead of using the distance to the diagonal (as in the article I linked), you can calculate the depth factor by using 0 as the height of the "lower" edge and the texture height for the "upper" edge of your parallelogram. "Lower" and "upper" here refer to how it looks in perspective, the lower edge being the one on the ground. The depth factor would be something like

camera_height / (camera_height - vertex_height)

The camera height roughly determines how strong the perspective effect is and must be greater than the vertex height, while the vertex height should represent the vertex's simulated height (derived from the upper/lower edge as noted above).
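Plugging in some assumed numbers (a camera 512 units up and a 128-pixel-tall wall texture) shows how the formula behaves: the lower edge stays put (factor 1), while the upper edge is pushed outward.

```python
# Assumed values for illustration: camera 512 units up, wall 128 px tall.
camera_height = 512.0

def depth_factor(vertex_height):
    # The formula from above: camera_height / (camera_height - vertex_height)
    return camera_height / (camera_height - vertex_height)

lower = depth_factor(0.0)    # lower edge on the ground -> 1.0 (unchanged)
upper = depth_factor(128.0)  # upper edge -> 4/3, displaced outward
```

Lowering `camera_height` towards the wall height makes `upper` blow up, i.e. a stronger perspective effect, which is why the camera must sit above the tallest vertex.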

I did implement this some weeks ago (fake 3D buildings, pure 2D), but I'm not at home currently. I'll send a code example later once I'm home. (So only a comment for now; once I can copy some example code from my project I'll write a proper answer.)

Very interesting! Thanks for the reply.

I haven't touched shaders before, but this situation is definitely looking like a good excuse to figure out how they work. (or I could do the smart thing and just draw the cubes in 3d, but where's the fun in that? xD)

2 Answers

+1 vote

This is because texture rendering in 2D is not perspective-correct. It was never designed to be used for this purpose, and Godot's 2D engine in general aims to behave like a true 2D engine – for instance, it doesn't have a depth buffer. It may be possible to abuse 2D shaders to make the lack of perspective-correct texturing look less noticeable, but this may not be easy. See Perspective grid animated shader and Perspective Warp/Skew shader for inspiration.

If you need perspective-correct rendering without too much hassle, I would recommend rendering the 3D game world using a viewport, or switching to 3D altogether.

by (12,327 points)
edited by

Ah, good to know that's just how 2D works. And thank you very much for answering.

Just using cubes drawn in the 3D mode would probably be the smart thing to do here since I've never touched shaders before, but that actually sounds like something that would be really interesting to learn about...

Thank you for all the links, you've been a huge help!

+2 votes

Completely forgot to post my shader yesterday. Anyway, here it is:

shader_type canvas_item;

uniform vec2 camera_position = vec2(0, 128);
uniform float camera_height = 512.0;

varying vec3 uvd;

void vertex() {
    // using VERTEX.y as height component:
    vec3 position_3d = vec3(VERTEX, 0.0);
    // perspective line only on 2d xz plane (3d simply not needed)
    vec2 perspective_line = position_3d.xz - camera_position;
    float height_diff = camera_height - position_3d.y;
    float depth_factor = camera_height / height_diff;
    // calc the vertex pos based on the perspective. moves along
    // the perspective line outgoing from the camera pos.
    VERTEX = camera_position + perspective_line * depth_factor;
    // the important part to prevent shearing: multiplying the UV
    // by the depth factor to be able to linearly interpolate over
    // the 3d vector and then reconstructing the corrected UV in
    // the fragment stage:
    uvd = vec3(UV, 1.0) * depth_factor;
}

void fragment() {
    vec2 corrected_uv = uvd.xy / uvd.z;
    COLOR = texture(TEXTURE, corrected_uv);
}

It needs to be set as a ShaderMaterial (down in the "CanvasItem" section of the inspector) on a 2D Sprite, for example, or any other CanvasItem-derived node.
My Sprite has the following options set:

  • Not centered (otherwise part of the node will look like it's "underground" after the perspective transform)
  • Flip V (I use UV.y = 0 as the indicator for the "lower" edge of the sprite)
  • Offset can be kept at (0, 0). Changing the x value moves the sprite left/right; changing the y value moves the sprite perspectively up/down, which can have some desirable effects

Currently it's impossible to get the global position of a node in canvas item shaders, so instead I pass in the local position of the camera as a uniform. For this you need a unique instance of the material per node. I use the following script to set the uniform:

extends Sprite

export var camera: NodePath
onready var _cam: Camera2D = get_node(camera)

func _process(_delta: float) -> void:
    var camera_position := to_local(_cam.global_position)
    material.set_shader_param('camera_position', camera_position)

In theory one could handle the perspective completely in view space; the world matrix could be used for this. I was just too lazy to do so.

Oh, and you may need to calculate a custom z index. 2D rendering has no depth buffer, so objects are drawn in tree order; this is bad for 3D, as things can overlap each other in different ways depending on the perspective. One way to fake this is to calculate the z index as follows each frame (it uses the same local camera position as the script above, so you can simply add this line to the script if you need it):

z_index = int(max(screen_max - camera_position.length(), 0))
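The idea is simply that sprites closer to the camera must be drawn later, i.e. get a higher z index. A quick sketch in Python (here `screen_max` is assumed to be some constant at least as large as the biggest camera-to-sprite distance that can occur, so the result never clamps for visible sprites):

```python
# Assumed constant: an upper bound on the camera-to-sprite distance.
screen_max = 2000.0

def fake_z_index(camera_to_sprite_distance):
    # Nearer sprites -> larger z index -> drawn on top.
    return int(max(screen_max - camera_to_sprite_distance, 0.0))

near = fake_z_index(100.0)   # close to the camera
far = fake_z_index(1500.0)   # further away; near > far, so near draws on top
```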
by (56 points)
edited by

Very interesting, thank you for providing your code example! It's definitely something I'll study. Shaders are a really interesting topic, and there's clearly a lot I can do with them.

Thank you for giving such a detailed response

Welcome to Godot Engine Q&A, where you can ask questions and receive answers from other members of the community.
