| | |
| --- | --- |
| Attention | Topic was automatically imported from the old Question2Answer platform. |
| Asked By | Arron Washington |
| Old Version | Published before Godot 3 was released. |
I had a problem earlier in Godot where it was taking several ms (almost 10!) to instance a simple scene. I eventually discovered the root cause: instancing nodes from a thread seemed to incur a serious performance penalty. Using my network client as an example:
```gdscript
signal message_received(msg)

const MESSAGE_RECEIVED = "message_received"
var thread = Thread.new()

func start():
    thread.start(self, "_run", null)

func _run():
    while true:
        # blocks until a complete message arrives over TCP
        var msg = _block_while_waiting_for_tcp_message()
        emit_signal(MESSAGE_RECEIVED, msg)
```
Code that connected to `MESSAGE_RECEIVED` and instanced a scene based on the data in the message took anywhere from 10-30 ms per object; instancing 500 scenes like this took about 11 seconds.
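For context, the receiving side looked roughly like this (the scene path, node name, and signal handler are placeholders, not my exact code):

```gdscript
# Simplified sketch of the listener (placeholder names throughout).
var entity_scene = preload("res://entity.tscn")
onready var client = get_node("NetworkClient")  # the script shown above

func _ready():
    client.connect("message_received", self, "_on_message_received")
    client.start()

func _on_message_received(msg):
    # this instance() call is where the time was going
    var node = entity_scene.instance()
    add_child(node)
```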
However, changing this line:

```gdscript
emit_signal(MESSAGE_RECEIVED, msg)
```

to this:

```gdscript
call_deferred("emit_signal", MESSAGE_RECEIVED, msg)
```

gave me the expected result: all 500 scenes rendered within 100 ms.
My assumption is that calling `PackedScene.instance()` inside a thread incurs a context switch (background thread → main thread) before the scene is instanced. `call_deferred`, on the other hand, might simply perform that context switch once and then process all of its queued calls, which would explain the big performance improvement. The docs are a little scarce on the semantics of the function calls involved, though.
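For what it's worth, here is roughly how the two paths could be timed against each other (just a sketch with placeholder names, not code I actually ran):

```gdscript
# Rough timing sketch (placeholder scene path; not my actual code).
var enemy_scene = preload("res://enemy.tscn")

# Path 1: instance directly on the worker thread.
func _instance_on_thread(msg):
    var start = OS.get_ticks_msec()
    var node = enemy_scene.instance()
    print("thread instance took ", OS.get_ticks_msec() - start, " ms")
    call_deferred("add_child", node)

# Path 2: defer the whole thing to the main thread.
func _instance_deferred(msg):
    call_deferred("_instance_on_main", msg)

func _instance_on_main(msg):
    var start = OS.get_ticks_msec()
    add_child(enemy_scene.instance())
    print("main-thread instance took ", OS.get_ticks_msec() - start, " ms")
```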
Does anyone know if my understanding of `call_deferred` is correct?