Inconsistent size of floats?

:information_source: Attention Topic was automatically imported from the old Question2Answer platform.
:bust_in_silhouette: Asked By cherryblossom

Hello everyone.

In the binary serialization API, it is mentioned that floats use 32 bits of memory.
But when I use var2bytes on such floats, they often return a 12-byte PoolByteArray (4 bytes of which are the type header), which would imply that floats use 8 bytes, i.e. double precision.
Actually, it happens every time my float isn’t an integer. Only when my float is an integer is the PoolByteArray 8 bytes in length.

Now, my initial thought was that floats were changed to use 8 bytes and that integer values are automatically converted to the integer type, even when I try to force a cast to float.
But then I realized that the type header reads 3 on integer-valued floats, which implies that they actually ARE floats.
Furthermore, I also noticed that the 64-bit float not only contains 3 in its type header, but also a bit set to 1 at the 17th (or 24th?) position, which I guess is a flag set to true for double-precision floats?

Well, that’s really inconvenient for me, as I’m trying to broadcast data over my network in snapshots. To reduce bandwidth, I’m stripping the type header off the PoolByteArrays, since I supposedly know what type I’m dealing with on the receiving end.
Which I do, but the variable size of floats really messes with that.
The only workaround I’ve found is to add a very small number to my floats (something like 0.00000000001) to force 64-bit encoding.
The thing is, I don’t even want to use 64-bit floats, but 32-bit ones instead.

I also find it really weird that floats switch to double precision as soon as they aren’t integers anymore.

Best regards.

:bust_in_silhouette: Reply From: wombatstampede

You will find the C++ source code used by var2bytes here:

The flag you’re referring to is declared near the top:

#define ENCODE_FLAG_64 1 << 16 
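In case it helps to see where that flag ends up: as far as I can tell from the source, the first 4 bytes of the encoded variant are a little-endian uint32 whose low 16 bits hold the Variant type and whose high 16 bits hold the encode flags. Here is a small Python sketch (my own illustration, not Godot code) of splitting such a header:

```python
import struct

def decode_header(buf):
    """Split the 4-byte variant header into (type id, flags).

    Assumed layout: little-endian uint32, low 16 bits = Variant
    type, high 16 bits = encode flags (e.g. ENCODE_FLAG_64).
    """
    (word,) = struct.unpack_from("<I", buf, 0)
    return word & 0xFFFF, word >> 16

# Header of a 64-bit-encoded REAL: type 3, with ENCODE_FLAG_64 set.
type_id, flags = decode_header(bytes([0x03, 0x00, 0x01, 0x00]))
# type_id == 3 (Variant::REAL), flags == 1
```

That flag bit is `1 << 16`, i.e. the 17th bit counting from 1, which matches the bit you saw.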

Then a bit further down inside encode_variant you’ll find:

	case Variant::REAL: {

		double d = p_variant;
		float f = d;
		if (double(f) != d) {
			flags |= ENCODE_FLAG_64; //always encode real as double
		}
	} break; 

So this code checks whether the exact source value is the same as a float and as a double. If it is not, then (I guess) the code assumes that single precision isn’t sufficient and switches to double instead. That is perhaps not what you want, and it may not be optimal, but it is consistent.
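You can reproduce that check outside Godot. A hedged Python equivalent of the `double(f) != d` test, round-tripping the value through a 32-bit float with `struct`:

```python
import struct

def needs_double(d):
    """Mimic the check in encode_variant: cast the double to
    float32 and back; if the value changed, 32 bits weren't enough."""
    f = struct.unpack("<f", struct.pack("<f", d))[0]
    return f != d

needs_double(3.0)   # False: integers (in range) are exact in float32
needs_double(0.5)   # False: 0.5 is exactly representable too
needs_double(0.1)   # True: 0.1 has no exact binary representation
```

This also explains why “non-integer” floats usually trip the check: decimal fractions like 0.1 have no exact binary representation, so the double and its float32 rounding differ. Exact binary fractions like 0.5 or 0.25 should still encode as 32-bit.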

If you strip parts off the encoded data, you will always risk breaking it. If not now, then maybe in a future version when the var2bytes code is changed again.

Anyway, perhaps you could encode multiple values in a PoolRealArray. It has only one type header and one array-length field for all encoded values, and the values themselves are (currently!) encoded as 32-bit floats, as I understand from the code I linked above.
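To illustrate the bandwidth point: packing N floats as raw 32-bit values (which is roughly what a PoolRealArray’s payload amounts to, if I read the code right) costs a fixed 4 bytes per value plus a single header. A Python sketch of that kind of fixed-size packing, again just my own illustration:

```python
import struct

values = [1.0, 0.1, 2.5, -7.25]

# Pack all values as little-endian 32-bit floats: one fixed-size
# payload, no per-value headers.
payload = struct.pack("<%df" % len(values), *values)

# The receiver knows the count and the type, so decoding
# is unambiguous; each value costs exactly 4 bytes.
decoded = struct.unpack("<%df" % len(values), payload)
```

Note that values like 0.1 come back only approximately, since they were rounded to single precision on the way in.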

Ah, thank you very much! That makes sense.
I suppose the best solution is to encode a PoolArray for each broadcast type into a single PoolByteArray, as this will probably not add much overhead, depending on the packet size of course.
Or the slightly messier version: store only the floats in a PoolRealArray and leave the rest as it is (which actually doesn’t seem all too bad).

My implementation right now pretty much allows me to change only the encoding/decoding part without messing up the rest, so I might just end up writing a GDNative script for it sometime later.
Or a module, but I really want to avoid that, as I kind of want to keep my Godot installation “clean”.

cherryblossom | 2019-10-28 12:00