Hi community :)
According to the IEEE Standard for Floating-Point Arithmetic (IEEE 754), there are several floating-point formats, such as float (32-bit, single precision) and double (64-bit, double precision).
The standard implies that a float can represent about 7.2 significant decimal digits and a double about 15.9. And that is where I'm a little confused about how Godot represents floating-point numbers.
In GDScript I compute 1.0/3.0 and print the result with 16 decimal digits to check the number of significant digits of floating-point numbers in Godot. This is the code of interest:
var result = 1.0/3.0
print("%.16f" % result)
The output of the above code is:
0.3333333333330000
If I try the same computation in Java or C++, I always get:
0.33333334 (in case of float)
0.3333333333333333 (in case of double)
So why does Godot print floating-point numbers with only 12 significant digits? This is not what I would expect from IEEE 754.
Can anyone explain the above result? Or is the print() function the problem?
Here is some information about my OS and the Godot version I'm using:
OS: Windows 8.1, 64 Bit
Godot version: 3.0.2 stable
Thank you :)