Definitely surprised this was initially rolled out without warning because they didn’t foresee the impact, given that this will mess with UI in a lot of games, but it’s definitely a logical change to make.
In fact, it seems strange to me that this wasn’t the case initially.
I wonder what chain of events led to this strange functionality.
Vector3 conversion uses the format string %.9g for the vector components, where %g means “the shorter of %f / %e”, and AFAIK exactly how %f works isn’t strictly defined in the C specification. So to summarize, tostring on a Vector3:
- Isn’t affected by this change.
- But has always had the potential to vary slightly between platforms.
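Since %g is the same printf-style specifier that C and Python use, the effect of %.9g can be sketched in Python (the 0.1 literal and the float32 round-trip are just illustrative; Vector3 components are stored as single-precision floats):

```python
import struct

# %.9g keeps at most 9 significant digits and trims trailing zeros,
# so a clean double like 0.1 formats to a short string.
short = "%.9g" % 0.1  # "0.1"

# Squeezing 0.1 through a single-precision (float32) value, as a
# Vector3 component would, nudges it slightly off 0.1, and %.9g
# shows the first nine significant digits of that nudged value.
f32 = struct.unpack("<f", struct.pack("<f", 0.1))[0]
nine = "%.9g" % f32  # "0.100000001"

print(short, nine)
```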
Is using math.round() before changing a number to its string variant going to bring the same result as before? UI designer here, take it easy on me…
Yeah, math.round returns an integer number, so using tostring on the result will continue to return the number without any extra decimals.
It’s a nice change for more precise output. In the future you could have gotten a really weird bug, caused by how floating point numbers are calculated, that you wouldn’t know how to fix, but because of this update you won’t. It’s an edge case, but it’s still more precise and up to the standards of modern languages.
As long as your number is a whole number, yes. Previously, using tostring on something like 1.2 would return “1.2”. math.round will round it down to 1.
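As a sketch of that answer (Python as a stand-in for Luau; note that Luau’s math.round rounds halves away from zero while Python’s round() rounds halves to even, which doesn’t matter for these inputs):

```python
# Rounding first yields an integer, so its string form has no decimals.
whole = str(round(1.7))  # "2"

# But a fractional value like 1.2 is rounded to 1 before the string
# conversion, so the ".2" is lost entirely.
lossy = str(round(1.2))  # "1"

print(whole, lossy)
```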
I can live with this change, but I completely fail to see why it had to replace tostring(). You could easily just give tostring an extra parameter to specify if you want to see numbers precisely, and have prints use that.
If I want to use tostring for precision, great! The option would be there now. But why do we have to completely get rid of the old behavior when so many games rely on it behaving the way it already does? It seems kind of strange considering how Roblox usually takes that kind of approach on other things, like BodyMovers not being replaced when constraints were added.
How is returning a wrong number more precise? This is going to make things less accurate when trying to print out values and find mistakes in code.
It isn’t printing the wrong number; it’s the number Roblox itself “sees”. For example, printing 0.1 + 0.2 would print 0.3, so the logical assumption would be that 0.1 + 0.2 == 0.3, but print(0.1 + 0.2, 0.1 + 0.2 == 0.3) would then illogically say it’s false. But now, 0.1 + 0.2 is printed as 0.30000000000000004.
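The same double-precision behavior can be reproduced in Python, which likewise prints the shortest string that round-trips to the exact stored double:

```python
from decimal import Decimal

x = 0.1 + 0.2
print(x)         # 0.30000000000000004
print(x == 0.3)  # False

# Decimal() shows the exact values the doubles hold: the sum lands
# slightly above 0.3, while the literal 0.3 lands slightly below it.
print(Decimal(x))    # slightly above 0.3
print(Decimal(0.3))  # slightly below 0.3
```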
Completely broke one of my projects, but thanks for letting us know about this change lol
(P.s. I fixed everything successfully, no worries)
It’s not logical for print(1.1 - 1.0) to return 0.10000000000000009; that isn’t math.
But it WOULD be a logical assumption that if printing 1.1 - 1.0 shows 0.1, then printing 1.1 - 1.0 == 0.1 would print true; instead, it prints FALSE. Before this change, it would seem like Roblox was messing with you, saying that 0.1 wasn’t 0.1, but now it shows that it wasn’t 0.1 in the first place.
It would, because computers don’t represent numbers the way we think. They use a sign, exponent, and mantissa (the IEEE 754 format) to represent numbers, which is useful for storing numbers in memory efficiently, but it causes these rounding quirks as a drawback.
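To make this concrete (Python shown as a stand-in, since it uses the same IEEE 754 doubles as Luau): the difference doesn’t land exactly on 0.1, and you can inspect the exact value the literal 1.1 actually stores:

```python
from decimal import Decimal

d = 1.1 - 1.0
print(d)         # 0.10000000000000009
print(d == 0.1)  # False

# The literal 1.1 is itself only the nearest representable double,
# which Decimal() prints exactly:
print(Decimal(1.1))  # a hair above 1.1
```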
Should there be a different behavior for 32-bit float values? It could round to 0.1 because it doesn’t have double precision.
print(Vector3.new(.1, 0, 0))
> 0.10000000149011612, 0, 0
Yeah, I don’t think the current Vector3 behavior makes sense; thanks for flagging this, we’ll revisit it. This was wrong before as well as after:
new:
12:22:37.431 > print(Vector3.new(0.1, 0, 0)) - Studio
12:22:37.431 0.10000000149011612, 0, 0 - Edit
old:
12:22:59.099 > print(Vector3.new(0.1, 0, 0)) - Studio
12:22:59.099 0.10000000149012, 0, 0 - Edit
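Both outputs can be reproduced outside Roblox (Python sketch; using %.14g for the old behavior is my assumption based on Lua’s historical default number format, not something stated here):

```python
import struct

# Storing 0.1 in a float32, as Vector3 components are, nudges it off 0.1.
f32 = struct.unpack("<f", struct.pack("<f", 0.1))[0]

new_style = repr(f32)        # "0.10000000149011612" (exact shortest round-trip)
old_style = "%.14g" % f32    # "0.10000000149012"    (14 significant digits)

print(new_style, old_style)
```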
For anyone who is confused or curious as to why seemingly simple math gives an “incorrect” result in Luau (and in computers in general), as noted in this post, I have made a hopefully helpful demonstration and explanation of why:
Number representations and fractions
Computers store numbers in binary (base 2) format only. As there is only a finite amount of memory, there is a finite number of bits (binary digits) that can be stored.
Think back to school math class when you learned about fractions (and converting them). As you probably know, there are some fractions that cannot be written perfectly as decimal numbers such as:
1/3 = 0.33333333333333...
2/3 = 0.66666666666666...
Those decimal digits will go on forever. You only have a finite size of paper that you can write on (and you have better things to do), therefore you have to round the number for the sake of practicality and at the cost of precision. Most of the time, you will simply state the fraction 1/3 as 0.3 repeating.
All of these concepts also extend to the world of binary numbers. In fact, this extends to any base number system such as hexadecimal (base 16) and octal (base 8).
Why does the precision error occur?
These errors occur when a number that would actually take an infinite number of bits to represent perfectly (like 1/3) is stored in binary form. The computer will round (truncate) the bits so it can store them practically. The issue arises when this number is then converted back from binary to decimal to be displayed on the screen or used for more math. Since the number was rounded, it is technically not the same number as the original or what you expect.
Demonstration
Here is a demonstration of converting the number 0.2 to binary form for the math people out there. As you will see, a perfect conversion is not possible and rounding is required. When I convert a rounded version back to decimal form, I get 0.19921875, which is close to but not equal to the expected 0.2.
The left side shows the steps of converting 0.2 to binary, and the right side converts a truncated version of this binary number (with a precision of 8 bits, chosen for simplicity) back to decimal:
NOTE: The image above is not how fractions are actually converted or stored in real computers. For simplicity, this is how you do it on paper, and it still gets the point across. Computers and languages usually use the IEEE 754 standard for representation.
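The same paper method can be sketched in code: repeatedly multiply the fraction by 2 and peel off the integer part as the next binary digit, then convert the truncated bits back to decimal:

```python
def frac_to_bits(x, n):
    """Return the first n binary digits of the fractional part of x."""
    bits = []
    for _ in range(n):
        x *= 2
        bit = int(x)  # the integer part is the next binary digit
        bits.append(bit)
        x -= bit
    return bits

bits = frac_to_bits(0.2, 8)
print(bits)  # [0, 0, 1, 1, 0, 0, 1, 1] -- the pattern 0011 repeats forever

# Converting the truncated 8 bits back to decimal:
value = sum(b / 2 ** (i + 1) for i, b in enumerate(bits))
print(value)  # 0.19921875 -- close to, but not equal to, 0.2
```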
Could someone please clarify whether this update is solely a behavior change? After reading, it feels more like an engine update to floating point values, but that is only stated indirectly in the OP.
This is not changing anything about floating point math itself; it was always this imprecise. The only thing that is changing is the way tostring() interacts with these numbers.
I hate this. This ruins everything for me and many other devs. I need tostring to make the number into a string, NOTHING else. Make it an extra argument or something. Please make a way to revert this.
You can just use string.format, as stated in the post.
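Luau’s string.format takes the same printf-style specifiers as C, so you can pick a fixed precision and get short, stable strings back. The equivalent in Python looks like this (treating 14 significant digits as the old behavior is my assumption, not something stated in the post):

```python
x = 0.1 + 0.2

# Full round-trip precision, matching the new tostring behavior:
precise = str(x)       # "0.30000000000000004"

# Trimming to 14 significant digits recovers old-style short output:
short = "%.14g" % x    # "0.3"

print(precise, short)
```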