TL;DR
We’re going to be making a change to how tostring() prints numbers, which may result in longer output with decimal digits that you haven’t seen before. If you were relying on number-to-string conversions producing up to a given number of decimals, you should probably use string.format("%.1f", v) or an equivalent instead (this expression formats the value with one decimal digit).
If you want to use the exact same formatting method that we’ve used before as part of tostring, you can use string.format("%.14g", v).
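For a quick comparison of the options above, here is a small sketch; the exact output strings assume the new tostring behavior described in this post:
local v = 0.1 + 0.2
print(tostring(v))               -- "0.30000000000000004" with the new behavior
print(string.format("%.1f", v))  -- "0.3" (fixed number of decimals, good for display)
print(string.format("%.14g", v)) -- "0.3" (the old tostring formatting)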
This change is currently enabled in Studio, which should allow you to test your experiences; if you had any UI that displayed decimals then you should test that that UI still displays acceptably short output, and use string.format if it doesn’t.
We plan to enable this change on client/server in live games on February 16th, 4 weeks from now.
UPDATE: This change was enabled on February 17th (11 AM PST).
We originally didn’t anticipate that this change would have a broad impact, so we enabled it on January 13th around 4 PM, but following developer reports that highlighted the regressions in UI we decided to disable it on January 14th around 5 PM.
This change is important because it makes tostring() more reliable, allows round-tripping of numbers through strings with the shortest output, and eliminates confusion around precision issues in floating point computations.
Why do this?
The rest of this post is going to be spent explaining why we can’t have nice things, and why this change was necessary.
Luau works with double-precision numbers that follow IEEE 754, a standard for floating point computation that is implemented by all modern hardware and used by all modern programming languages. Double-precision numbers can represent integers exactly up to a rather large limit, but cannot represent decimals exactly.
As a result, you may be surprised to learn that, for example, (1.1 - 1.0) * 100 ~= 10. This is not Luau specific - this is how, for better or worse, computers work with numbers.
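To make the point above concrete - integers stay exact up to a limit, decimals do not - here is a small sketch; the limit, 2^53, is the largest range in which doubles can represent every integer exactly:
print(2^53 == 2^53 + 1) -- true: 2^53 + 1 is not representable and rounds back to 2^53
print(0.1 + 0.2 == 0.3) -- false: none of these decimals have an exact binary representation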
The problem with tostring was that it would print numbers with up to 14 significant digits, which is simultaneously too many and not enough digits. For example, using the previous example, this code:
print((1.1 - 1.0) * 100, (1.1 - 1.0) * 100 == 10)
used to print the following nonsensical output before this change:
10 false
How can an expression be simultaneously 10 and not equal to 10? That’s because tostring printed only up to 14 significant digits of a number, so it would sometimes truncate non-zero digits from the output.
With the new change, the code above now prints:
10.000000000000009 false
Which makes sense - it’s now clear that the number is not equal to 10 - and also this matches the behavior of most modern languages, including but not limited to JavaScript, Python, Rust and Swift.
With this change, we now have tostring print the shortest precise decimal representation of the input number. Precise means that two different numbers never result in the same output, and you can recover the original number from the resulting string perfectly - which is to say, tonumber(tostring(v)) == v (unless v is NaN, since NaNs don’t compare equal to themselves). Shortest means that the result has the smallest number of decimal digits while still satisfying the precision constraint.
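As a quick illustration of the round-trip guarantee, a sketch like this (with an arbitrary value) now holds for any finite number:
local v = 0.1 + 0.2
local s = tostring(v)   -- "0.30000000000000004", the shortest precise representation
print(tonumber(s) == v) -- true: the exact original number is recovered from the string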
This is much more difficult than it seems. It requires special algorithms to be able to maintain the precise and short output and do it efficiently - in fact, our new implementation is up to 10 times faster than the old implementation depending on the platform/input number, because it uses a novel algorithm developed in the last few years to print numbers correctly and efficiently.
Precision can be vital as it means that, for example, sending the data as a string to the server and then doing the same computation on the server and on the client will return the same results, which was not guaranteed before. tostring is a fundamental language primitive, and discarding information as part of that primitive was a bug (that took a while to identify and correct).
My calculation is now imprecise!
It’s tempting to look at the example above and say “this change is wrong because it breaks something that wasn’t broken before; after all, my math worked and now it doesn’t!”. However, it’s important to emphasize that this change doesn’t introduce any imprecision - it makes tostring more precise, and as a result, it merely makes the imprecision that always existed visible. This is important because hiding the problem doesn’t make it go away - it may surface in comparisons, such as this code:
local v = (1.1 - 1.0) * 100
if v > 10 then
    print(v) -- used to print 10
end
or when developers assume decimals “work”, which is only true for small errors, such as this code:
local r = 0
for i=1,1000 do
    r += 0.1
    print(r) -- used to count up to i=541 before printing the first "non-round" result
end
… which, if you tested it with a smaller loop limit, would lead you to believe that your code works correctly.
So, the new behavior may be surprising at first, but it merely tells you the true value of the numbers you work with, which will affect the behavior of your code, and as such we believe it’s better to be precise and correct than nice.
My code is affected, what do I do?
Having said all that, of course, Roblox games at some point invariably end up displaying numbers to the users, and these numbers should probably be concise. The best option for ensuring this is to have a certain precision in mind (e.g. you want to show up to two decimal digits) and use string.format like so:
string.format("%.2f", v) -- formats v with two decimal digits
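For instance, applied to the expressions used earlier in this post, you would see something like this (output assuming the formatting described above):
print(string.format("%.2f", (1.1 - 1.0) * 100)) -- 10.00
print(string.format("%.2f", 1 / 3))             -- 0.33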
If you worked with integers exclusively, then you likely don’t even have this problem to begin with - integer math is precise.
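For example, as long as your values stay within the exactly-representable integer range (up to 2^53), results print exactly as you’d expect; a small sketch:
print(7 * 6)    -- 42
print(2^31 + 1) -- 2147483649
print(10 / 2)   -- 5: integer-valued results still print without a decimal point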
If you worked with arbitrary rational numbers, then you likely already know that this problem exists - after all, 14 digits was way too much for a human to read as well, and 1 / 3 would have returned 0.33333333333333 before (and now returns 0.3333333333333333 with two extra digits), so you’d have to round the result for display.
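To see the old and new formatting side by side for this example, a quick sketch (output assuming the formats described above):
print(string.format("%.14g", 1 / 3)) -- 0.33333333333333   (the old tostring formatting)
print(tostring(1 / 3))               -- 0.3333333333333333 (the new shortest precise form)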
The most likely cases where this requires you to change your code are twofold:
- Your code worked with short decimals such as 0.1 and expected tostring to return a short decimal as well. In this case, your code could have had a problem due to accumulated error even before this change (see the for loop example in the previous section); string.format is the correct and comprehensive way to fix it.
- Your code anticipated the problem and used a rounding function to fix it, but the rounding function was imprecise.
While we recommend using string.format, in case you need to round the number to the closest short decimal and keep it a number, you can use the following function:
local function decimalRound(num, places)
    local power = 10^places
    return math.round(num * power) / power
end
For example, decimalRound(1 / 3, 2) returns 0.33.
There are subtle variations of this function that are incorrect; for example, if you use math.round(num / 0.01) * 0.01, you will not produce the correct output because of the double-rounding error that happens during multiplication (the multiplication loses a bit of precision, and you’re multiplying by 0.01, which doesn’t have an exact double-precision equivalent!).