Behavior change: tostring() will start returning shortest precise decimal representation of numbers


We’re going to be making a change to how tostring() prints numbers, which may result in longer output with decimal digits that you haven’t seen before. If you were relying on number-to-string conversion producing up to a given number of decimals, you should probably use string.format("%.1f", v) or an equivalent instead (this expression prints the number rounded to one decimal digit).

If you want to use the exact same formatting method that we’ve used before as part of tostring, you can use string.format("%.14g", v).
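As a quick sketch, here are the three formatting options side by side (the printed values follow the behavior described in this post):

```lua
local v = (1.1 - 1.0) * 100

print(tostring(v))               -- new behavior: 10.000000000000009
print(string.format("%.14g", v)) -- previous tostring behavior: 10
print(string.format("%.1f", v))  -- fixed display precision: 10.0
```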

This change is currently enabled in Studio, which should allow you to test your experiences; if you have any UI that displays decimals, you should verify that it still displays acceptably short output, and use string.format if it doesn’t.

We plan to enable this change on client/server in live games on February 16th, 4 weeks from now.

UPDATE: This change has been enabled on February 17th (11 AM PST).

We originally didn’t anticipate that this change would have a broad impact, so we enabled it on January 13th around 4 PM, but following developer reports that highlighted the regressions in UI we decided to disable it on January 14th around 5 PM.

This change is important because it makes tostring() more reliable, allows round-tripping of numbers through strings with the shortest output, and eliminates confusion around precision issues in floating point computations.

Why do this?

The rest of this post is going to be spent explaining why we can’t have nice things, and why this change was necessary.

Luau works with double-precision numbers that follow IEEE754, which is a standard for floating point computation that is implemented by all modern hardware, and used by all modern programming languages. Double-precision numbers can represent integers exactly up to a rather large limit, but can not represent decimals exactly.

As a result, you may be surprised to learn that, for example, (1.1 - 1.0) * 100 ~= 10. This is not Luau specific - this is how, for better or worse, computers work with numbers.
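To see this directly, you can print a number with more digits than tostring normally shows, which reveals the value actually stored (the trailing digits below come from the IEEE754 double nearest to 0.1):

```lua
-- 0.1 cannot be represented exactly in binary; the nearest double is slightly larger
print(string.format("%.20f", 0.1)) -- 0.10000000000000000555
print(0.1 + 0.2 == 0.3)            -- false: both sides carry their own rounding error
```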

The problem with tostring was that it would print numbers with up to 14 significant digits, which is simultaneously too many and not enough. For example, using the previous example, this code:

print((1.1 - 1.0) * 100, (1.1 - 1.0) * 100 == 10)

Used to print the following nonsensical output before this change:

10 false

How can an expression be simultaneously 10 and not equal to 10? That’s because tostring printed only up to 14 significant digits of a number, so it would sometimes truncate non-zero digits from the output.

With the new change, the code above now prints:

10.000000000000009 false

Which makes sense - it’s now clear that the number is not equal to 10 - and also this matches the behavior of most modern languages, including but not limited to JavaScript, Python, Rust and Swift.

With this change, we now have tostring print the shortest precise decimal representation of the input number. Precise means that two different numbers never result in the same output, and you can recover the original number from the resulting string perfectly - which is to say, tonumber(tostring(v)) == v (unless v is NaN since these don’t compare equal to themselves). Shortest means that the result has the smallest number of decimal digits while still satisfying the precision constraint.
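A small sketch of the round-trip guarantee described above:

```lua
-- with shortest precise output, tostring/tonumber round-trips exactly
local values = { 0.1, 1/3, (1.1 - 1.0) * 100, 2^53 - 1 }
for _, v in ipairs(values) do
    assert(tonumber(tostring(v)) == v)
end

-- NaN is the one exception: it never compares equal to itself
local nan = 0 / 0
print(nan == nan) -- false
```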

This is much more difficult than it seems. It requires special algorithms to be able to maintain the precise and short output and do it efficiently - in fact, our new implementation is up to 10 times faster than the old implementation depending on the platform/input number, because it uses a novel algorithm developed in the last few years to print numbers correctly and efficiently.

Precision can be vital as it means that, for example, sending the data as a string to the server and then doing the same computation on the server and on the client will return the same results, which was not guaranteed before. tostring is a fundamental language primitive and discarding information as part of that primitive was a bug (that took a while to identify and correct).

My calculation is now imprecise!

It’s tempting to look at an example above and say “this change is wrong because it breaks something that wasn’t broken before; after all, my math worked and now it doesn’t!”. However it’s important to emphasize that this change doesn’t introduce any imprecision - it makes tostring more precise, and as a result, it merely makes the imprecision that always existed visible. This is important because hiding the problem doesn’t make it go away - it may surface in comparisons, such as this code:

local v = (1.1 - 1.0) * 100
if v > 10 then
    print(v) -- used to print 10
end

or developers assuming decimals “work” which would only be true for small errors, such as this code:

local r = 0
for i = 1, 1000 do
    r += 0.1
    print(r) -- used to print "round" results until i = 541, when the first "non-round" value appeared
end

… which, if you test it on smaller limits of the loop, would lead you to believe that your code works correctly.

So, the new behavior may be surprising at first, but it merely tells you the true value of the numbers you work with, which will affect the behavior of your code, and as such we believe it’s better to be precise and correct than nice.

My code is affected, what do I do?

Having said all that, of course, Roblox games at some point invariably end up displaying numbers to the users, and these numbers should probably be concise. The best option for ensuring this is to have a certain precision in mind (e.g. you want up to two decimal digits) and use string.format like so:

string.format("%.2f", v) -- returns v with up to two decimal digits

If you worked with integers exclusively, then you likely don’t even have this problem to begin with - integer math is precise.

If you worked with arbitrary rational numbers, then you likely already know that this problem exists - after all, 14 digits was way too much for a human to read as well, and 1 / 3 would have returned 0.33333333333333 before (and now returns 0.3333333333333333 with two extra digits), so you’d have to round the result for display.

The most likely cases where this change requires you to modify your code are twofold:

  1. Your code worked with short decimals such as 0.1 and expected tostring to return a short decimal as well. In this case, your code could have a problem due to accumulated error even before this change, see the for loop example in the previous section; string.format is the correct and comprehensive way to fix it.

  2. Your code anticipated the problem and used a rounding function to fix it but the rounding function was imprecise.

While we recommend using string.format, in case you need to round the number to the closest short decimal and keep it a number, you can use the following function:

local function decimalRound(num, places)
  local power = 10^places
  return math.round(num * power) / power
end

For example, decimalRound(1 / 3, 2) returns 0.33.

There are subtle variations of this function that are incorrect; for example, if you use math.round(num / 0.01) * 0.01, you will not produce the correct output because of a double-rounding error in the final multiplication: that multiplication loses a bit of precision, and you’re multiplying by 0.01, which doesn’t have a precise binary equivalent!
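To make the difference concrete, here is the correct variant next to the incorrect one. Since 0.01 has no exact binary representation, the second expression can land on a different double than the first for some inputs (exact outputs depend on the input, so they are deliberately hedged below):

```lua
local num = 1/3

-- correct: scale by an exactly representable integer, then divide once at the end
local good = math.round(num * 100) / 100

-- incorrect: the final multiplication by the imprecise constant 0.01
-- can reintroduce error after rounding (double rounding)
local bad = math.round(num / 0.01) * 0.01

print(good) -- 0.33
print(bad)  -- may print a longer decimal such as 0.33000000000000002
```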



Great change; however, one side effect of changes like this is that it affects older games whose developers may be inactive and won’t be notified of the change, which could result in unexpected number displays to players.


Well this is a great change but rip to all those simulators that don’t get updated anymore


Thank you so much for letting us know about this change! Players reported some problems this caused (14 Jan), so it would have been nice if we had been told a bit before.

Without any update on our side, some UI started playing up.

Anyway, the details on how to avoid this kind of artifact will be very useful for fixing that, thanks again.


Interesting change, I will probably have to modify some of my code for that, but it’s nice that it’s now more accurate, will definitely be useful in the future !


This is a bad change. If we put a string using tostring() now, does this mean it won’t be normal anymore?

For example, tostring(“69 test”) would return 69 test.

Now, it would return 69.00000000009 test or something instead? This is a bad change imo, unless I read this completely wrong.


I don’t really see why this is a good update, I just see people above saying it’s good without reasons?


A string remains a string, it’s the numbers that display differently now.


I meant that using tostring on a number would display the digits, but not for a string. If you use it with a string as the argument, it is already a string so it would just return the same thing.

tostring(69) may return 69.00000000009.
tostring("69") returns "69".

Notice how in the first example I used a number, but in the second I used a string.


What is the need and what are the benefits of this update? Everyone is saying it’s good without giving reasons.


Please make sure your comments are actually productive to the discussion. I’m not sure that you actually read the announcement considering your example was a string rather than a number. You also could’ve tested it for yourself and seen that’s not the case since the change is now live in studio, but again, I don’t know that you actually fully read it.


Yes you read this completely wrong. You also didn’t even test it.
tostring("69 test") is still “69 test”
tostring(69) is still 69.

Doesn’t apply at all to integers. It’s simply more precise now when working with floating points / rational numbers and more consistent, because before you could get

local v = (1.1 - 1.0) * 100
if v > 10 then
    print(v) -- used to print 10, which would've confused people why 10 > 10 is true
end

Naively, (1.1 - 1.0) = 0.1 and 0.1 * 100 = 10, so it passes the condition of the if statement and prints v as 10, implying that 10 > 10; obviously 10 isn’t greater than 10. Because of how computers work with numbers, (1.1 - 1.0) * 100 is actually 10.000000000000009, and now it properly displays that, which makes debugging things like that if statement easier (10.000000000000009 > 10, and we can see that’s true).

but if you need to round it to a given number of decimal places, so it displays as 10, then you can just do

local function decimalRound(num, places)
  local power = 10^places
  return math.round(num * power) / power
end

print(decimalRound((1.1 - 1.0) * 100, 2)) -- 10

This new update makes total sense to me; however, mathematically (1.1 - 1.0) * 100 is not 10.000…009.
This is not related to tostring() at all, but doesn’t that make the math behind the system inaccurate?


Kind of, but it exists everywhere; it’s just how computer math and floating-point limitations work.


The announcement already listed the main reason. tonumber(tostring(n)) == n used to hold only sometimes, but now it’s true 100% of the time: more consistent behavior, and no precision being lost between tostring and tonumber conversions (round-tripping). Please do not say no reasons were given when they definitely were. On the other hand, you haven’t actually specified why you don’t like the change.


Yes, it’s inherently inaccurate due to the way non-integers are internally handled. This problem exists universally on virtually every computing system. The decimal equivalent of 1/3 is equal to 0.333 repeating forever, meaning it’d take an infinite amount of space to store with 100% precision. Of course, that’s impossible, besides also being unnecessary.

Below is a visual example of an internally stored double-precision floating-point number (usually shortened to just “double”), which is what Lua uses for all numbers. The link is for floating-point numbers in general rather than specifically just doubles.

Floating-point arithmetic - Wikipedia


It isn’t, but it’s a tradeoff: you get a greater range of representable numbers in exchange for being able to exactly represent fewer human-readable numbers. For better or worse, that’s the tradeoff that the programming profession as a whole has chosen to make in most cases.

There are domains where you take a different approach. For instance, any system working with money/currencies that never wants to “lose track” of any money typically uses a fixed-point representation, where you basically store the number multiplied by 10000 to give it exactly 4 decimal places’ worth of precision.
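A minimal fixed-point sketch of the idea (SCALE, toFixed, and fromFixed are hypothetical names chosen for illustration):

```lua
-- amounts stored as integer ten-thousandths; integer math is exact
local SCALE = 10000

local function toFixed(x)
    return math.round(x * SCALE)
end

local function fromFixed(n)
    return n / SCALE
end

local a = toFixed(0.1) -- 1000
local b = toFixed(0.2) -- 2000
print(fromFixed(a + b)) -- 0.3: the addition itself never drifts
```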

However if you use such a fixed point representation, then the largest number you can store also becomes 4 digits shorter, and you can’t represent very small or very large numbers in a fixed amount of space at all. That’s fine in a monetary context where only so much money exists… but in a game engine sometimes someone wants to put a planet object out at 1000000 studs or micro-adjust the size of their object by 0.00001 studs to make it fit somewhere and expects it to work reasonably well and performantly.


Like in the OP, use string.format to get the exact display you want, e.g. string.format("%.3f", 1.123456) will result in 1.123.


Could this be something that can be set by the developer?
Many older games that are no longer updated will display weird numbers.