Behavior change: tostring() will start returning shortest precise decimal representation of numbers

Yes, you read this completely wrong, and you didn't even test it.
tostring("69 test") is still "69 test".
tostring(69) is still "69".

This doesn't apply to integers at all. It's simply more precise and more consistent now when working with floating-point (non-integer) numbers, because before you could get:

local v = (1.1 - 1.0) * 100
if v > 10 then
    print(v) -- used to print 10, leaving people confused as to why 10 > 10 was apparently true
end

On paper, (1.1 - 1.0) = 0.1 and 0.1 * 100 = 10.

So the condition of the if statement passes and v prints as 10, which makes it look like 10 > 10 is true; obviously 10 isn't greater than 10. Because of how computers work with numbers, (1.1 - 1.0) * 100 is actually 10.000000000000009, and tostring now properly displays that, which makes debugging things like that if statement easier (10.000000000000009 > 10, and we can see that's true).

But if you need to round to a certain number of decimal places so that it displays as 10, you can just do:

local function decimalRound(num, places)
  local power = 10^places
  return math.round(num * power) / power
end

print(decimalRound((1.1 - 1.0) * 100, 2)) -- 10
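
For comparison, here is a small illustrative sketch of the same value shown raw, rounded with the helper above, and formatted via string.format (the commented outputs assume standard double precision and the new tostring behavior):

print((1.1 - 1.0) * 100)                        -- 10.000000000000009 (exact stored value)
print(decimalRound((1.1 - 1.0) * 100, 2))       -- 10
print(string.format("%.2f", (1.1 - 1.0) * 100)) -- 10.00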
18 Likes

This new update makes total sense to me; however, mathematically (1.1 - 1.0) * 100 is not 10.000…009.
That isn't related to tostring() at all, though. But doesn't it make the result inaccurate? I mean the system behind the math itself.

1 Like

Kind of, but it exists everywhere; it's just how computer math works, with its floating-point limitations.

1 Like

The announcement already listed the main reason: tonumber(tostring(n)) == n used to hold only sometimes, but now it's true 100% of the time. That means more consistent behavior and no precision lost in tostring/tonumber conversions (round-tripping). Please do not say no reasons were given when they definitely were. On the other hand, you haven't actually specified why you don't like the change.
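
As a rough illustration of the round-tripping guarantee (the sample values below are arbitrary; any finite number should behave the same way):

local samples = { 0.1, 1/3, (1.1 - 1.0) * 100, 2^53 + 1, 1e-300 }
for _, n in ipairs(samples) do
    -- With the shortest precise representation, converting a number to a
    -- string and back recovers the exact same double.
    assert(tonumber(tostring(n)) == n)
end
print("round-trip held for all samples")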

5 Likes

Yes, it’s inherently inaccurate due to the way non-integers are internally handled. This problem exists universally on virtually every computing system. The decimal equivalent of 1/3 is equal to 0.333 repeating forever, meaning it’d take an infinite amount of space to store with 100% precision. Of course, that’s impossible, besides also being unnecessary.

Below is a visual example of an internally stored double-precision floating-point number (usually shortened to just “double”), which is what Lua uses for all numbers. The link is for floating-point numbers in general rather than specifically just doubles.
[Image: IEEE 754 double-precision floating-point format (1 sign bit, 11 exponent bits, 52 fraction bits)]

Floating-point arithmetic - Wikipedia
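
To see that storage limitation concretely, here is a small illustrative snippet; the commented digits are the closest double approximations, printed to 20 decimal places:

-- Neither 0.1 nor 1/3 has an exact binary representation; %.20f exposes
-- the values the doubles actually store.
print(string.format("%.20f", 0.1)) -- 0.10000000000000000555
print(string.format("%.20f", 1/3)) -- 0.33333333333333331483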

3 Likes

It isn't, but it's a tradeoff: you get a greater range of representable numbers in exchange for being able to exactly represent fewer human-readable numbers. For better or worse, that's the tradeoff the programming profession as a whole has chosen to make in most cases.

There are domains where you take a different approach. For instance, any system working with money/currencies that never wants to "lose track" of any money typically uses a fixed-point representation, where you basically store the number multiplied by 10000 to give it exactly 4 decimal places' worth of precision.

However, if you use such a fixed-point representation, the largest number you can store also becomes 4 digits shorter, and you can't represent very small or very large numbers in a fixed amount of space at all. That's fine in a monetary context where only so much money exists… but in a game engine, sometimes someone wants to put a planet object out at 1000000 studs, or micro-adjust the size of their object by 0.00001 studs to make it fit somewhere, and expects that to work reasonably well and performantly.
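
A minimal sketch of that fixed-point idea, with four decimal places and purely illustrative names (this is not any real engine API):

-- Amounts are stored as integer ten-thousandths, so addition never
-- accumulates binary rounding error.
local FixedPoint = {}

function FixedPoint.fromString(s)   -- "19.99" -> 199900
    local whole, frac = string.match(s, "^(%d+)%.?(%d*)$")
    frac = string.sub(frac .. "0000", 1, 4)
    return tonumber(whole) * 10000 + tonumber(frac)
end

function FixedPoint.toString(v)     -- 199900 -> "19.9900"
    return string.format("%d.%04d", math.floor(v / 10000), v % 10000)
end

print(FixedPoint.toString(FixedPoint.fromString("19.99") + FixedPoint.fromString("0.01"))) -- 20.0000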

13 Likes

Like in the OP, use string.format to get the exact display you want, e.g. string.format("%.3f", 1.123456) will result in "1.123".

2 Likes

Could this be something that can be set by the developer?
Many older games that are no longer updated will become weird.

3 Likes

I see a lot of people concerned about old games and the function being changed. Is there a way to opt into the new method, say by doing tostring(x, true)? It may be a pain, but at least most games could keep working the way they are while still allowing the people who need EXACT precision to get their way.

I don't mind the change, but it's always a hard thing to accept for a lot of people, and impossible for the old games that are no longer maintained but stuck in archive purgatory. Anyway, thanks for the info; I am personally glad that it is now accurate :+1:

4 Likes

That would just be pure API bloat. If it's really that important, it would be a lot better as a dropdown setting called "StringPrecision", the same way some opt-in beta features work.

1 Like

@zeuxcg
As somebody who really wants this capability, I really disagree with this change, at least in the fact that it has overridden the default tostring behaviour.

I really think that the tostring behaviour before this change, while slower, was actually a much better and more desirable option, because it has always acted as error correction in exactly the way I need it to: readability. Yes, I lose out on that tiny little bit of precision, but I never care about that when I want to convert a float to a readable string. Error correction is extremely useful for maintaining readability, for understanding where my math is going wrong, and for understanding mathematical operations. Now, if I want to do math and have it just work, I simply cannot; I am forced to introduce implementation-specific corrections, which string.format does not solve!

This is not good for beginners, or for me. It's confusing and difficult. As others brought up, floating-point error is a limitation, not a feature.

Secondly, this is almost entirely an advanced feature, and it's a breaking one! As such, I think it should be offered separately. It simply doesn't make anything I commonly do easier (or meaningfully faster!); it only makes readability harder, and the cases where I will want this are limited to advanced uses and in-depth debugging.

And, as much as I love optimization and Luau getting faster, a faster tostring on numbers also just isn't a good justification in my opinion, because tostring isn't for fast code; tostring is for readability. It's bad practice to leave tostring or print inside very large math loops, and if I want to see or use a number's full floating-point value, I don't want to use tostring for that.

I have been using math.floor rounding for my entire time programming, because that's what I was taught, by the Roblox wiki and by the many, many pieces of code available to me on Roblox when I started. These practices continue to carry on still. Even if they are incorrect, the old tostring did a good job of correcting them.

And I know this comes up every time a breaking change happens, but I can also see it breaking older Roblox games, because, well, older Roblox games are notorious for not following good practices. Games being buggy doesn't mean they aren't still fun and worth playing, and it doesn't mean they aren't profitable either; they are usually the one place I spend my money, for one simple reason: the game doesn't push me, it respects me, so I feel more compelled to respect the game's developer, and I get a lot more out of my money that way.

14 Likes

As many people are arguing that it ruins their code, I believe it would be fair to make it an option per game. This would settle a lot of the arguing and let the behavior be whatever you want.

2 Likes

But, as I understand it, Lua's tostring is for readability, not for exact representation. When you call it on most data types, it doesn't return an exact representation of that data type, and it should never do so if that sacrifices readability, unless the developer makes that choice. The __tostring metamethod also goes completely against exact representation.

tostring isn't for exactness, it's for readability, and that's a big reason why I think this tradeoff is the wrong way to go here, even if it's faster. I do not want to do my math correctly, see floating-point error, and immediately worry that I have messed up. I do not want to be forced to use implementation-specific fixes for the problems that exactness introduces. I want tostring to give me the "wrong" results.

This simply doesn’t help me as a developer by being the behaviour of tostring, and it’s confusing for new and old programmers, and most importantly, it’s a breaking change.
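
For what it's worth, here is a tiny sketch of the readability-first route that __tostring already allows; ReadableNumber is a purely illustrative wrapper, not an existing API:

local ReadableNumber = {}
ReadableNumber.__index = ReadableNumber
ReadableNumber.__tostring = function(self)
    -- Format for humans: 4 significant digits is an arbitrary choice here.
    return string.format("%.4g", self.value)
end

local function wrap(v)
    return setmetatable({ value = v }, ReadableNumber)
end

print(tostring(wrap((1.1 - 1.0) * 100))) -- 10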

5 Likes

This is a good change. Floating-point imprecision has always been around, and it's better to have an accurate conversion of the true value than some auto-rounded result.

2 Likes

Just FYI, you should still use math.floor for "rounding" in most contexts. math.round rounds for textual display purposes, but if you want to round for some geometric purpose like grid snapping, then math.floor (or math.ceil) is the correct function to use.
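
A quick illustration of the distinction (snapToGrid and the 4-stud grid size are illustrative, not an engine API):

local GRID = 4 -- studs

local function snapToGrid(x)
    -- floor keeps the point inside the cell it is already in
    return math.floor(x / GRID) * GRID
end

print(snapToGrid(7.9))               -- 4
print(math.round(7.9 / GRID) * GRID) -- 8 (round jumps to the nearest grid line)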

2 Likes

Just to be clear, the change isn't motivated by performance; it's motivated by correctness, and performance is a nice bonus. It is possible to drop extra digits within the new algorithm to match the old behavior more closely, although that would defeat the purpose, in that tostring() would stop being accurate.

Floating-point errors are not a feature - that is correct. However, note that in many cases you can't afford to be unaware of their existence. Should a == b compare with a built-in tolerance? Should math.floor(4.999999999) return 5? This road is fraught with peril; the consistent way to treat floating-point numbers is to have functions actually specify their results according to the rules of floating-point arithmetic, not to try to paper over the internal semantics.

Maybe there’s an argument to be made that tostring should by default use “appropriately few” digits for human consumption. But how few is few enough? 14 digits is too many for human consumption, so why not 10 or 6? Maybe 3 is a good number? The core problem is that the only good default is “print the exact number”, and everything else is specific to the application.
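
To make those questions concrete, this is how Luau actually behaves, with an explicitly caller-specified tolerance shown as the alternative (approxEqual is an illustrative helper, not a builtin):

print((1.1 - 1.0) * 100 == 10) -- false: == has no built-in tolerance
print(math.floor(4.999999999)) -- 4: floor follows the stored value, not the intent

-- If a tolerance is wanted, the caller states it explicitly:
local function approxEqual(a, b, eps)
    return math.abs(a - b) <= (eps or 1e-9)
end
print(approxEqual((1.1 - 1.0) * 100, 10)) -- true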

18 Likes

Also, for anyone who really wants to dig into the technical details of this topic, I would highly recommend watching the first half of this talk (the second half is C++ specific details but the first half is general info), which goes into all the gotchas behind string↔float conversion in exhaustive detail: Stephan T. Lavavej “Floating-Point <charconv>: Making Your Code 10x Faster With C++17's Final Boss” - YouTube

9 Likes

Since math.floor came up, this is what the behavior before this change looks like for code that uses math.floor:

> print((1.4-0.4)*100)
100
> print(math.floor((1.4-0.4)*100))
99

math.floor is not really at fault here of course - it’s just that the result of this expression is not 100, it’s less than 100, but the incorrectly rounded output makes it non-obvious.

The only way to make Luau consistent with respect to number handling is to leave the rounding to the developer - it's context-specific, and string.format is easy to use for that purpose.
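
For example, with the same expression, the developer can pick whichever rounding fits the context (outputs assume standard double-precision arithmetic):

local v = (1.4 - 0.4) * 100
print(string.format("%.0f", v)) -- 100 (round to nearest for display)
print(math.floor(v + 0.5))      -- 100 (round to nearest as a number)
print(math.floor(v))            -- 99  (truncate toward negative infinity)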

4 Likes

While I kind of see the reason behind it, tostring was always meant as a quick, no-fuss solution for turning numbers into strings. If a number is close enough to an integer that the difference is negligible, most people wouldn't want 10 decimals filling up their output. If you really wanted this precision, string.format always existed.

2 Likes

Really happy to see this change. It's about time tostring(number) shows what's actually being stored in the number variable, rather than us having to write our own precise tostring alternative. Formatting fewer digits is super easy; precise, minimalistic output by default is nice.

I hate when floating-point error is obscured from the output, making the output tell one story while the underlying values tell another. A small change, but very welcome.

2 Likes