Behavior change: tostring() will start returning shortest precise decimal representation of numbers

I see a lot of people concerned about old games and the function being changed. Is there a way to opt into using the new method by doing tostring(x, true)? It may be a pain but at least most games can stay working the way they are while also allowing for the people who need EXACT precision to get their way.

I don’t mind the change, but it’s always a hard one for a lot of people to accept, and impossible for the old games that are no longer maintained but stuck in archive purgatory. Anyway, thanks for the info, and I am personally glad that it is now accurate :+1:


That would just be pure API bloat. If it’s really that important, it would be a lot better handled with a dropdown setting called “StringPrecision”, the same way some opt-in beta features work.


As somebody who really wants this capability, I still disagree with this change, at least with the fact that it has overwritten the default tostring behaviour.

I really think that the tostring behaviour before this change, while slower, was actually a much better and more desirable option, because it has always acted as error correction in exactly the way I need it to: for readability. Yes, I lose out on that tiny little bit of precision, but I never care about that when I want to convert a float to a readable string. Error correction is extremely useful for maintaining readability, for understanding where my math is going wrong, and for understanding mathematical operations. Now, if I want to do math and have it just work, I simply cannot; I am forced to introduce implementation-specific corrections, which string.format does not solve!

This is not good for beginners, or for me. It’s confusing, and difficult. As others brought up, floating point error is a limitation, not a feature.

Secondly, this is almost entirely just an advanced feature, and it’s a breaking one! As such, I think that it should be offered separately. This simply doesn’t make anything I do commonly easier (or reasonably faster!), it only makes readability harder, and the cases where I will want this are limited to advanced uses and in depth debugging.

And, as much as I love optimization and Luau getting faster, a faster tostring on numbers also just isn’t a good justification in my opinion, because tostring isn’t for fast code; tostring is for readability. It’s bad practice to leave tostring or print inside very large math loops, and if I want to see or use a number’s full floating point value, I don’t want to use tostring for it.

I have been using math.floor for rounding my entire time programming, because that’s what I was taught - by the Roblox wiki, and by the many, many pieces of code available to me on Roblox when I started. These practices still carry on today, and even if they are incorrect, the older tostring did a good job of correcting for them.

And, I know this comes up every time a breaking change happens, but I can also see it breaking older Roblox games, because older Roblox games are notorious for not following good practices. A game being buggy doesn’t mean it isn’t still fun to play, and it doesn’t mean it isn’t profitable either; older games are usually the one place I spend my money, for one simple reason: the game doesn’t push me, it respects me, so I feel more compelled to respect the game’s developer, and I get a lot more out of my money that way.


Since many people are arguing that it breaks their code, I believe it would be fair to make it an option per game. That would settle a lot of the arguing and let everyone get the behavior they want.


But, as I understand it, Lua’s tostring is for readability, not for exact representation. When you call it on the majority of data types, it doesn’t return an exact representation of that data type, and it should never do so if that sacrifices readability, unless the developer makes that choice. The __tostring metamethod also goes completely against exact representation.
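For instance, a minimal sketch of a custom __tostring (the Vector table and its format are just illustrative):

```lua
-- A __tostring metamethod deliberately trades exactness for readability:
-- it decides how the value reads, not how it is stored.
local v = setmetatable({ x = 1, y = 2 }, {
    __tostring = function(self)
        return string.format("Vector(%g, %g)", self.x, self.y)
    end,
})

print(tostring(v)) --> Vector(1, 2)
```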

tostring isn’t for exactness, it’s for readability, and that’s a big reason why I think this tradeoff is the wrong way to go, even if it’s faster. I do not want to do my math correctly, see floating point error, and immediately worry that I have messed up. I do not want to be forced to use implementation-specific fixes for the problems that exactness introduces. I would rather tostring give me the “wrong”, rounded results.

This simply doesn’t help me as a developer by being the behaviour of tostring, and it’s confusing for new and old programmers, and most importantly, it’s a breaking change.


This is a good change. Floating point imprecision has always been around, and it’s better to have an accurate conversion of the true value than some auto-rounded result.


Just FYI you should still use math.floor for “rounding” in most contexts. math.round rounds for textual display purposes, but if you want to round for some geometric purpose like grid snapping then math.floor (or ceil) is the correct function to use.
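A quick sketch of the grid-snapping case described above (the 4-stud grid size and the snapToGrid helper are just illustrative):

```lua
-- Snap a position to a hypothetical 4-stud grid. floor always picks the
-- cell's lower edge, so results are consistent across the whole cell;
-- round would jump between cells at each midpoint, which is rarely what
-- you want for a geometric purpose.
local GRID = 4

local function snapToGrid(x)
    return math.floor(x / GRID) * GRID
end

print(snapToGrid(7.9)) --> 4
print(snapToGrid(8.1)) --> 8
```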


Just to be clear, the change isn’t motivated by performance, it’s motivated by correctness; performance is a nice bonus. It is possible to reject extra digits within the new algorithm to match the old behavior better, although that would defeat the purpose in that tostring() will stop being accurate.

Floating-point errors are not a feature - that is correct. However, note that in many cases you can’t afford to be unaware of their existence. Should a == b compare with a built-in tolerance? Should math.floor(4.999999999) return 5? This road is fraught with peril; the consistent way to treat floating point numbers is to have functions specify their results according to the rules of floating point arithmetic, not to try to paper over the internal semantics.
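To illustrate the point (the approxEqual helper below is a sketch, not a built-in):

```lua
-- Floating point operations follow their own well-defined rules; hiding
-- the error in one place just moves the surprise somewhere else.
print(0.1 + 0.2 == 0.3)        --> false: the sum is really 0.30000000000000004
print(math.floor(4.999999999)) --> 4, by the definition of floor

-- If a tolerance is what you want, it belongs in your code, explicitly:
local function approxEqual(a, b, eps)
    return math.abs(a - b) < (eps or 1e-9)
end

print(approxEqual(0.1 + 0.2, 0.3)) --> true
```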

Maybe there’s an argument to be made that tostring should by default use “appropriately few” digits for human consumption. But how few is few enough? 14 digits is too many for human consumption, so why not 10 or 6? Maybe 3 is a good number? The core problem is that the only good default is “print the exact number”, and everything else is specific to the application.


Also, for anyone who really wants to dig into the technical details of this topic, I would highly recommend watching the first half of this talk (the second half is C++ specific details but the first half is general info), which goes into all the gotchas behind string↔float conversion in exhaustive detail: Stephan T. Lavavej “Floating-Point <charconv>: Making Your Code 10x Faster With C++17's Final Boss” - YouTube


Since math.floor came up, this is what the behavior before this change looks like for code that uses math.floor:

> print((1.4-0.4)*100)             -- printed 100 before this change
> print(math.floor((1.4-0.4)*100)) -- prints 99

math.floor is not really at fault here of course - it’s just that the result of this expression is not 100, it’s less than 100, but the incorrectly rounded output makes it non-obvious.

The only way to make Luau consistent with respect to number handling is to leave the rounding to the developer - rounding is context-specific, and string.format is easy to use for that purpose.
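For example, the expression above can be formatted however the context demands (the precisions shown are arbitrary choices):

```lua
local x = (1.4 - 0.4) * 100 -- mathematically 100, but slightly less as a double

print(string.format("%.0f", x)) --> 100      (no decimal places)
print(string.format("%.2f", x)) --> 100.00   (fixed two decimal places)
print(string.format("%g", x))   --> 100      (6 significant digits by default)
```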


While I kind of see the reason behind it, tostring was always meant as a quick, no-fuss solution for turning numbers into strings. If a number is close enough to an integer that the difference is negligible, most people wouldn’t want 10 decimals filling up their output. If you really wanted this precision, string.format always existed.


Really happy to see this change; it’s about time that tostring(number) shows what’s actually being stored in the number variable, rather than us having to write our own precise tostring alternative. Formatting fewer digits is super easy, and precise, minimalistic output by default is nice.

I hate when floating point error is obscured from output, making the output tell one story while the underlying values tell another. A small change, but very welcome.


This is a great change. Relying on the previous tostring output to provide a specific number of decimals was always a hack to begin with.


It makes logical sense now. The output used to print 0.1 + 0.2 as 0.3, yet a 0.1 + 0.2 == 0.3 comparison would return false. Now, print(0.1 + 0.2) outputs 0.30000000000000004, which is consistent with the comparison.

I agree, they should do that. My game uses a lot of tostring, and it would take me an hour to rewrite my code.


Was this algorithm designed in-house, or was it implemented from a publicly-available paper on the matter? Since the source for Luau was released on GitHub, I could very well see the new tostring behaviour for myself once it becomes standard.


Are you able to provide backing there or are you going to provide another bad example of it not working?

This update is good for lots of people, and bad for lots of others. It’s not helpful to just be negative about it without giving a solution. What do you think they should do about it? Would you prefer a new tostring function that accurately displays decimals? Let’s get some suggestions flowing.


I saw someone unaffiliated with Roblox say that Schubfach was the new algorithm, which could be wrong.


That’s literally a tweet from @zeuxcg but ok :skull:


This is a good change; I think the reasoning behind it is sound, and being able to rely on tostring to output numbers correctly will be very useful.