How can I define number types explicitly, or efficiently and accurately simulate 64-bit integer overflow?

I know that I can simulate integer overflow with this:

local function wrap64(x)
	-- Lua's % is floored, so x % 2^64 already lands in [0, 2^64) for any x
	x = x % 18446744073709551616
	if x >= 9223372036854775808 then
		x = x - 18446744073709551616
	end
	return x
end
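A quick way to see why this is inaccurate: past 2^53, adjacent integers are no longer distinguishable in a double, so no modulo arithmetic can recover the low bits afterwards. This is plain Lua, nothing Luau-specific:

```lua
-- 52 explicit significand bits + 1 implicit bit: integers are exact only up to 2^53
assert(2^53 - 1 + 1 == 2^53)  -- still exact at the boundary
assert(2^53 + 1 == 2^53)      -- the +1 is silently rounded away
assert(2^63 + 1 == 2^63)      -- near the int64 limit, consecutive doubles are 2048 apart
```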

However, because Lua numbers are doubles, the result of this is rather inaccurate (and accuracy is critical in my use case). I tried using an IntValue, but it seems to store a double as well, just rounded. From my earlier tests with numbers in Luau, it appeared possible for numbers to behave like signed 64-bit integers (every operation on the value involved only integers, and it actually overflowed from 9223372036854775807 to -9223372036854775808).

The number that will be passed to the function looks like this:

local function iterate(x, y, z)
  for i = 2047, 0, -1 do
    x = x * y + z -- y is a very big number that's close to the 64-bit signed integer limit; z is similar to y but smaller
    do_stuff(x)
  end
end

Whatever number I pass in, x seems to be cast to a double (or it already is one before being passed).
As you might have noticed, there are a lot of iterations on x, so if I simulate integer overflow, I can only afford up to, say, O(log n) time for it.
Maybe I overlooked something; any solution is welcome.
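One way to keep the x = x * y + z step exact modulo 2^64 using only double arithmetic is to split each value into 16-bit limbs, so that every partial product stays far below 2^53 and is therefore computed exactly. This is a minimal sketch, not a Luau built-in; the u64/muladd64/tohex names and limb layout are my own:

```lua
-- A 64-bit unsigned value as four 16-bit limbs, least-significant first.
-- Constructor takes limbs most-significant first for readability.
local function u64(l3, l2, l1, l0)
  return { l0, l1, l2, l3 }
end

-- (a * b + c) mod 2^64: schoolbook multiply with carry propagation.
-- Largest intermediate is under 2^33, so doubles compute it exactly.
local function muladd64(a, b, c)
  local r = { c[1], c[2], c[3], c[4] }
  for i = 1, 4 do
    local carry = 0
    for j = 1, 5 - i do          -- columns past the 4th limb are multiples of 2^64: discarded
      local k = i + j - 1
      local t = r[k] + a[i] * b[j] + carry
      r[k] = t % 0x10000
      carry = math.floor(t / 0x10000)
    end
  end
  return r
end

local function tohex(a)
  return string.format("%04x%04x%04x%04x", a[4], a[3], a[2], a[1])
end

-- (2^64 - 1) * (2^64 - 1) + 0 ≡ 1 (mod 2^64)
local allf = u64(0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF)
local zero = u64(0, 0, 0, 0)
assert(tohex(muladd64(allf, allf, zero)) == "0000000000000001")
```

Each muladd64 call is a constant 10 limb multiplications, so the per-iteration cost is O(1). To read a result as signed, subtract 2^64 when the top limb is at least 0x8000.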


Luau’s numeric type implements the IEEE 754 double-precision standard.

The mantissa has 52 explicit bits, and since it is normalized with an implicit leading bit, it can represent every integer up to 2^53 exactly (including 2^53 itself). Here is a more detailed explanation.
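That 2^53 bound can be checked directly from Luau, or any Lua where numbers are doubles; the snippet is just an illustration:

```lua
-- Walk up powers of two until adding 1 no longer changes the value.
-- The 1.0 forces float arithmetic on Lua versions with native integers.
local p = 1.0
while p + 1 ~= p do
  p = 2 * p
end
assert(p == 2^53)  -- 2^53 + 1 is the first integer that rounds back onto its neighbor
```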
