Integer limit faulty

So I printed:

print(2 ^ 64)

And 2 ^ 64 printed a number, even though the integer limit is 2 ^ 63.

So I have 2 questions:
What is the integer limit?
How do I check if a number reached the integer limit?

The Max Integer limit is roughly 2^52

if number >= 2^52 then
	-- code
end

But when I print 2 ^ 53 it still prints the number?

Checking again, it appears you can only count up to 2^63.


Which is: 9223372036854776000

if number >= 2^63 then
	-- code
end
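As a sketch of how that check could be used (checkLimit and the warning message are hypothetical names, not part of any existing API):

```lua
local LIMIT = 2 ^ 63

-- Returns true and warns once a value passes the limit
local function checkLimit(number)
	if number >= LIMIT then
		print("Value has reached the limit!")
		return true
	end
	return false
end

print(checkLimit(2 ^ 62)) -- false
print(checkLimit(2 ^ 64)) -- true
```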

When I print:

2 ^ 100 -- It prints 1.2676506002282294e+30

But when I print:

2 ^ 64 -- It prints 18446744073709552000

Since both of these are over the integer limit, shouldn't 2 ^ 64 print in scientific notation as well, instead of the full number?

See how it has an ‘e’ in that number? That number is actually 1.2676506002282294 times 10 to the 30th power.
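Both of those values are stored the same way internally; the 'e' is only the default print formatting. You can force a plain decimal printout with string.format (standard Lua, though Roblox's default print formatting may differ from the comments):

```lua
print(string.format("%.0f", 2 ^ 100)) -- 1267650600228229401496703205376
print(string.format("%.0f", 2 ^ 64))  -- 18446744073709551616
```

Note that %.0f writes out every digit of the stored double, which is why 2 ^ 64 ends in ...51616 here even though the default rounded output shows ...52000.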

After testing, it appears there is a limit at around 2^1100, at which the number becomes inf.

Edit: It appears 2^1023 is the actual limit; anything above that becomes inf.
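That matches the double-precision exponent range: 2 ^ 1023 is still finite, and the next power of two overflows to inf (a quick check, assuming standard doubles):

```lua
print(2 ^ 1023)              -- still a finite number (about 8.99e307)
print(2 ^ 1024)              -- inf: too big for a double
print(2 ^ 1024 == math.huge) -- true
```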

So this is my layout:

Currency = {Value = 0}

This is a dictionary, and I want a function to be called when the value reaches the maximum number limit.
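One way to sketch that (everything here — LIMIT, onLimitReached, and the proxy setup — is made up for illustration, not an existing API) is to route writes through a proxy table's __newindex metamethod and check the value on every assignment:

```lua
local LIMIT = 2 ^ 53 -- last exactly-representable integer (math.huge works too)

local function onLimitReached(key, value)
	print(key .. " reached the limit: " .. tostring(value))
end

local data = {Value = 0}
local Currency = setmetatable({}, {
	__index = data, -- reads fall through to the real data
	__newindex = function(_, key, value)
		data[key] = value -- store the write
		if value >= LIMIT then
			onLimitReached(key, value)
		end
	end,
})

Currency.Value = 10     -- stored silently
Currency.Value = 2 ^ 60 -- stored, and the callback fires
```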

Well, they can, because my currency system is exponential.

Is there something I can do to check whether the integer limit is reached, like this:

if currencyValue >= math.huge then
	-- code
end

You could, but the thing is, as @TestAccount563344 said, it is unlikely for anybody to reach that high of a number. Depending on your game, it could take days, months, or even years to reach that kind of number (or one second if an exploiter shows up, lol).


Lol! I guess I’ll check if the value is equal to math.huge.

Strange. A quintillion is roughly 1e18 (18 zeros), yet it appears you can use any number up to about 1.8e308 before it becomes inf.

Random thing:

function CheckInf(x: number)
	if x == math.huge then
		return 1e308 -- in other words, a REALLY BIG (but finite) number
	end
	return x
end


Lua numbers are represented using a floating-point format (double precision, based on the IEEE 754 standard), where a number is stored in 64 bits (1 for the sign, 11 for the exponent, and 52 for the fraction, as seen here).

So the 52 fraction bits form a value that gets multiplied by 2 ^ exponent. This means integers can be represented exactly up to 2 ^ 52 * 2 = 2 ^ 53 (9,007,199,254,740,992), both negative and positive. Once that limit is passed, the lowest bit is lost, so only even numbers can be represented, and precision keeps halving at each further power of 2 (more info).
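That precision loss is easy to see directly (assuming standard doubles):

```lua
print(2 ^ 53)               -- 9007199254740992, the last exact integer
print(2 ^ 53 + 1 == 2 ^ 53) -- true: the +1 is rounded away
print(2 ^ 53 + 2)           -- 9007199254740994: even numbers still fit exactly
```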

And if you are talking about representable numbers: with 11 exponent bits you get 2048 values, split between negative and positive, so the maximum exponent is 1023. The two extreme exponent patterns have special meanings (all zeros marks subnormal numbers, all ones is reserved for inf and nan). So the largest finite double is the maximum exponent 1023 combined with a fraction of all 1s in binary, which gives 1.7976931348623157e+308 (the same applies to negative numbers).
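From those numbers, the maximum finite double can be computed directly (a sanity check, assuming standard doubles):

```lua
-- fraction of all 1s (2 - 2^-52) times the largest exponent, 2^1023
local maxDouble = (2 - 2 ^ -52) * 2 ^ 1023
print(maxDouble == 1.7976931348623157e+308) -- true
print(maxDouble * 2 == math.huge)           -- true: one doubling past it overflows
```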

You can read that Wikipedia article for more information. If you need a representation of bigger numbers you can look for a BigNums library.
Hope this helped.