# Mathematics with decimals is not entirely accurate

Reproduction Steps
System Information: N/A
Reproduction Files: N/A

Expected Behavior
When using decimals in mathematical equations such as x * 0.1 or x * 1.1 on whole numbers, I'd expect the engine to give me exactly the same numbers as x / 10 or x * 11 / 10 respectively, since these expressions are logically identical.

Actual Behavior
Instead of getting the desired result from said equation, I'm getting numbers with a long tail of unwanted digits after the desired result.
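A minimal repro of the discrepancy. It's shown in Python here rather than Lua, but both languages store numbers as IEEE 754 doubles, so the printed results are the same:

```python
# Multiplying by 0.1 vs dividing by 10 on the same whole number.
# Both are 0.3 mathematically, but 0.1 has no exact binary
# representation, so the multiplication picks up a tiny error.
x = 3
print(x * 0.1)            # 0.30000000000000004
print(x / 10)             # 0.3
print(x * 0.1 == x / 10)  # False
```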

Workaround
Yes, there's a workaround, but in the bigger picture it's too much to work with, as it would require rewriting each decimal multiplication as an equivalent division that matches the desired result (x / y instead of x * y).

This should not be required just to get rid of unnecessary decimals, nor is it a viable solution for multipliers lower than 0.1, or higher than 1 with a fractional part (i.e. equations mixing inconsistent kinds of numbers).

Issue Area: Engine
Issue Type: Other
Impact: Moderate
Frequency: Constantly
Date First Experienced: 2022-04-29 18:04:00 (+02:00)


The core issue is that 0.1 doesn't have an exact representation in floating point. Computers need to use a finite number of bits to represent infinitely many numbers, and sadly the common representation can't express some commonly used values like 0.1 exactly. In places where you see 0.1 printed, it's because some algorithm has determined that parsing "0.1" would yield the same binary value as the slightly different value actually stored, and because humans prefer seeing 0.1 over the longer but exact version.

I'm not sure why division seems to give slightly better results than multiplication; that is odd. But the core issue is still the limited way computers commonly represent numbers.

`0.1 + 0.1 + 0.1 != 0.3` is another fun one, and there are some classic examples like `while x != 1.0 x = x + 0.1`.

Edit: i / 10 gives the better results because the values of i and 10 are exact (at least for non-huge values of i), and the operation normally yields something as close to the correct value as possible. 0.1 * i doesn't, because 0.1 isn't exact to begin with, so the product ends up increasingly further from the correct value: not by much, but by enough to throw off the print statement.
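Both of those classics can be demonstrated in a few lines. This sketch is in Python, but Lua's doubles behave identically; note that the naive `while x != 1.0` loop would never terminate, so the version below compares with a tolerance instead:

```python
# Repeated addition of an inexact 0.1 drifts away from the
# mathematically exact sum.
total = 0.1 + 0.1 + 0.1
print(total)         # 0.30000000000000004
print(total == 0.3)  # False

# The classic loop hazard: x never hits exactly 1.0, so use a
# tolerance (and a step cap, as a safety net) instead of !=.
x = 0.0
steps = 0
while abs(x - 1.0) > 1e-9 and steps < 20:
    x += 0.1
    steps += 1
print(steps, x)  # 10 steps land within tolerance of 1.0
```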


Remember that a computer stores float numbers as zeros and ones, so it's hard to represent a decimal number perfectly, especially when you have only 64 bits per number (in Lua). That is not enough to store most decimal fractions perfectly.

If you'd like to see its full representation, you'd have to give it much more memory (you can try converting 0.1 to binary yourself: it's an infinitely repeating fraction in base 2).

Why do you get better results when you divide by 10? It's simple: 10 in binary is 00001010, an exact value that fits in a single byte (we can't say that about 0.1). Dividing by an exact 10 only introduces the unavoidable rounding of the result, whereas multiplying by 0.1 starts from a value that is already wrong before the operation even happens.
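You can inspect the exact values a double actually stores. Python's `Decimal(float)` constructor prints them digit for digit (Lua stores the same bit patterns, it just has no built-in way to show them this way):

```python
from decimal import Decimal

# Decimal(x) for a float x shows the exact binary value stored,
# not the rounded string you normally see printed.
print(Decimal(10))   # 10 -- an integer, exactly representable
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
```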

Hope I helped!

There are quite a few posts about this topic already, where what you are experiencing has been explained as well.


Sadly this also throws off more than just the print statement in my case, as I need to note down the number returned from a snapping feature, and as you might guess, those numbers aren't snapped to a certain decimal at that point.

I'm currently trying to figure out an equation that is exactly equivalent to i * decimalNumber but in division form, which is proving a small challenge at first. The main reason I created this bug report is that, while I know floating point errors exist, multiple applications I frequently use have a fail-safe in place to prevent them from showing (the default Windows calculator being the quickest example), and I figured such a catch mechanism would be more beneficial than manually finding a solution that basically amounts to cutting off anything past the desired length in string form.

Edit: I ended up using `string.format("%[decimal]f", number)` and turning the result back into a number; that seems like the quickest and easiest way.
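That format-and-parse round trip can be sketched as follows. The sketch is in Python (the Lua equivalent would be `tonumber(string.format(...))`), and `snap_decimals` is a hypothetical helper name, not an engine API:

```python
def snap_decimals(n: float, places: int = 3) -> float:
    """Format to a fixed number of decimals, then parse back.

    Same trick as formatting with string.format in Lua and
    converting the string back to a number.
    """
    return float(f"{n:.{places}f}")

print(snap_decimals(3 * 0.1))  # 0.3
print(snap_decimals(0.30000000000000004))  # 0.3
```

Note that the result is still a double, so it is only "clean" up to the nearest representable value; the formatting just strips the one-ulp noise before you store or display the number.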

Typically the answer is to store grid cell based positions as your source of truth in cases where you’re trying to work precisely with a grid rather than world space positions. That is, keep an attribute reference or script reference to the object being at grid cell (4, 6) rather than just using the fact that it’s positioned at (0.4, 0.6) or whatever.

The other option is for you to re-snap the objects to grid coordinates before you do some grid based operation on them: `cellX = math.floor(positionX / gridSize + 0.5)`
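The two options above can be sketched together. This is Python rather than Lua, and `to_cell` / `to_world` / `grid_size` are hypothetical names for the sketch, not engine APIs:

```python
import math

def to_cell(position: float, grid_size: float) -> int:
    # floor(x + 0.5) rounds to the nearest integer cell index,
    # re-snapping a noisy world position onto the grid.
    return math.floor(position / grid_size + 0.5)

def to_world(cell: int, grid_size: float) -> float:
    # Derive the world position from the cell, never the reverse.
    return cell * grid_size

# A position carrying floating point noise still maps to an
# exact integer cell, which is the value you store.
cell = to_cell(0.30000000000000004, 0.1)
print(cell)  # 3
```

Integer cell indices are exact in floating point (up to 2^53), so keeping them as the source of truth sidesteps the accumulation problem entirely.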


This is intentional. And accurate. Read more here.

If you still want the old behaviour (not that you really should), use this resource (apologies for any self-promotion).

You can also do `string.format("%.14g", 0.2 + 0.1)` to get the old behavior.
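The `%.14g` trick works because `g` rounds to 14 significant digits, which is fewer than the 17 needed to expose a one-ulp error in a double. Python's `%` formatting uses the same C-style specifiers as Lua's `string.format`, so the effect can be shown here directly:

```python
# 17 significant digits round-trip a double exactly, exposing
# the error; 14 digits round it away.
print("%.17g" % (0.2 + 0.1))  # 0.30000000000000004
print("%.14g" % (0.2 + 0.1))  # 0.3
```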