It isn’t, but it’s a tradeoff: you’re getting a much larger range of representable numbers in exchange for being able to exactly represent fewer human-readable numbers. For better or worse, that’s the tradeoff the programming profession as a whole has chosen to make in most cases.
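To make that concrete, here’s a quick Lua/Luau-flavored sketch (Luau numbers are 64-bit doubles) showing that a perfectly ordinary human-readable value like 0.1 isn’t stored exactly, while the representable range is enormous:

```lua
-- 0.1 has no exact binary float representation; printing extra digits reveals the error.
print(string.format("%.20f", 0.1)) --> something like 0.10000000000000000555

-- In exchange, the range is huge: 2^1000 still fits comfortably in a double.
print(2 ^ 1000)                    --> roughly 1.07e+301
```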
There are domains where you take a different approach. For instance, any system working with money/currencies that never wants to “lose track” of any money typically uses a fixed-point representation, where you basically store the number multiplied by 10000 to give it exactly 4 decimal places worth of precision.
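A minimal sketch of that idea, assuming we just scale by 10000 and keep the scaled amount as an integer (the names here are made up for illustration):

```lua
local SCALE = 10000 -- 4 decimal places of precision

-- Convert a human-readable amount into its scaled integer form (rounds to the nearest unit).
local function toFixed(amount)
	return math.floor(amount * SCALE + 0.5)
end

-- Format the scaled integer back into a string like "59.9700".
local function fixedToString(fixed)
	return string.format("%d.%04d", math.floor(fixed / SCALE), fixed % SCALE)
end

local price = toFixed(19.99) --> 199900
local total = price * 3      --> 599700, exact integer arithmetic the whole way
print(fixedToString(total))  --> 59.9700
```

This sketch ignores negative amounts and overflow; a real money type would need to handle both.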
However, if you use such a fixed-point representation, the largest number you can store also becomes 4 digits shorter, and you can’t represent very small or very large numbers in a fixed amount of space at all. That’s fine in a monetary context where only so much money exists… but in a game engine, sometimes someone wants to put a planet object out at 1000000 studs, or micro-adjust the size of their object by 0.00001 studs to make it fit somewhere, and expects it to work reasonably well and performantly.
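As a rough illustration of why that matters: with 4-decimal fixed point the 0.00001 adjustment rounds away entirely, while a floating-point number keeps relative precision and still resolves it even on top of 1000000 (this uses Luau’s 64-bit doubles, so it understates how much sooner a 32-bit float would run out of precision, but the contrast with fixed point is the point):

```lua
local SCALE = 10000

-- The tiny size adjustment is below the fixed-point resolution, so it rounds to 0.
print(math.floor(0.00001 * SCALE + 0.5))        --> 0

-- A double keeps relative precision, so the same adjustment survives even at 1000000.
print(string.format("%.5f", 1000000 + 0.00001)) --> 1000000.00001
```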