I’m creating a network system and removing the requirement of schemas via an id system using buffers. So I understand that every bit doubles the cap of the number (1 bit = 32 max, 2 bits = 64 max, etc.), but how does that work for float values and negatives? Do the same rules apply to things like writei8 (-128 → 127)? Or could a single bit go all the way down to -32? And how do float values map onto bits?
Or is all of this stuff inaccessible and I’m overthinking it?
Nope, they are all encoded very differently.
An unsigned 8-bit integer works very differently from a signed 8-bit integer, for example.
And both are even more different from a 16-bit integer.
8 bits = 1 byte
It’s all interpretation, basically.
You tell the program how to interpret bits.
Bit ops like the bit32 library are very complicated for beginners, so use buffers for now.
So for three 8-bit numbers, you need to allocate 3*8 = 24 bits, i.e. 24/8 = 3 bytes.
So if you, for example, interpret an unsigned 16-bit integer as a signed 16-bit integer, chances are that you will get some random number due to its bit pattern meaning an entirely different thing in this interpretation.
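You can see this reinterpretation effect directly. This is a sketch in Python using the standard `struct` module as an analogy for `buffer.writei16`/`buffer.readu16` — the byte-level encoding (little-endian two's complement) is the same idea:

```python
import struct

# Pack the signed 16-bit integer -1 into two bytes, then read those
# same two bytes back as an unsigned 16-bit integer.
raw = struct.pack("<h", -1)          # little-endian signed 16-bit -> b'\xff\xff'
as_unsigned = struct.unpack("<H", raw)[0]
print(as_unsigned)                   # 65535

# Same story for 8 bits: the 256 possible bit patterns are shared.
# Signed (two's complement) spans -128..127, unsigned spans 0..255.
raw8 = struct.pack("<b", -128)       # signed 8-bit -> b'\x80'
as_u8 = struct.unpack("<B", raw8)[0]
print(as_u8)                         # 128
```

So -1 and 65535 are literally the same two bytes; only the interpretation differs. This also answers the writei8 question: signed 8-bit really does span -128 → 127, because one of the eight bits effectively encodes the sign via two's complement.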
Please excuse me if I misinterpret this, but a bit represents a binary digit, base-two; it’s either 0 or 1. 0b10 is 2 in decimal, base-ten. A two-bit number can represent at most 0b11, 3. The maximum value you can represent grows exponentially, following the formula (2^b) - 1, where b is the number of bits you have. Adding another bit, a three-bit number, would have a maximum value of 0b111, 7 (= (2^3) - 1).
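That formula is easy to check for yourself; a quick Python sketch (the shift `1 << b` is just 2^b):

```python
def max_unsigned(b):
    """Largest value representable in b unsigned bits: (2^b) - 1."""
    return (1 << b) - 1

print(max_unsigned(2))   # 3   (0b11)
print(max_unsigned(3))   # 7   (0b111)
print(max_unsigned(8))   # 255 (the u8 cap)
print(max_unsigned(16))  # 65535 (the u16 cap)
```

Note how each extra bit doubles the number of representable values, rather than adding a fixed 32.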
Next, it should be known that the meaning of a bit is completely arbitrary. We just treat these things as base-two numbers because that’s what makes them easiest to think about. Everything in a computer is just numbers. You know, like how the number 65 means “A” in ASCII. Your CPU literally sees 0b01000001 whenever you write “A”; whether that pattern is a number or a letter is entirely up to how the program interprets it. The same idea goes for floats: you’re not storing a literal integer, you’re storing a binary representation of scientific notation (which, admittedly, is made up of smaller integers).
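The “A” example in Python, for illustration — one byte, two equally valid readings:

```python
# The byte 0b01000001 (decimal 65) is "A" only because we choose to
# read it as ASCII text; read as an integer, it is just 65.
b = bytes([0b01000001])
print(b.decode("ascii"))   # A
print(b[0])                # 65
```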
Wikipedia has various great images for visualizing floating-point encodings.
Here are floats and doubles, but it’s not at all necessary to understand them (unless you want to know their limits and flaws).
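If you do want to peek at those fields yourself, here is a sketch in Python that reinterprets a 32-bit float as its raw IEEE 754 bits and slices out the sign, exponent, and mantissa (the same encoding that `buffer.writef32` produces):

```python
import struct

# Reinterpret the 32-bit float 1.0 as its raw IEEE 754 bit pattern.
bits = struct.unpack("<I", struct.pack("<f", 1.0))[0]

sign     = bits >> 31          # 1 bit
exponent = (bits >> 23) & 0xFF # 8 bits, biased by 127
mantissa = bits & 0x7FFFFF     # 23 bits of fraction

print(hex(bits))                 # 0x3f800000
print(sign, exponent, mantissa)  # 0 127 0
```

For 1.0, the sign is 0, the biased exponent is 127 (meaning 2^0), and the mantissa is 0 (the leading 1 is implicit) — so there is no single sign bit tacked onto an integer; the whole 32-bit layout is its own format.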
You rarely work with individual bits unless you need to compactly store a single boolean value. The buffer library contains functions to read and write the binary (bit) representation of some data types in various sizes. These abstract the actual binary representations away, and all you need to worry about is maintaining the same structure of your data when decoding.
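That “same structure on both ends” idea is the whole contract of schema-less networking. A minimal sketch in Python’s `struct` module, standing in for a sequence of `buffer.writeu8`/`writei16`/`writef32` calls (the message layout here is a made-up example, not any particular protocol):

```python
import struct

# Writer and reader agree on the layout by convention:
# one u8 message id, one i16 value, one f32 float, little-endian.
LAYOUT = "<Bhf"

# Encode: 1 + 2 + 4 = 7 bytes, no schema shipped with the data.
packet = struct.pack(LAYOUT, 7, -123, 0.5)
print(len(packet))  # 7

# Decode: works only because we read fields in the same order/sizes.
msg_id, value, f = struct.unpack(LAYOUT, packet)
print(msg_id, value, f)  # 7 -123 0.5
```

If the reader used a different layout (say, reading the i16 before the u8), it would still “succeed” but produce garbage — which is exactly the mis-interpretation problem described above.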