Yes! There are so many applications for string.pack!
I’m working on a custom system for massive worlds with 100k+ models. It encodes chunks of map data into binary strings (escaped for StringValues), then decodes and displays only nearby chunks. I’ve been indecisive about the float format and ended up going with this because it’s fast to encode/decode in Lua: string.char(exp, sign + mantissa2, mantissa1, mantissa0)
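Roughly, the encode/decode pair looks something like this (a simplified sketch, not my exact code; special values like inf/NaN and denormals aren’t handled, and the exponent bias and range checks are illustrative):
local function encodeFloat(x)
    local sign = 0
    if x < 0 then
        sign = 128
        x = -x
    end
    if x == 0 then
        return string.char(0, sign, 0, 0)
    end
    local frac, exp = math.frexp(x) -- x = frac * 2^exp, with 0.5 <= frac < 1
    local mantissa = math.floor((frac * 2 - 1) * 0x800000 + 0.5) -- 23 bits, implicit leading 1 dropped
    if mantissa == 0x800000 then -- rounding carried into the next exponent
        mantissa = 0
        exp += 1
    end
    return string.char(
        exp + 126, -- biased exponent (range checks omitted)
        sign + math.floor(mantissa / 0x10000), -- sign bit + top 7 mantissa bits
        math.floor(mantissa / 0x100) % 0x100, -- middle 8 bits
        mantissa % 0x100 -- low 8 bits
    )
end
local function decodeFloat(s, i)
    local exp, b1, b2, b3 = string.byte(s, i, i + 3)
    local sign = b1 >= 128 and -1 or 1
    local mantissa = (b1 % 128) * 0x10000 + b2 * 0x100 + b3
    if exp == 0 and mantissa == 0 then
        return 0
    end
    return sign * (1 + mantissa / 0x800000) * 2 ^ (exp - 127)
end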
Encoding/decoding CFrames by hand is a bit ridiculous; it’s a relief to not need this:
I’m actually finding it a bit hard to work with for reading data back. It’s probably fine for fixed formats, but for stuff that’s arbitrary-length (the example I tried was LZ4 compression, since that seems like a practical use) it’s a bit awkward.
Incidentally, if you’re strapped for space, a lot of CFrames can have their size cut down dramatically: Roblox’s binary format just stores an ID for the various axis-aligned rotations and uses that where possible instead of storing the full rotation matrix. It cuts most CFrames down to a mere 13 bytes instead of 48, though it comes at the cost of non-axis-aligned ones taking 49 bytes.
Not sure if that would end up being fast, but I imagine it would be more space-efficient.
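A rough sketch of the idea (not Roblox’s actual serializer; the ID numbering, tolerance, and string.pack layout here are made up) would be to precompute the 24 axis-aligned rotations and fall back to the full matrix only when none of them match:
local function sameRotation(a, b)
    local r1 = {select(4, a:GetComponents())} -- the 9 rotation matrix entries
    local r2 = {select(4, b:GetComponents())}
    for i = 1, 9 do
        if math.abs(r1[i] - r2[i]) > 1e-4 then
            return false
        end
    end
    return true
end
local axisAligned = {} -- the 24 unique axis-aligned rotations
for x = 0, 3 do
    for y = 0, 3 do
        for z = 0, 3 do
            local rot = CFrame.fromEulerAnglesXYZ(x * math.pi / 2, y * math.pi / 2, z * math.pi / 2)
            local isNew = true
            for _, existing in ipairs(axisAligned) do
                if sameRotation(rot, existing) then
                    isNew = false
                    break
                end
            end
            if isNew then
                axisAligned[#axisAligned + 1] = rot -- 64 angle combinations collapse to 24 rotations
            end
        end
    end
end
local function encodeCFrame(cf)
    for id, rot in ipairs(axisAligned) do
        if sameRotation(cf, rot) then
            return string.pack("<Bfff", id, cf.X, cf.Y, cf.Z) -- 13 bytes: rotation id + position
        end
    end
    return string.pack("<Bffffffffffff", 0, cf:GetComponents()) -- 49 bytes: 0 marks a full matrix
end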
These functions work best when the layout is fixed. LZ4 would not be a good fit, but packing message data for network replication, or serialization when you know the exact shape of the message, would be.
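For example (the message shape and format string here are made up), a fixed-layout message packs and unpacks in a single call each way:
local PROJECTILE_FORMAT = "<Bffff" -- weapon id (1 byte), origin x/y/z and speed (float32 each): 17 bytes
local function packProjectile(weaponId, origin, speed)
    return string.pack(PROJECTILE_FORMAT, weaponId, origin.X, origin.Y, origin.Z, speed)
end
local function unpackProjectile(data)
    local weaponId, x, y, z, speed = string.unpack(PROJECTILE_FORMAT, data)
    return weaponId, Vector3.new(x, y, z), speed
end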
For client-facing data I use smallest-three quaternion encoding, along with storing IDs for the 24 axis-aligned angles; that uses less than 10% of the space when storing CFrame lists in memory as strings for my data. I decided not to compress in-studio CFrame data, though, because I’d prefer to keep developer work at high precision; ideally what the map artist creates would look identical to what the player will see in terms of precision, but I may decide to change the compression down the line.

Packing many objects into StringValues instead of parts for representing model positions already shaves 2 MB off of my save file. I got undo to work great with StringValues, but Team Create might not play nicely if two developers edit different objects within the same StringValue chunk. There are ways to make it work, but I don’t really use Team Create so it’s not a problem.
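For anyone curious, smallest-three encoding boils down to something like this (a minimal sketch; the 16-bit quantization and byte layout are just for illustration, and matrix-to-quaternion conversion is omitted):
local SCALE = 32767 / math.sqrt(0.5) -- the three stored components always lie in [-1/sqrt(2), 1/sqrt(2)]
local function encodeSmallestThree(qx, qy, qz, qw)
    local q = {qx, qy, qz, qw}
    local largest = 1 -- find the component with the largest magnitude
    for i = 2, 4 do
        if math.abs(q[i]) > math.abs(q[largest]) then
            largest = i
        end
    end
    if q[largest] < 0 then -- q and -q are the same rotation; flip so the dropped component is non-negative
        for i = 1, 4 do
            q[i] = -q[i]
        end
    end
    local out = {largest - 1} -- which component was dropped (0-3), stored in a whole byte here for simplicity
    for i = 1, 4 do
        if i ~= largest then
            out[#out + 1] = math.floor(q[i] * SCALE + 0.5) -- quantize to a signed 16-bit integer
        end
    end
    return string.pack("<Bhhh", out[1], out[2], out[3], out[4]) -- 7 bytes per rotation
end
local function decodeSmallestThree(data)
    local largest, a, b, c = string.unpack("<Bhhh", data)
    largest += 1
    local rest = {a / SCALE, b / SCALE, c / SCALE}
    local q, j = {0, 0, 0, 0}, 1
    for i = 1, 4 do
        if i ~= largest then
            q[i] = rest[j]
            j += 1
        end
    end
    q[largest] = math.sqrt(math.max(0, 1 - q[1] ^ 2 - q[2] ^ 2 - q[3] ^ 2 - q[4] ^ 2))
    return q[1], q[2], q[3], q[4]
end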
Is there any chance we will start to see APIs become more binary-string-friendly? It would be great to be able to store compact data in DataStores and send more bits via MessagingService without needing to convert to base64. I’m currently using this to escape the least-used byte so I can store binary strings in StringValues, and I don’t like it:
It’s important for the DataStore API to be robust. Back in 2015 I went with base64-encoded strings for my game because of undocumented cases that would error.
string.format("%q", binaryString) is also pretty much unusable because it doesn’t properly escape newlines.
I don’t have a repro, but I ran into the most confusing bug a few months ago when I stored escaped strings containing invalid UTF-8 in a script’s Source. The game would work because the module’s Source was set from a script, but the moment it was opened in the script editor the source would actually change slightly and mess up my binary data. Because of this, string.format("%q", binaryString) should ideally take UTF-8 into account instead of always allowing characters 128-255.
I would also request that escaped characters be made more compact: "\000\001\002\003999" → "\0\1\2\003999". I have an entire module dedicated to formatting strings this way. +1 if it switches to apostrophes when it finds there are more double quotes to escape than apostrophes.
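The rule is roughly: use the short escape unless the next character is a digit, since something like \3999 would change the meaning. A simplified sketch (without the apostrophe switching) looks like this:
local function compactEscape(s)
    local out = table.create(#s)
    for i = 1, #s do
        local b = string.byte(s, i)
        if b == 34 then -- double quote
            out[i] = "\\\""
        elseif b == 92 then -- backslash
            out[i] = "\\\\"
        elseif b >= 32 and b <= 126 then -- printable ASCII passes through
            out[i] = string.char(b)
        else
            local nextByte = string.byte(s, i + 1)
            if nextByte and nextByte >= 48 and nextByte <= 57 then
                out[i] = string.format("\\%03d", b) -- next char is a digit, so keep all three digits
            else
                out[i] = "\\" .. b -- short form is unambiguous here
            end
        end
    end
    return "\"" .. table.concat(out) .. "\""
end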
We probably do need to fix quoting of newlines (although that quoting style is compatible with Lua’s string literal syntax so it’s technically correct), but either way this is going to be grossly inefficient compared to either dedicated binary support, or a specialized binary storage format.
Also, you really don’t need to unroll loops by that amount.
Yeah, it pushes close to the 200-local limit by unpacking 192 bytes at a time. The performance boost from including the 192-byte loop was roughly 10% when I measured it last year. I really needed this for my packed-map-data use case, where long strings can be packed extremely often when dragging models. I plan on making my own tools for foliage/models to reduce the need for redundant string updates.
I ran the benchmark again with the new VM, and the performance difference is only 2%.
-- 'data' is the binary string being scanned; freq[1..256] counts each byte value,
-- and freq[257] absorbs the nil padding when #data isn't a multiple of 8.
local freq = table.create(257, 0)
for i = 1, #data, 8 do
    local a, b, c, d, e, f, g, h = string.byte(data, i, i + 7)
    freq[1 + (a or 256)] += 1
    freq[1 + (b or 256)] += 1
    freq[1 + (c or 256)] += 1
    freq[1 + (d or 256)] += 1
    freq[1 + (e or 256)] += 1
    freq[1 + (f or 256)] += 1
    freq[1 + (g or 256)] += 1
    freq[1 + (h or 256)] += 1
end
Although using two loops to avoid the or fallback might be slightly worthwhile.
do
    local z = 1 + (a or 256)
    freq[z] = freq[z] + 1
end
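The two-loop version might look like this (untested sketch, with freq and data as in the earlier snippet); the or fallback is then only needed for the short tail:
local mainEnd = #data - #data % 8 -- largest multiple of 8 that fits
for i = 1, mainEnd, 8 do -- main loop: string.byte never returns nil here
    local a, b, c, d, e, f, g, h = string.byte(data, i, i + 7)
    freq[1 + a] += 1
    freq[1 + b] += 1
    freq[1 + c] += 1
    freq[1 + d] += 1
    freq[1 + e] += 1
    freq[1 + f] += 1
    freq[1 + g] += 1
    freq[1 + h] += 1
end
for i = mainEnd + 1, #data do -- tail: at most 7 leftover bytes
    freq[1 + string.byte(data, i)] += 1
end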
I wrote a script to convert my entire codebase to the new compound operators. The script is under 100 lines, but it depends on a lot of other modules that do tokenization/lexing (so it’s operator-priority-safe), which makes it hard to share. It found 519 scripts and removed 22 KB of source.
With batches of 64 the performance is pretty much identical. I had decided to skip \0 because it’s what we’re trying to escape and usually has a high frequency in uncompressed data.
It actually concatenates in batches of 64 because it takes over 2x longer to add each character to an array #data long and concatenate it (even when the table is reused).
It’s interesting that a + 1 takes about 0.95x the time that 1 + a does (not controlling for everything else that’s being done). In most languages it’s just preference, but here we get slightly different instructions.
Perhaps someday we’ll get an instruction that takes the add out of a[b + 1] = c. The addition probably isn’t much compared to converting from a double, though.
I’m just hoping that attributes will support \0 so I don’t need to use this at all. BinaryStringValue also seems like it would be a good alternative if its value were exposed.
Ah, yes, +1 instead of 1+ is a good idea. We currently don’t automatically reorder this because if the right-hand side has an __add metamethod that implements a non-commutative operation, the order becomes significant (although whether __add should be allowed to be non-commutative for numbers is an open question…)
It actually concatenates in batches of 64 because it takes over 2x longer to add each character to an array #data long and concatenate it (even when the table is reused).
Yeah, concats need to be batched for optimal performance. This can likely be faster for large sequences if implemented via table.create and table.concat. I’ve been thinking about a buffer data type that lets you efficiently build up large strings (similar to StringBuilder in Java/C#), which would address this problem more cleanly (this data type is kinda necessary in some internal functions that might need to be optimized in the future).
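In the meantime, something along these lines can be built in Lua today (a rough sketch on top of table.create/table.concat, not the buffer type itself):
local Builder = {}
Builder.__index = Builder
function Builder.new(expectedPieces)
    return setmetatable({parts = table.create(expectedPieces or 16), count = 0}, Builder)
end
function Builder:append(piece) -- piece is a string, e.g. the result of string.pack or string.char
    self.count += 1
    self.parts[self.count] = piece
end
function Builder:build()
    return table.concat(self.parts, "", 1, self.count) -- single concatenation at the end
end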
But yeah, ideally we need to support binary data better in various places. The issue with DataStores is that they use JSON for the actual data storage, which isn’t very friendly to binary data.
I think LuaJIT solves the ordering problem by just having separate variants of the instructions: ADDVV, ADDVN, and ADDNV, for example. It’s a bit more bloated in terms of instruction set size, but it fixes the issue.
I know instruction design is a bit of an art; it’s a judgment call as to whether a particular instruction helps noticeably. We usually prioritize based on the performance of code we see often, and it’s comparatively rare to see this being important (there are some other instructions we are likely to prioritize before these ones).
That would be great to see! Some way to add string.pack’s result to a buffer without creating the string would be really useful too.
To build long data strings I usually add bytes to an array, then use string.char(unpack(array, i, j)) in batches of LUAI_MAXCSTACK - 3 (7997).
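Roughly like this (a sketch of that batching; 7997 is the LUAI_MAXCSTACK - 3 figure above):
local MAX_UNPACK = 7997 -- LUAI_MAXCSTACK - 3
local function bytesToString(bytes, count)
    if count <= MAX_UNPACK then
        return string.char(unpack(bytes, 1, count))
    end
    local chunks = {}
    for i = 1, count, MAX_UNPACK do
        chunks[#chunks + 1] = string.char(unpack(bytes, i, math.min(i + MAX_UNPACK - 1, count)))
    end
    return table.concat(chunks)
end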
Support for combining buffers would be nice; for my game’s save system I often write binary data to bufferB, then store bufferB preceded by its data length in bufferA so I can potentially skip over that data without processing it when loading saves.
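That length prefix maps nicely onto string.pack’s "s4" option, which writes a 4-byte length followed by the payload (small sketch, names made up):
local payload = string.pack("<fff", 1, 2, 3) -- pretend this is bufferB's contents
local section = string.pack("<s4", payload) -- 4-byte length followed by the payload, ready for bufferA
-- When loading, the length alone is enough to skip the section:
local length, afterLength = string.unpack("<I4", section)
local nextOffset = afterLength + length -- index of the first byte after the payload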
Would it make sense to be able to mutate a byte at a previously reserved position in a buffer? A lot of data can be expressed compactly as discrete bytes, but it’s possible to save a lot of space by doing bit packing across different objects; an easy way to do this without compromising on byte/string performance is to allocate 1 byte when writing the first bool, then finalize and set that byte once 8 bits have been added before starting again. When deserializing, it just needs to get one byte when reading the first bool, then extract the bits one at a time before starting again:
local bytes = {}
local bytesLen = 0
local writeByte = function(v)--$inline
    bytesLen += 1
    bytes[bytesLen] = v
end
local bitLevel = 128
local bitValue = 0
local bitBytePosition = 0 -- Position in 'bytes' where 'bitValue' will be stored
local writeBool = function(v)--$inline
    if v then
        bitValue += bitLevel
    end
    if bitLevel == 128 then -- First bit in byte
        bytesLen += 1
        bytes[bytesLen] = 0 -- Allocate
        bitBytePosition = bytesLen -- We will set this once 8 bits are written, or when done serializing.
        bitLevel = 64 -- Next less-significant bit
    elseif bitLevel == 1 then -- Last bit in byte
        print("Byte", bitValue)
        bytes[bitBytePosition] = bitValue -- Set byte in buffer
        -- Reset for next byte
        bitLevel = 128
        bitValue = 0
    else
        bitLevel /= 2 -- Next less-significant bit
    end
end
do -- Write data
    local rng = Random.new()
    for i = 1, 256 do
        local v = rng:NextNumber() < 0.5
        print("Bit", i, v)
        writeBool(v)
    end
end
if bitLevel < 128 then
    bytes[bitBytePosition] = bitValue -- Don't want to forget to add this byte.
end
local data = string.char(unpack(bytes, 1, bytesLen))
print(string.format("Result: %q", data))
Oh, I’ve always just used a string. JSON is harder to mess up and is great for simple readable data, but binary data is really necessary for huge games to scale up and support huge user creations (like houses) and highly persistent worlds (like quests).
A quick question, as I’m curious: what was the primary reason for choosing JSON as the storage format? Personally I tend to write my own formats when I store large data, since it means I can optimize for space efficiency and computation time; JSON is really only useful to me when a custom format would be overkill or JSON is just plain easier.
I originally thought Roblox would reuse pieces of existing formats, like I’ve been seeing a lot elsewhere. For example, I think the HTTP cache uses its own format that stores header info and can contain RBXM files. Obviously no escaping is necessary in the cache file (and I don’t think any is used), so it’s not a great example, but RBXM files are also decent at storing binary content. I’ve noticed an increasing amount of reuse of formats like RBXM for alternative kinds of data; I think recently they were even used to store precompiled core scripts, which shows that RBXM actually has a storage type for Luau bytecode too, which I thought was weird.
But yeah, I have a suspicion that if DataStores weren’t JSON, they could be designed to store full bytes, special characters, and even entire RBXM files like the old Player:SaveInstance function (I miss him, he was so neat), and thus instances at a near 1:1 ratio (or better, though that would take additional work), and probably more computationally efficiently than JSON. So I’d think there’s likely an important technical reason JSON was chosen.
The compression algorithm I’ve slowly been working on (towards some enormous optimizations) suffers enormously, since there are a full 126 values I can’t use for DataStores (and I have to account for JSON escaping, which I’m too lazy to worry about yet, so that reduces my usable space even further). This even affects actual computation time, because I effectively double the amount of data past the decoding stage when compressing, and when decompressing I likewise run into problems where it ends up eating into my pristine (IIRC) 100x-faster performance ratio.
Speaking of which, the new string.pack and string.unpack functions are going to enormously improve performance in my code, because the biggest problem I face is converting 5-10 MB strings into bytes performantly without exploding the game server.
I think this is what bothers me most. The storage endpoint must already be able to work with regular binary data. If SetAsync’s argument is a string, the service could precede it with a single byte not used by JSON and send it on its way. Who wouldn’t want a more performant API that supports compact binary data for huge user creations, along with a pretty nice increase in usable storage space? Not to mention reduced storage overhead, because it’s easier for devs to make save data for highly persistent experiences compact.
JSON is easy for users to work with, human-readable without effort, easy to version, and highly portable. It’s low commitment and easy to recover from if they need to switch to a different back-end solution.
Totally makes sense as the default format IMO. Most developers only store a small blob of player data where it brings a lot of value to store it in a human-readable way with non-strict shapes.
I suppose it maybe makes sense on its own, but I’d personally have expected something better than JSON to be used in Roblox’s case. The benefits of JSON don’t really apply here, because we can never actually see or manipulate any of that JSON ourselves. I still really believe there is (or was) some technical reason, but who knows.
I believe the reason we use JSON as the DataStore format is:
a) This is the transport format of choice for REST APIs, and we need to send the data through a web API
b) DynamoDB, which was (and is) hosting DataStore data, used to only support JSON well back in the day; I think they have options for binary storage now
So we didn’t pick this format specifically because of efficiency. We don’t use JSON in the engine and try to use binary formats in general when performance or memory is vital (see rbxm, shader packs, the HTTP cache, etc.), but here we were working with a system where JSON was a more natural fit, and data size issues only started surfacing way later. Worth noting: it was just announced at RDC that the DataStore limit is going to be 4 MB.