Update:
- Replaced counter loops with iterator loops where I could because of an optimization Luau makes; see the sketch after this list
- Made SchemaData.fullinst() accept a dictionary of SchemaData so that less table indexing is done
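For illustration, a minimal Luau sketch of the two loop styles; the table and loop body here are made up and not taken from the module:

local items = { "a", "b", "c" }

-- Counter loop: indexes the table on every iteration
for i = 1, #items do
	print(items[i])
end

-- Iterator loop: lets Luau's generalized iteration do the traversal
for _, item in items do
	print(item)
end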
The CFrame is still bugging out, let me record it real quick.
Edit:
Try constantly sending random CFrames and compressing + decompressing them through the module. You will see the issue quickly.
I am unable to record since I am currently on my laptop, but just try what I said and print things out @athar_adv
Erm, make it more frequent, like every 1/60s. I am using the module for head movement replication.
I myself use this module for character replication, so I would know if it was buggy. Are you sure you don’t have an older version and/or it’s a problem with your replication?
No, it's not a problem with my replication; it literally happened when I was passing a CFrame to an object.
Well I dunno, the module works fine. One thing I would check is whether you are setting the CFrame on the server and the client at the same time, as that would cause stuttering. (My character replication doesn't set the CFrame on the server for that reason :V)
It's not even stuttering; the literal CFrame goes to nil. Stuttering is a different issue.
I wonder, can you send a test place for me to replicate and fix the issue?
- Added an optional parameter to SchemaData.cframe() which, if set to true, will check for these special cases
- Added SchemaData.normalNum(base: number, sizeType: SizeType?), which will serialize numbers relative to base, meaning less space
- Added SchemaData.obj(), which can serialize objects using the compressed object format (however at the cost of only being able to store primitive types)
Would implementing this into my framework's remote module be of any benefit?
Sure, why not? It has quite a few useful types like unions, static types, full instances, instance references, bit arrays, etc. that other buffer serdes don't have.
I was speaking more about the benefits for optimization and networking.
Like serialization times? It is quite fast; I serialized 1k+ instances in less than 0.009s last time I measured, and it's even faster for the more primitive datatypes.
I've never messed around with buffers, so sorry if this is a seemingly obvious question, but how would I serialize instances with this? And can I serialize all instances? This module would be a great help for my upcoming plugin and would make instance serializing much faster.
Hi, yes, you can serialize all instances fully if they can be reconstructed with Instance.new(). If you wish to just serialize instance references, you can do that too with SchemaData.instref. Here's an example of fully serializing a part vs just a reference to a part in workspace:
-- Whatever path you have, this is mine
local Converter = require(game.ReplicatedStorage.Modules.Utility.Converter)
local data = Converter.data

local full = data.fullinst("Part", {
	["CFrame"] = data.cframe(),
	["Size"] = data.vec3(),
	["Color"] = data.color,
	["Name"] = data.string("u8")
})
local ref = data.instref

do
	print(workspace.Example.CFrame) -- 10, 20, 32, ...

	-- Serializes Example as a full instance with the properties specified in the schema;
	-- much bigger and slower, but fully reconstructs the entire instance upon deserializing
	local buf, _, size = Converter.serialize(full, workspace.Example)
	print(size) -- 40

	local part = Converter.deserialize(full, buf)
	print(part.Name) -- Example
	print(part.CFrame) -- 10, 20, 32, ...
	print(part.Parent) -- nil
end

do
	-- Serializes Example as a reference, much smaller
	local buf, _, size = Converter.serialize(ref, workspace.Example)
	print(size) -- 5

	local part = Converter.deserialize(ref, buf)
	print(part.Parent) -- Workspace
	print(part.Name) -- Example
end
- Added a UniqueIdLength attribute to the module for you to specify the length of all unique ids assigned to Instances for the purpose of instance reference serialization (see the sketch after this list)
- Reduced SchemaData.instref overhead by 1 byte, which is massive if you have a lot of instance references (for example, 400 bytes for an array of 400 instance references instead of 500 if the unique id length is 4)

Preventing too many unique id assignments in a single frame
- Made it so that if more than 100 unique id assignments are made in less than 1/30s, the next assignments are deferred to the end of the frame. (Hopefully this should work.) If you wish to modify the thresholds, simply access Converter -> UniqueIds -> Server
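Assuming UniqueIdLength behaves like a standard Roblox instance attribute on the Converter ModuleScript (an assumption on my part; the path is the one from the earlier example), configuring it might look roughly like this:

-- Hypothetical sketch: set the attribute on the Converter ModuleScript;
-- exactly when the module reads it is an assumption on my part
local ConverterModule = game.ReplicatedStorage.Modules.Utility.Converter
ConverterModule:SetAttribute("UniqueIdLength", 4) -- 4-byte unique ids

local Converter = require(ConverterModule)
local data = Converter.data

-- With 4-byte ids and the 1-byte overhead removed, a single instance
-- reference should take about 4 bytes instead of the 5 shown earlier
local buf, _, size = Converter.serialize(data.instref, workspace.Example)
print(size) -- expected ~4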
- Added the long-awaited SchemaData.infer(argsList: {[string]: any}), which will infer datatypes, preferring ones present within argsList. If the schema is a constructor function, the arguments passed to it will be determined by its entry in argsList.
Example:
local schema = data.infer {
	["number"] = "u8",
}
local buf, _, size = Converter.serialize(schema, 103)
-- 1 byte of tag overhead
print(size) -- 2
print(Converter.deserialize(schema, buf)) -- 103, true, 2
Compared to:
local schema = data.infer {
	["number"] = "u32",
}
local buf, _, size = Converter.serialize(schema, 103)
-- 1 byte of tag overhead
print(size) -- 5
print(Converter.deserialize(schema, buf)) -- 103, true, 5
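To illustrate the constructor-function sentence above, here is a hedged sketch; whether infer handles strings this way, and the resulting sizes, are assumptions on my part:

local Converter = require(game.ReplicatedStorage.Modules.Utility.Converter)
local data = Converter.data

-- Assumption: the "string" entry is passed to the string constructor as its
-- size type, mirroring data.string("u8") from the earlier example
local schema = data.infer {
	["number"] = "u8",
	["string"] = "u8",
}
local buf, _, size = Converter.serialize(schema, "hi")
print(size) -- treat the exact value as module-dependent, not measured
print(Converter.deserialize(schema, buf)) -- "hi", plus the extra return values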
- Added SchemaData.localnum(base: number, sizeType: SizeType?), which is useful when your numbers are big but you know what range they will be in, so it will use less space. Instead of serializing the full number, it serializes its difference from base.
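A rough sketch of the localnum idea, using made-up values; the exact serialized size depends on the module:

local Converter = require(game.ReplicatedStorage.Modules.Utility.Converter)
local data = Converter.data

-- Timestamps are large numbers, but if they all sit near a known base,
-- only the difference from that base has to be written
local sessionStart = 1700000000
local timestampSchema = data.localnum(sessionStart, "u16")

local buf, _, size = Converter.serialize(timestampSchema, sessionStart + 4210)
print(size) -- small: only the 4210 offset is stored
print(Converter.deserialize(timestampSchema, buf)) -- 1700004210, plus the extra return values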
- Added SchemaData.s_bitarray(bitLength: number), which has a constant bit length. Useful if you don't want to allocate much space for storing flags.
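A hypothetical sketch of using a constant-length bit array for flags; whether it accepts a plain array of booleans like this, and the exact size, are assumptions on my part:

local Converter = require(game.ReplicatedStorage.Modules.Utility.Converter)
local data = Converter.data

-- 8 flags packed into a constant 8-bit array
local flagSchema = data.s_bitarray(8)

-- e.g. isSprinting, isCrouching, isSwimming, ...
local flags = { true, false, false, true, false, false, true, false }

local buf, _, size = Converter.serialize(flagSchema, flags)
print(size) -- expected to be about 1 byte for 8 flags
print(Converter.deserialize(flagSchema, buf)) -- the same booleans back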