ByteNet | Advanced networking library w/ buffer serialization, strict Luau, absurd optimization, and rbxts support. | 0.4.3

Thanks for your reply.
I'm not sure why regular Roblox remote events seemingly don't have the ID overhead. Do they use another method of identifying where to route the data? Could this potentially be optimized for this module?

I'll probably stick to the regular remote for this specific use case. I can't lose much accuracy, so there's not much more room for compression.

For another function, I used three uint8s to represent orientations (~1.4° accuracy) rather than a Vector3, and that cut network usage in half.
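For illustration, quantizing each Euler angle into one byte might look like the sketch below. The helper names and `someCFrame` are hypothetical, not part of ByteNet's API; 360° spread over 256 steps gives the ~1.4° granularity mentioned above.

```lua
-- Sketch: quantize an orientation into 3 uint8s (~1.4° per step).
local TAU = 2 * math.pi

local function packAngle(radians: number): number
	-- Map [0, 2π) onto [0, 255]; 360° / 256 steps ≈ 1.4° of error.
	return math.floor((radians % TAU) / TAU * 256) % 256
end

local function unpackAngle(byte: number): number
	return byte / 256 * TAU
end

-- Usage with a CFrame's Euler angles (someCFrame is a placeholder):
local rx, ry, rz = someCFrame:ToEulerAnglesXYZ()
local bx, by, bz = packAngle(rx), packAngle(ry), packAngle(rz)
-- Send bx, by, bz as three uint8s, then reconstruct on the other side:
local approx = CFrame.fromEulerAnglesXYZ(unpackAngle(bx), unpackAngle(by), unpackAngle(bz))
```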

It’s my understanding that Roblox also batches packets that have been sent very close to each other, and sends them all at once. Why does this not go over 900B? They probably have an internal splitting function if packets become too large?

I believe this behavior only occurs with reliable remote events.

ByteNet requires the ID overhead because it only uses a single reliable and unreliable remote event; you wouldn’t be able to identify which packet is which otherwise. Not entirely sure how Roblox handles this, but the remote event instances probably also act as IDs somehow.
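To picture why the ID is needed on a single shared remote, here is a minimal dispatch sketch (hypothetical code, not ByteNet's actual internals): the first byte of each incoming buffer routes it to the right handler.

```lua
-- Sketch: routing packets received on one shared remote by a 1-byte ID prefix.
local handlers: { [number]: (buffer) -> () } = {}

local function onPacket(b: buffer)
	local id = buffer.readu8(b, 0) -- packet ID written as the first byte
	local handler = handlers[id]
	if handler then
		handler(b) -- each handler reads its payload starting at offset 1
	end
end
```

Separate RemoteEvent instances make this prefix unnecessary because the instance itself identifies the packet type.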


How to define a mixed table in ByteNet
such as: local a = { UserId = 123, Name = "Jack", Position = Vector3.new(0, 0, 0) }

Do you know if there’s a specific reason why only one of each remote type is used?

Using multiple remotes seems to be the better solution imo. I don’t know the advantages of sending everything through one remote, other than being somewhat easier to code maybe.

You have ‘special’ types: Specials - ByteNet
Map is the closest to what you want, but only allows fixed type key/value pairs. I’m pretty sure this is by design.

With mixed type tables you can’t know how many bytes each element takes. This would need to be stored somewhere, meaning extra overhead per value.

I think the best solution would be to separate the mixed table into multiple tables of fixed types. Just make sure the indices of the tables match, and you’ll be able to recombine them later.
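As a sketch of that split (field names taken from the example question above; the ByteNet packet definitions themselves are omitted), you would turn the array of mixed records into parallel fixed-type arrays:

```lua
-- Sketch: split mixed-type records into parallel fixed-type arrays.
local players = {
	{ UserId = 123, Name = "Jack", Position = Vector3.new(0, 0, 0) },
	{ UserId = 456, Name = "Jill", Position = Vector3.new(1, 2, 3) },
}

local userIds, names, positions = {}, {}, {}
for i, entry in players do
	userIds[i] = entry.UserId
	names[i] = entry.Name
	positions[i] = entry.Position
end

-- Send userIds (array of int32), names (array of string), and positions
-- (array of vec3) as three fixed-type arrays. On the receiving side,
-- index i of each array belongs to the same original record.
```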

Would it be possible to use something instead of attributes for packetIds? The 100 character limit for attributes gets kind of annoying when you have many fields in your packet objects.

Also it seems like there might be a bug in how CFrames are encoded…


My use case is replicating a player's look angle; normally tiny deviations wouldn't matter, but in this case the player just looks a completely different way.

There seem to be a few issues with floats in general: trying to set a field type to float64 gives me a "buffer access out of bounds" error, even if the value is clearly a float64. This causes issues with values like timestamps, which need to be correct to at least the second decimal place.

Saving a Unix timestamp to a float32 gives me an incomplete value, so not being able to use float64s is a pain. Unless I'm doing something wrong, I'm assuming that's a bug.
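The float32 precision loss itself is expected, independent of any ByteNet bug. A float32 has a 24-bit mantissa, so above 2^24 (about 16.7 million) it cannot represent every integer, and current Unix timestamps are around 1.7 billion. A round-trip through Luau's buffer library shows it:

```lua
-- Sketch: why float32 can't hold a Unix timestamp.
local b = buffer.create(4)
local now = 1700000000.25 -- example timestamp with sub-second precision

buffer.writef32(b, 0, now)
local roundTripped = buffer.readf32(b, 0)
-- Near 1.7e9, adjacent float32 values are 128 seconds apart, so the stored
-- value can be off by up to ~64 seconds, and the fraction is lost entirely.
print(roundTripped)
```

So even with the float64 bug fixed, float32 would never be usable for timestamps.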


The advantage of using a single remote is you can merge separate remote invocations in the same frame together into one single remote invocation, saving the 9 (server to client) or 14 (client to server) byte overhead that comes with each reliable remote event invocation.

So if I were to fire a remote event 200 times from the server on the same frame, I would be sending 1800 bytes (of pure overhead) in addition to all the other data. Using ByteNet, these 200 events would be grouped into a single one, resulting in only a 9 byte overhead and 200 bytes for the IDs (almost 90% reduction).

Of course, 200 bytes of overhead is still 200 bytes of overhead. I do see why using a second remote event would help in this case, for events that are fired extremely frequently. This way, the 200 sets of data can be queued, sent, and read separately from the primary queue on the second remote event, reducing the overhead from 200 bytes to 9.
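The merging idea described above can be sketched roughly as follows (hypothetical code, not ByteNet's actual internals; `PATH_TO_REMOTE` is a placeholder): queue outgoing packets during a frame, then flush them all through one remote invocation.

```lua
-- Sketch: per-frame batching so the ~9-byte reliable overhead is paid once.
local RunService = game:GetService("RunService")
local remote = PATH_TO_REMOTE -- placeholder for a shared RemoteEvent

local queue: { buffer } = {}

local function send(packet: buffer)
	table.insert(queue, packet)
end

RunService.Heartbeat:Connect(function()
	if #queue == 0 then
		return
	end
	-- One FireAllClients carries every packet queued this frame,
	-- instead of one invocation (and one overhead) per packet.
	remote:FireAllClients(queue)
	table.clear(queue)
end)
```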

If you’re not firing remote events very frequently, ByteNet (and the other networking libraries such as BridgeNet or Warp, which work the same way) won’t save you much at all; in fact, it’ll cost you extra because it needs to write extra information to identify each outgoing packet.

Using a single remote is probably completely unnecessary for unreliable events, as they do not have the 9 byte overhead.


Thanks for your reply.

I'm writing it 10 times a second though? I'm not sure what you mean by this; as far as I can see, speed shouldn't matter?

I didn't know that UnreliableRemoteEvents don't have the 9-byte overhead. So using multiple remotes for unreliable only would save 2 bytes per remote call then? If so, perhaps it would be good to add that as a feature to the module?

2 bytes isn’t that much though so might not be worth it.

What I meant by “costing you extra” is the size of the data sent. For example, consider the following:

reliableRemote:FireAllClients(workspace:GetServerTimeNow())

The above call takes 9 bytes (overhead) + 9 bytes (f64) = 18 bytes. If you make the same call using ByteNet, it will take 9 bytes + 12 bytes (size 9 buffer: 1 byte ID, 8 bytes f64) = 21 bytes. The bytes used to send a buffer vary based on its length; you can experiment using the following (in a module script):

local UnreliableRemoteEvent = PATH_TO_REMOTE -- placeholder: your UnreliableRemoteEvent

-- 905 bytes of filler fills the unreliable payload budget, so whatever
-- extra arguments you pass show up clearly in the resulting network usage
local BIG_STRING = string.rep("i", 905)

-- The returned function is what payloadSize refers to in the examples below
return function(...)
	UnreliableRemoteEvent:FireAllClients(BIG_STRING, ...)
end

So the converted data in this case is slightly larger. The difference is negligible, but it’s there, and that was what I was trying to point out.

If you used a second unreliable event, you would not need to write a packet ID into your buffer.

You would also not need to write any data about the total length of the serialized payload, since you would simply read the entire buffer (ByteNet arrays and maps, by contrast, prefix the element count as 2 bytes or a VLQ).
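For context, a VLQ (variable-length quantity) length prefix stores 7 bits per byte and uses the high bit as a continuation flag, so small lengths cost a single byte. A sketch (not ByteNet's exact format):

```lua
-- Sketch: VLQ encoding of a length, least-significant 7 bits first.
local function vlqEncode(n: number): { number }
	local bytes = {}
	repeat
		local chunk = n % 128
		n = math.floor(n / 128)
		if n > 0 then
			chunk += 128 -- continuation bit: more bytes follow
		end
		table.insert(bytes, chunk)
	until n == 0
	return bytes
end

print(#vlqEncode(100)) -- lengths up to 127 fit in a single byte
print(#vlqEncode(300)) -- larger lengths grow to two bytes, and so on
```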

Overall, as you said, these reductions are pretty negligible.

However, you still should be merging buffers when possible instead of sending them independently.

Merging buffers of the same structure is beneficial because Roblox sends extra bytes depending on the size of the buffer being sent, and these extra bytes are especially noticeable for smaller buffers. Merging into one large buffer cuts down on this compared to sending several smaller buffers individually. Roblox also compresses large buffers over the network by default when it can, and since compression works best on data with repeating structures, you'll see some nice reductions most of the time.

Example:

-- 40 size 9 buffers merged into one
local x = buffer.create(9 * 40)
for i = 0, 39 do
	local b = buffer.create(9)
	buffer.writef64(b, 0, os.clock() + math.random(-1e5, 1e5))
	buffer.writeu8(b, 8, math.random(0, 250))
	
	buffer.copy(x, i * 9, b)
end

payloadSize(x) -- The module script function

-- Result: ~360 bytes (varies, but not by much)

-- 40 separate size 9 buffers
for i = 0, 39 do
	local b = buffer.create(9)
	buffer.writef64(b, 0, os.clock() + math.random(-1e5, 1e5))
	buffer.writeu8(b, 8, math.random(0, 250))

	payloadSize(b)
end

-- Result: 480 bytes (40 * 12 bytes each)

Will look into it and fix, thank you for reporting


Do you think it would be possible to add a feature that automatically enforces the 900B (soon 905B!?) limit for UnreliableRemoteEvent packets?

I think it can be bypassed by splitting the data across multiple FireClient() calls. I went through the code briefly and I see a ‘bufferWriter’ module. Not sure how to adapt it though.

My use case is the same as above. The following call
ByteNetPackets.NPC_ControllerPos:sendTo({ id = self._id, pos = self._currentCFrame.Position }, player)
is made 100-200 times in quick succession. The module then packs them all into a single packet, but that packet can exceed 900B. If it does, I think it should be split into multiple packets, like normal remote event calls are.

Thanks

Automatically splitting into multiple packets wouldn’t work, because unreliable remotes do not guarantee that data will arrive in order or whether they will arrive at all.

One could build logic on top of it to manage order and arrival, but at that point you've recreated a regular RemoteEvent.

You should send important, larger data over regular RemoteEvents and use unreliable only for small packets that can get lost and arrive in undetermined order.

I’m not sure what you mean? The order of packets doesn’t matter for my use case (many individual calls).

In that case you can easily split the packets at 900 bytes yourself. Doing this automatically in a general-purpose networking library would only confuse its users when the sent data arrives mangled and corrupted on the other side.

Another problem with automatic packet splitting is that you do not control where the splits occur, so it might split between values that must be transferred in one atomic packet; in the end, one lost packet makes both halves useless.
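If you do split manually, a sketch might look like this (assuming every entry serializes to a fixed size, which holds for the id + position packets described above; names are hypothetical):

```lua
-- Sketch: manually splitting fixed-size entries so each unreliable
-- packet stays under the ~900-byte limit.
local ENTRY_SIZE = 16 -- assumed: e.g. 4-byte id + 12-byte position
local MAX_PAYLOAD = 900
local ENTRIES_PER_PACKET = math.floor(MAX_PAYLOAD / ENTRY_SIZE) -- 56

local function sendInChunks(remote, entries: { buffer })
	for first = 1, #entries, ENTRIES_PER_PACKET do
		local last = math.min(first + ENTRIES_PER_PACKET - 1, #entries)
		local chunk = buffer.create((last - first + 1) * ENTRY_SIZE)
		for i = first, last do
			buffer.copy(chunk, (i - first) * ENTRY_SIZE, entries[i])
		end
		remote:FireAllClients(chunk)
	end
end
```

Because each entry is self-contained, a lost chunk only drops those entries, which is acceptable for the position-replication use case described above.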

The issue is that internally bytenet packages them all together if they’re being sent at the same time.

I don't see an issue with an unpredictable split. They're separate packets; splitting or combining them shouldn't change a thing.

How difficult would it be to convert all existing remote events and functions in my game to ByteNet functions? I'm just learning about open source modules like this and I thought it would help my game's performance, but I'm not sure if I could easily replace all my remotes, or if this is something I need to use from the start of a project.


I plan on adding fragmenting automatically

There will be a lot of updates to ByteNet over the next few days


When do we need to send data every frame?


In games there is data that constantly changes, and for those cases we send it every frame and compress it. One example is player inputs: they're changing all the time, so we need to send them to the server constantly. Another example is player positions from the server's world state: these are always changing, so we send them out to clients every tick.

If you have seen those scripts where your character's head points toward where you're looking, they need to send the client's camera angle to the server every frame.

If you have data that only needs to be sent once, as a signal, use a reliable remote event as always; they are designed for exactly that. Examples include a simple input like activated/deactivated, sending a notification to a client, etc.
