Questions about Networking Libraries such as Warp, FastNet2, Etc

Hi.

So I’ve been doing some research on libraries like FastNet2, ByteNet, Warp, etc., and they all seem incredibly useful for bringing down network usage in your game. I have read the documentation for a lot of these; however, I’m still a little confused about how they work and how they’re meant to be used.

For my current game, I’m pretty much entirely using remotes to send messages between the server and client. So far I haven’t been getting lag issues or any bugs, but these modules intrigue me because they seem to provide a system that may work “better” than remote events(??). I have a few questions about these networking libraries before adding them to my game; hopefully they can be answered for someone with a pea brain like mine.

  1. How are these meant to be used? Do these modules completely replace Roblox’s built-in RemoteEvent/RemoteFunction systems, or will I have to find a balance?

  2. How secure are these libraries against exploiters? Can exploiters manipulate them to their advantage?

  3. Which one is the most efficient/least expensive performance-wise? I remember seeing some benchmarks suggesting Warp could be the best, but which one should I really be using for my own needs?

If I have any more questions, I’ll likely just add them in replies or edit this post. I hope answering these doesn’t inconvenience anyone.


They should be used in place of normal remote events and functions (note that some libraries don’t support remote functions).

No, they add a layer of abstraction on top of Roblox’s remote events and networking systems to make them easier to work with.

They work by adding remote event invocations to a queue that gets emptied every PostSimulation (Heartbeat) step; the accumulated data is then sent all at once through a single reliable or unreliable remote event. To differentiate events in the data, they assign each one a unique ID (which is shared with the client; I believe some of them create a folder with attributes for this), which then also needs to be sent.

local ShootEvent = DeclareEvent("Shoot")
-- Shoot will receive an ID of, let's say, "0" (string)

local ReloadEvent = DeclareEvent("Reload")
-- Reload will receive an ID of "1"

ShootEvent:FireClient(player, data)
--[[
	In the player's queue now:
	
	{
		["0"] = contents of data,
	}
]]

ReloadEvent:FireClient(player, otherData)
--[[
	In the player's queue now:
	
	{
		["0"] = contents of data,
		["1"] = contents of otherData
	}
]]

When the recipient receives the table, they read the IDs for each set of arguments and will know which event they are for; thus, they can call all related event handlers.
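Roughly, the whole pattern looks something like this. This is only a sketch of the idea, split into a server half and a client half; the names (BatchRemote, queue, handlers) are made up for illustration and aren’t any specific library’s API, and real libraries queue a list per ID so the same event can fire more than once in a frame:

local RunService = game:GetService("RunService")
local ReplicatedStorage = game:GetService("ReplicatedStorage")

-- One RemoteEvent carries every queued event (hypothetical instance name).
local BatchRemote = ReplicatedStorage:WaitForChild("Batch")

-- SERVER SIDE: accumulate fires, then flush once per Heartbeat.
local queue = {} -- [player] = { [eventId] = packed arguments }

local function fireClient(player, eventId, ...)
	queue[player] = queue[player] or {}
	queue[player][eventId] = table.pack(...)
end

RunService.Heartbeat:Connect(function()
	for player, batch in pairs(queue) do
		BatchRemote:FireClient(player, batch) -- one remote call per player per frame
	end
	table.clear(queue)
end)

-- CLIENT SIDE: read the ID for each set of arguments and dispatch to the matching handler.
local handlers = {
	["0"] = function(...) print("Shoot fired with:", ...) end,
	["1"] = function(...) print("Reload fired with:", ...) end,
}

BatchRemote.OnClientEvent:Connect(function(batch)
	for eventId, arguments in pairs(batch) do
		local handler = handlers[eventId]
		if handler then
			handler(table.unpack(arguments, 1, arguments.n))
		end
	end
end)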

The way they “work better than remote events” is by saving bandwidth (how much data is transferred, in bytes) via the queuing functionality. Say I have two different remote events, “A” and “B”, and fire them on the same frame. Each remote event sends its respective data but also a small overhead of 9 bytes. That’s 18 bytes in total, plus the data you sent. With a networking library, events “A” and “B” are grouped into a single remote event call, reducing the overhead from 18 bytes to 9 bytes. Now, there’s still going to be some extra data for the IDs, but the ID overhead is still less than those 9 bytes.

Specifically, assuming that the events are indexed “0” and “1,” the ID overhead will be 6 bytes (3 bytes for each string), so you’ll be saving 3 bytes. The improvements scale more dramatically when firing, say, 100 events in the same frame.
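To put rough numbers on that scaling, using the same 9-bytes-per-remote and 3-bytes-per-ID figures from above:

-- 100 events fired in the same frame (approximate figures from above):
-- separate remotes: 100 calls * 9 bytes            = 900 bytes of overhead
-- batched:          1 call * 9 bytes + 100 * 3     = 309 bytes of overhead
-- roughly 591 bytes of overhead saved every frame, before counting the actual data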

Yes, they will be able to read the arguments passed because the libraries still use the Roblox remote event under the hood. It’ll be more difficult with ByteNet, though, because everything gets encoded in a buffer, but given enough time, they’ll figure everything out. Exploiters can also manually fire them and pass in whatever arguments they’d like.

The most performant library is probably ByteNet. The source code is littered with micro-optimizations and takes full advantage of native code generation, so it runs extremely quickly. It’s also the best for reducing bandwidth because it makes full use of buffers (will take some time to learn and get used to as you need to specify argument types, so make sure to really read the documentation). If you want to use something less foreign, the other networking libraries are fine.
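To give an idea of why buffers help so much (this uses plain Luau’s built-in buffer library, not ByteNet’s actual API, and the packet/field names are made up), declaring the argument types up front lets you pack, say, a position and a small health value into exactly 13 bytes:

-- Hypothetical encoder/decoder for a "Shoot" packet using the Luau buffer API.
local function encodeShoot(position: Vector3, health: number): buffer
	local b = buffer.create(13) -- 3 * f32 (position) + 1 * u8 (health)
	buffer.writef32(b, 0, position.X)
	buffer.writef32(b, 4, position.Y)
	buffer.writef32(b, 8, position.Z)
	buffer.writeu8(b, 12, health) -- assumes health fits in 0-255
	return b
end

local function decodeShoot(b: buffer): (Vector3, number)
	local position = Vector3.new(
		buffer.readf32(b, 0),
		buffer.readf32(b, 4),
		buffer.readf32(b, 8)
	)
	return position, buffer.readu8(b, 12)
end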


Corrupted has done a great job at answering this, so I’ll just add a quick word on usage and premature optimisation.

If your game isn’t experiencing lag or stuttering, and each player’s network receive (check the performance stats in-game) is typically well below 50 KB/s, these modules might not actually benefit you. As Corrupted pointed out, these modules take time before and after firing the event to modify the data package, compress multiple events into one, replace your data and add extra data of their own. In cases where they use buffers, that’s extra time serialising the data into bits and bytes too. Despite the marketing of these being really fast modules (for the record, they are; considering what they do and the lengths their authors go to to keep them optimised and their overhead small, calling them fast is accurate), from call to handler they’re all slower than directly firing an event. You’re sacrificing speed to maximise data throughput.

If you’re only using a few remotes, you aren’t firing them very regularly, and they don’t hold a lot of data? These modules could initially seem to harm network performance: they’re adding extra data overhead, they’re taking extra time, and their data compression doesn’t have enough raw data to compress anything efficiently. Their overhead could outweigh their optimisations.

But if you’re firing a couple of events every single frame? A couple of clumped large packets, or lots of smaller packets of data? Potentially lots of repeated data? Take a weapon system with automatic guns, or character/NPC replication, for example. That’s where these modules really shine. They are an incredible asset that can absolutely improve your data throughput in these situations. Some, if not all, of them even have facilities catering to UnreliableRemoteEvent usage and can use time stamping to ensure basic ordering, so they can really help you maximise your network throughput.
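For what it’s worth, the time-stamping trick is simple enough to sketch against a plain UnreliableRemoteEvent (this isn’t any particular library’s implementation, and “PositionUpdate” is just a made-up remote name): stamp each packet with the server time and have the client drop anything older than the last packet it applied.

local ReplicatedStorage = game:GetService("ReplicatedStorage")
local PositionRemote = ReplicatedStorage:WaitForChild("PositionUpdate") -- an UnreliableRemoteEvent

-- SERVER: stamp each packet with the current server time.
local function sendPosition(player, position)
	PositionRemote:FireClient(player, workspace:GetServerTimeNow(), position)
end

-- CLIENT: only apply packets newer than the last one we used.
local lastTimestamp = 0
PositionRemote.OnClientEvent:Connect(function(timestamp, position)
	if timestamp <= lastTimestamp then
		return -- an older packet arrived late; drop it
	end
	lastTimestamp = timestamp
	-- apply `position` to the replicated character/NPC here
end)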

I’d suggest not mixing networking modules and vanilla event handling, purely because the more data that gets sent through one of these modules, the better it will perform. If you already have a couple of events firing every frame through a networking module, and one that fires every time a player joins, you’d be best putting the player-joined event through the module too. The extra data just gives the module more opportunity to optimise further.

TL;DR: Networking modules have their use cases. Don’t use them unless their optimisations can actually outweigh their costs. Lots of data sent in quick succession? These modules will work like a champ and maximise throughput. Small amounts of data sent over a longer timeframe? There’s only so much they can do, and that’s not what they’re designed for. You’d probably be better off with vanilla event handling.


Well, in my case, I have remotes that fire automatically as often as every 0.025 seconds (for a minigun weapon that has a buff). If I want to make the most out of these libraries, would I have to put practically ALL of my events through them to get the benefit? I don’t know if I want to do that, because some events fire at most every 3 minutes or so.

Thanks though for your input.

Reading all of this was actually very useful! Thank you!!!
