Hi, I'm working on a server-client data system for an inventory, and I've run into a limitation of RemoteEvent/RemoteFunction. Since it's important for me to sync the server's and client's inventories whenever they change, I need to get the size of the data (a table) to optimize how much data I send.
tl;dr
Don’t want to exceed bandwidth limits, need data size for optimization purposes.
If you want the size of a table with sequential numeric indices (an array), use the length operator: local count = #t. Note that # only counts the array part, so it returns 0 for a pure dictionary. (Also avoid naming your variable table, since that shadows the standard table library.)
If you have a dictionary, just iterate through the table and count 1 for each entry.
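A minimal sketch of both cases (variable names are just for illustration):

```lua
-- Array part: the length operator works directly.
local items = { "sword", "shield", "potion" }
print(#items) -- 3

-- Dictionary: # is unreliable, so count entries manually.
local inventory = { sword = 1, shield = 2, potion = 5 }

local function countEntries(t)
	local count = 0
	for _ in pairs(t) do
		count = count + 1
	end
	return count
end

print(countEntries(inventory)) -- 3
```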
With that said, your inventory system would have to be really inefficient for you to start worrying about bandwidth. Do yourself a favor and don't worry about it until errors start popping up or your ping starts noticeably spiking.
I'm not sure, but I think that while data is serialized before being sent, it is also compressed, which is logical given that we want to stress the network as little as possible. So the size you measure locally may not be the same as what actually goes over the wire.
Possibly, but it's still a good indicator of how much data you are sending. I don't know the limits for sure, but I know 200k is the limit for scripts (unless created by hand). That might be the limit here too. Either way, that's a lot of data to be passing around.
I'm pretty sure that more than just JSON is used for data sent over remotes. You can send things like Vector3s, CFrames, or Color3s, which JSONEncode does not support.
It's probably possible to get the size of the data you're sending, but that sounds unnecessary to me. If you know the data will be too big, you can split it into "pages" or "chunks". If you don't know whether it will be too big, leave it unoptimized and only optimize once you see a problem. It's not reasonable to pre-optimize everything. You only need the highly-optimized fit-as-much-data-as-you-can-into-each-request strategy if the simpler split-into-arbitrary-pages strategy doesn't work for you.
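The "pages/chunks" idea can be sketched like this: split a large array into fixed-size pages, then send each page in its own remote call. The chunk size here is an arbitrary assumption; how you fire each chunk (e.g. one FireClient per page) is up to your setup.

```lua
-- Split an array into pages of at most chunkSize elements each.
-- Each returned chunk could then be sent separately over a RemoteEvent.
local function splitIntoChunks(items, chunkSize)
	local chunks = {}
	for i = 1, #items, chunkSize do
		local chunk = {}
		for j = i, math.min(i + chunkSize - 1, #items) do
			table.insert(chunk, items[j])
		end
		table.insert(chunks, chunk)
	end
	return chunks
end
```

For example, 120 items with a chunk size of 50 yields three pages of 50, 50, and 20 items.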
Are you trying to send the entire inventory between server and client each time? You should be able to update the inventory by only sending changes to it. You can allow the client to request the whole state of the inventory if it ever gets confused, but you should be able to program it such that the client always knows whether an inventory operation succeeded or failed, and can update its copy accordingly without having to receive the whole inventory again.
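A minimal sketch of sending only changes: the server fires a small "delta" message and the client applies it to its local copy. The delta shape here ({item = ..., delta = ...}) is an assumption for illustration, not something from the original post.

```lua
-- Apply one inventory change to a local copy of the inventory.
-- inventory maps item name -> count; change.delta may be negative.
local function applyDelta(inventory, change)
	local newCount = (inventory[change.item] or 0) + change.delta
	if newCount <= 0 then
		inventory[change.item] = nil -- remove depleted items entirely
	else
		inventory[change.item] = newCount
	end
end
```

The client runs this on every delta it receives, so only a few bytes per change cross the network instead of the full table.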
This. Whenever you are trying to keep a large collection of data in sync across multiple copies over the internet, this is the first optimization you do (and often the only one that really matters). Implemented correctly, it should not be possible for the client copy to get out of sync except in the case of network problems (connection interrupted, lost packets, etc.). In these cases, where updates were actually lost, you can do the full re-send. If things are getting out of sync without network issues, there is probably just a bug. Sending everything “just in case” is not a great solution for this case.
Note also that you should have some strategy for verifying that client copies are in sync with the server, without having to request the full data to compare. There are lots of ways to detect things have gone wrong, including having the server sequentially number changes to the dataset (so that client knows if it missed an update), or computing some sort of checksum for the server data to send with each update, which the client then also computes and compares.
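Both detection ideas above can be sketched together: the server numbers each update sequentially and attaches a cheap checksum of its full dataset, and the client checks both before applying the delta. The checksum below is a toy multiplicative hash over sorted key=value strings, purely illustrative; any real checksum over a canonical serialization would do.

```lua
-- Toy order-independent checksum: hash sorted "key=value" strings so two
-- tables with the same contents produce the same value.
local function checksum(inventory)
	local keys = {}
	for k in pairs(inventory) do
		table.insert(keys, k)
	end
	table.sort(keys)
	local hash = 17
	for _, k in ipairs(keys) do
		local s = k .. "=" .. tostring(inventory[k])
		for i = 1, #s do
			hash = (hash * 31 + string.byte(s, i)) % 2^31
		end
	end
	return hash
end

-- Client-side check: request a full re-send if an update was missed
-- (sequence gap) or the local copy no longer matches the server's.
local function clientShouldResync(lastSeq, seq, serverChecksum, localChecksum)
	return seq ~= lastSeq + 1 or serverChecksum ~= localChecksum
end
```

If clientShouldResync returns true, the client falls back to requesting the whole inventory; otherwise it just applies the delta and stores the new sequence number.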