Think of remotes like the Transport layer of the OSI model, the 7-layer model that explains how the internet actually works. There are two protocols you should care about: UDP (User Datagram Protocol) and TCP (Transmission Control Protocol).
TCP is reliable, meaning the data is guaranteed to reach the destination, but does that mean you should always use it? TCP is great for transferring data, but it carries a lot of overhead: it has to perform the TCP 3-way handshake (beyond the scope of this post; all you need to know is that it makes sure the destination is expecting the message) and it keeps track of sequence numbers so everything arrives in order.
UDP is best effort, meaning it will send the message, but there's no guarantee it arrives in the right order, or that the destination knows ahead of time that anything is coming. Think of best effort like those random advertisements in the mail that we all throw out: you don't know they're coming, one could show up late and another too early. (No, I am not saying the postal service is unreliable.)
UDP is commonly used for video streaming. If you miss a frame, your eyes probably won't notice.
Long story short: for events you might be firing hundreds of times per second or minute, try the UDP-style option, unreliable remote events.
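Here's a minimal sketch of what that looks like in practice, contrasting a normal RemoteEvent (TCP-like) with an UnreliableRemoteEvent (UDP-like). The instance names ("AbilityUsed", "PositionSync") are just examples:

```lua
-- Server Script: a reliable RemoteEvent next to an UnreliableRemoteEvent.
local ReplicatedStorage = game:GetService("ReplicatedStorage")

-- Reliable (TCP-like): guaranteed, ordered delivery. Use for one-off events
-- that must arrive, like an ability activating.
local abilityUsed = Instance.new("RemoteEvent")
abilityUsed.Name = "AbilityUsed"
abilityUsed.Parent = ReplicatedStorage

-- Unreliable (UDP-like): may be dropped or arrive out of order. Use for
-- high-frequency, throwaway updates that get replaced moments later anyway.
local positionSync = Instance.new("UnreliableRemoteEvent")
positionSync.Name = "PositionSync"
positionSync.Parent = ReplicatedStorage

positionSync.OnServerEvent:Connect(function(player, position)
	-- Losing one of these doesn't matter; the next update overwrites it.
	print(player.Name, "reported", position)
end)
```

On the client you fire it exactly like a normal remote (`FireServer`), so switching an event between reliable and unreliable is usually a one-line change.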
From personal experience, there is no real performance difference either way, but to prevent spaghetti code, try one remote per movement/ability.
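As a rough sketch of what "one remote per ability" could look like (the folder and ability names here are made up):

```lua
-- Server Script: one RemoteEvent per ability, grouped in a folder so each
-- ability's traffic stays easy to find. Names are hypothetical.
local ReplicatedStorage = game:GetService("ReplicatedStorage")

local abilities = Instance.new("Folder")
abilities.Name = "Abilities"
abilities.Parent = ReplicatedStorage

for _, abilityName in ipairs({ "Dash", "Fireball" }) do
	local remote = Instance.new("RemoteEvent")
	remote.Name = abilityName
	remote.Parent = abilities
end

-- Each ability handler listens only to its own remote, keeping logic separate.
abilities.Dash.OnServerEvent:Connect(function(player)
	print(player.Name, "dashed")
end)
```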
Unless you can control the networking infrastructure and the strength of everyone's 802.11 (Wi-Fi, aka Wireless Fidelity) connections, there is no way to control latency, the time it takes data to travel between source and destination. Wired is always faster than wireless, unless you're with IBM's "HoneyOptic" project (I could've gotten the name wrong).
Just like drinking too much water can cause water intoxication and too much oxygen can cause oxygen toxicity, too much of anything will cause problems. The best way to combat it is to use remotes wisely and condense traffic when possible.
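One common way to condense traffic is batching: queue changes locally and fire a single remote on a fixed interval instead of one remote per change. A rough sketch, assuming a remote named "StateSync" and a 0.1-second flush interval (both are just placeholders):

```lua
-- Client LocalScript: batch many small changes into one remote call.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local stateSync = ReplicatedStorage:WaitForChild("StateSync")

local pending = {}

-- Anywhere in the client code, queue a change instead of firing immediately.
local function queueUpdate(key, value)
	pending[key] = value
end

-- Flush the whole batch roughly ten times per second.
task.spawn(function()
	while true do
		task.wait(0.1)
		if next(pending) ~= nil then
			stateSync:FireServer(pending)
			pending = {}
		end
	end
end)
```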
My suggestion: use Maid to help combat memory usage (it cleans up connections and instances you would otherwise leak), and a networking package like Warp to help with networking.
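The usual Maid pattern looks roughly like this; the exact API depends on which Maid implementation you grab (this assumes the common GiveTask/DoCleaning style), and the require path below is hypothetical:

```lua
-- A minimal sketch of the usual Maid pattern for cleaning up connections.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local Maid = require(ReplicatedStorage.Packages.Maid) -- hypothetical path

local maid = Maid.new()

-- Hand every connection to the maid instead of tracking it by hand.
maid:GiveTask(workspace.ChildAdded:Connect(function(child)
	print("added:", child.Name)
end))

-- Later, when the ability/round/screen is done, one call cleans everything up,
-- which is what keeps dead connections from piling up in memory.
maid:DoCleaning()
```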