When it comes to debris and gibs (e.g. NPC ragdolls, dead bodies, etc.):
I think we need a method to sync and desync a model's physics and properties (and those of its descendants) from the server, so that if the server destroys the instance, it is not destroyed on the client side and can keep being physically stepped there.
I know I could just clone the model client-side when the NPC dies, but there's definitely overhead. Not to mention that this also removes the possibility of resyncing to the server.
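A minimal sketch of that clone-on-death approach as it has to be done today (the "NpcDied" RemoteEvent name and the 10-second lifetime are assumptions):

```lua
-- LocalScript sketch: clone the ragdoll before the server destroys it.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local npcDied = ReplicatedStorage:WaitForChild("NpcDied") -- assumed RemoteEvent

npcDied.OnClientEvent:Connect(function(npcModel)
	-- With high ping, the server may already have destroyed the model
	-- by the time this event arrives, so the clone can silently fail.
	if npcModel == nil or npcModel.Parent == nil then
		return
	end
	local ragdoll = npcModel:Clone()
	ragdoll.Parent = workspace
	task.delay(10, function()
		ragdoll:Destroy() -- the client owns cleanup from here on
	end)
end)
```

This is the overhead in question: a full deep clone of the model, plus a race against the server's Destroy.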
Concepts like client-side functions: Instance:DesyncFromServer() and Instance:SyncToServer().
Server-side functions: Instance:DesyncForClient(player: Player) and Instance:SyncForClient(player: Player)
Of course, once synced with the server, the server is always the authority.
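To illustrate, a hypothetical usage sketch of the proposed methods (none of these exist in the engine today; the model paths are examples):

```lua
-- Server-side sketch: stop replicating a dead NPC, then free the server copy.
local Players = game:GetService("Players")
local npcModel = workspace.Npcs.Zombie -- example path

for _, player in Players:GetPlayers() do
	npcModel:DesyncForClient(player) -- proposed API, does not exist
end
npcModel:Destroy() -- each client keeps its now-independent copy

-- Client-side sketch: hand an object back to server authority later.
local someModel = workspace.Debris.Crate -- example path
local ok = someModel:SyncToServer() -- proposed API, does not exist
if not ok then
	-- the server copy is gone; treat it like a client-spawned object
end
```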
What's your use case? This doesn't seem very practical, there are ragdoll methods that don't require client desync, and it seems like it would not align with Roblox's client-server philosophy, where the server is the ground truth for the "real world."
There's not much overhead for debris being created on the client if you're optimizing it well; you just need to expire it, or keep track of it and clean it up eventually. Desync seems like it could introduce memory leaks, especially for less experienced developers who don't know to keep track of it. For example, your ragdoll would be destroyed on the server after being desynced, but how would you resync the already-destroyed instance to get rid of it on the client?
More replication control in general would be nice. The biggest use case for me right now is disabling replication for server-side hitboxes. Currently, I have potentially thousands of entities running in my game at once with deterministic behaviour - the client merely plays animations and effects. I don't need the server-side hitboxes to replicate at all, but they do, and they consume a lot of network receive bandwidth.
To get around this, I have scrapped parenting the hitboxes to workspace entirely and have had to create custom collision detection, an octree, and a rudimentary assembly system. I'd much rather use Roblox's built-in collision detection (which is far more optimized, as it runs directly in C++) and also avoid the headache of searching for bugs.
For now, you can place your server-side instances in a Camera. For some reason, a Camera does not replicate across the server-client boundary, but objects inside it are still physically simulated.
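A sketch of that trick (the container and part names are arbitrary):

```lua
-- Server-side Script: a Camera's children do not replicate to clients,
-- so anything parented under it stays server-only.
local hiddenContainer = Instance.new("Camera")
hiddenContainer.Name = "ServerOnly"
hiddenContainer.Parent = workspace

local hitbox = Instance.new("Part")
hitbox.Size = Vector3.new(4, 4, 4)
hitbox.Anchored = true
hitbox.CanCollide = false
hitbox.Parent = hiddenContainer -- never reaches any client
```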
There are some probable use cases for a manual server/client sync and desync request. It just boils down to (once again, I'm requesting another feature for more engine control) giving developers more control over what they can create. It's less about how they use the tool and more about providing the tools.
In my long experience developing large moving creatures in my game, when it comes to server/client physics, mover constraints have to be destroyed on the client side for server-side movement to be less jittery. I have some assumptions about why that happens, but that's not the point. I think this hacky solution could break if Roblox makes some unexpected change, as this might not be an officially supported design.
Another use case would be hiding information from clients other than the main client. I've requested something similar before because I had a cutscene with a server-controlled NPC for each client. This cutscene is played independently per client, with server-side scripts controlling pathfinding and shooting at things. These can't be client-sided because they do things requiring server authority. So I needed to destroy the other clients' copies of the NPC so it isn't rendered for them, but the engine would still attempt to replicate data for movement and position even though the NPC is unparented from their workspace.
If I had a desync method, in my ragdoll use case:
1. When an NPC dies, it gets moved to another folder.
2. The server calls :Desync() on the NPC model and its descendants for all clients.
3. Each client enables ragdoll joints and constraints (since it's desynced, each client naturally has independent network ownership of the model).
4. The server can clean up the model without deleting the clients' copies.
5. Client-side scripts handle cleanup.
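Step 3 could look roughly like this on the client, assuming the server fires a "NpcDesynced" RemoteEvent right after calling the (hypothetical) :Desync():

```lua
-- LocalScript sketch: go limp by swapping rigid Motor6Ds for ball sockets.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local Debris = game:GetService("Debris")
local npcDesynced = ReplicatedStorage:WaitForChild("NpcDesynced") -- assumed

npcDesynced.OnClientEvent:Connect(function(npcModel)
	for _, joint in npcModel:GetDescendants() do
		if joint:IsA("Motor6D") then
			local a0 = Instance.new("Attachment")
			local a1 = Instance.new("Attachment")
			a0.CFrame, a0.Parent = joint.C0, joint.Part0
			a1.CFrame, a1.Parent = joint.C1, joint.Part1

			local socket = Instance.new("BallSocketConstraint")
			socket.Attachment0, socket.Attachment1 = a0, a1
			socket.Parent = joint.Part0

			joint:Destroy() -- the rigid joint gives way to the constraint
		end
	end
	Debris:AddItem(npcModel, 15) -- step 5: client-side cleanup
end)
```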
If anyone has experience building this in the current engine by cloning the NPC model as soon as it dies, you'd find that if the player has high ping, the server may have deleted the model before the client could clone it. That's another issue that affects the smoothness of the transition from rigid body to debris.
Of course, in the case where the server has already destroyed the object, resyncing would definitely not be possible. The method would just need to return the request result so that developers can handle a failed sync themselves. It would be as if the client had spawned that object itself.
That's a rather neat and interesting hack. Though, for now is definitely the correct choice of words - I'd rather avoid relying on unintended behavior, because if it gets changed I'd have to re-implement my current system all over again later.
Another use case could be player fog of war where the player’s character is only synced to other clients when they are visible. This could be a tool to fight exploits by letting server scripts determine whether or not PlayerA’s character should be synced to PlayerB.
I’ve been searching for workarounds and the methods I’ve found are not exactly ideal so far.
Method:
1. The server sets the model's parent to a server-side Camera and fires a RemoteEvent to the client with the model and its original parent.
2. The client receives the model fine and reparents it to the original parent.

Problems: the transition is quite seamless, but:
- Server and client scripts inside the model will re-run. Not ideal if the model is an Actor and requires the scripts to be inside it.
- Client-side physics sometimes won't update properly: an estimated 80% of the time the client will take ownership of the physics; the other 20% the model will just show up as red in the debug visualization and its physics won't be processed.
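For reference, a sketch of that hand-off (the "HandOff" RemoteEvent is an assumed name; the two halves live in separate scripts):

```lua
-- Server-side Script: hide the model from replication, then tell the
-- client where it used to live.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local handOff = ReplicatedStorage:WaitForChild("HandOff") -- assumed RemoteEvent

local serverCamera = Instance.new("Camera")
serverCamera.Parent = workspace

local function handToClient(player, model)
	local originalParent = model.Parent
	model.Parent = serverCamera -- stops replicating further changes
	handOff:FireClient(player, model, originalParent)
end

-- LocalScript: restore the model on this client only.
handOff.OnClientEvent:Connect(function(model, originalParent)
	model.Parent = originalParent -- note: scripts inside re-run here
end)
```

The server would call handToClient(player, model) whenever it wants a particular client to take over a model.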
Why not SetNetworkOwnership on the server side to the client? Well, you can only set network ownership to one client, but I'd want each individual client to handle the physics on their end, and my LocalScripts would determine whether they should, for optimization.
I guess in this current use case, a method to give independent physics ownership to each client would suffice, where physics are all done client-side and don't need to replicate back to the server to be broadcast.
It would default to true: when true, the part's physics simulation replicates based on its network owner.
When false, the server and every client simulate the physics independently, with no physics replication between them.
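As a sketch of what that could look like (the property name "ReplicatePhysics" is invented here for illustration; nothing like it exists today):

```lua
-- Hypothetical: opt a part out of physics replication entirely.
local hitbox = workspace.Enemy.PrimaryPart -- example path

hitbox.ReplicatePhysics = false -- proposed property, does not exist
-- From here on, the server and every client would each step this
-- part's physics locally, with no physics packets crossing the network.
```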
I might want to do replication myself in a way that is more optimized and better suited to my type of project, and for that I'd have to disable Roblox's replication somehow to save data and bandwidth.
Thanks for clarifying your points! With how I'd see this managed, I think it would actually be really good to have one singular global service just for replication and networking. Then you could manage large groups of objects without having to keep strict pointers to everything. I definitely think that if newer developers implement it, it should be used very sparingly, as they might not know how to manage the networking and replication (documentation would definitely help). I would still recommend rewriting your feature request to be use-case focused rather than solution focused; it fosters more discussion between engineers and users, as there's room to negotiate the best solution.
I would like something like task.desynchronize() but instead task.disableReplication()
My game has a lot of instances in workspace and I can’t run ClearAllChildren() on the server because it causes a massive spike in networking traffic and will crash players with lower end connections.
I would like to be able to tell the server to delete all instances in workspace without replicating it then fire a remote to tell the client to do the same to greatly reduce the amount of network traffic.
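A hypothetical sketch of that pattern (task.disableReplication(), task.enableReplication(), and the "ClearMap" RemoteEvent are all invented for illustration; no such API exists today):

```lua
-- Server-side Script (hypothetical API):
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local clearMap = ReplicatedStorage:WaitForChild("ClearMap") -- assumed RemoteEvent

task.disableReplication()    -- hypothetical: stop replicating changes
workspace:ClearAllChildren() -- no removal packets would be sent
task.enableReplication()     -- hypothetical: resume replication
clearMap:FireAllClients()    -- one tiny packet instead of thousands

-- LocalScript:
clearMap.OnClientEvent:Connect(function()
	workspace:ClearAllChildren() -- each client clears its own copy
end)
```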