Yeah, I’ve been running the implementation since around October last year, but decided to post here now since you seemed to be looking for implementation examples. As far as I can tell it works fairly well, and I introduce around 500ms of delay by default, since worst-case pings seem to average around 250ms round trip.
Is there any open-source practical use of the module? I want to use this in my game but I’m confused about how to implement it.
i am making an obby racing game and i wanted the “3, 2, 1” countdown to sync so it doesn’t say “go” like .5 seconds after you actually go.
It’s really easy to implement, just replace tick() with clock:GetTime() or whatever. For example:
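(A minimal sketch for the countdown use case above; the module paths and RemoteEvent name are made up, and the server and client halves are shown together.)

-- Server: pick a shared "go" moment a few seconds in the future.
local clock = require(game.ReplicatedStorage.SyncedClock) -- hypothetical path
local countdownRemote = game.ReplicatedStorage.CountdownRemote -- hypothetical RemoteEvent
countdownRemote:FireAllClients(clock:GetTime() + 3)

-- Client: count down against the synced clock instead of tick().
local RunService = game:GetService("RunService")
countdownRemote.OnClientEvent:Connect(function(goTime)
	while clock:GetTime() < goTime do
		RunService.Heartbeat:Wait()
	end
	print("GO!") -- fires at (nearly) the same instant on every client
end)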
BTW, you can get away with just timestamping with tick() for your use case. It sounds like it’s not really the exact timing of the notes that matters, but the timing of notes relative to other notes. So, you can just figure out how many fractions of a second one note should play after another, and relay that information.
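A rough sketch of that idea (the remote name and note format are made up):

-- Server: send note times relative to the first note, not absolute clock times.
local noteRemote = game.ReplicatedStorage.NoteRemote -- hypothetical RemoteEvent
noteRemote:FireAllClients({0, 0.25, 0.5, 1.0}) -- seconds after the first note

-- Client: anchor the pattern to its own local clock on receipt.
local function playNote() end -- stand-in for your actual note playback
noteRemote.OnClientEvent:Connect(function(offsets)
	for _, offset in ipairs(offsets) do
		task.delay(offset, playNote)
	end
end)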
Of course, my sync tech works too, but since it uses some pretty hacky stuff, if Roblox decides to change their networking behavior unannounced, it could suddenly break one day with no warning. So please be wary of that.
How would I sync a Vector3 or CFrame value like what was done in the demo?
Could you explain how this works? Recently I’ve been trying to make a lag compensation system, but the ping calculations not being perfect (because packets are not sent the moment the call is made) keep offsetting my predictions.
The green block is created on the client, while the red one is created on the server. Red is supposed to estimate where the white block is on the client; green is where it actually is. This does a decent job, considering this block was moving at 60 studs per second, but it’s still not perfect, and I think that’s because of the delay between a remote call being made and the packet actually being sent. I’d need to always know how long the previous network update took to replicate from server to client.
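Roughly, the estimate I’m describing boils down to this (a simplified sketch; the part names and ping value are made up):

-- Lag behind the white part along its velocity by the one-way trip time,
-- to approximate where the client currently sees it.
local RunService = game:GetService("RunService")
local whitePart = workspace.White -- the moving part
local redPart = workspace.Red -- the server's estimate marker
local pingSeconds = 0.1 -- measured round-trip time for this client

RunService.Heartbeat:Connect(function()
	local oneWay = pingSeconds / 2
	redPart.Position = whitePart.Position - whitePart.AssemblyLinearVelocity * oneWay
end)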
[UPDATE]: I have rewritten this module to version 2.0. This new version should be much, much more stable, relies on less hacky code to accomplish practically the same result, and will maintain the same precision level even during extremely high-latency situations. If you have used the original version of this code in your games, please swap it out with this new one. Updated code can be found on GitHub, as well as in the uncopylocked demo place.
Try using the new version of this sync clock. I rewrote and took a smarter, simpler approach to predicting/circumventing unpredictability in the packet buffer.
Very useful module; I’m using it right now in my effects system to properly queue up effects to be processed. I recommend it.
I like this method a lot. Previously I was using Quenty’s at 1-second updates, but I will probably switch to yours since it’s more continuous.
I think there is a typo on line 106 where you should FireClient, not FireAllClients, and on line 52 I think total makes more sense as math.min(count, 50).

ReplicationPressure is also a really cool concept. Is this your substitute for UDP? I’m curious to learn more use cases (physics? non-physics?) and how you determine your constants, e.g. module.Threshold.
Slightly off topic, but I saw in another thread that you structure your code to run mostly in heartbeat—would it be possible to share more about this?
Good catch. It should definitely be FireClient; I will update the GitHub and the uncopylocked place with the correction in a bit. Line 52 is correct.
Replication pressure I will explain in a moment
Replication pressure is basically a measure of how many things are being replicated at that moment. I use game.DescendantAdded and keep a tally of how many instances are added each frame.
Why is this important? It’s important because when many things are being replicated at once, all of your remotes will get held up while the replication is occurring. So, if you parent a massive map from ServerStorage to Workspace, all of your remotes that were fired during that period will have to wait for however long it takes to replicate that map, which could be 10 seconds, a minute, etc.
So basically I track “Replication Pressure” and if the pressure is really high, I stop updating the offsets temporarily.
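A minimal sketch of that tracking idea (the Pressure/Threshold names here are illustrative, not necessarily the module’s actual internals):

local RunService = game:GetService("RunService")

local Pressure = { Current = 0, Threshold = 50 } -- per-frame instance count; tune for your game
local addedThisFrame = 0

-- Tally every instance that gets added (i.e. replicated in) this frame.
game.DescendantAdded:Connect(function()
	addedThisFrame += 1
end)

-- Each frame, publish the tally and reset it.
RunService.Heartbeat:Connect(function()
	Pressure.Current = addedThisFrame
	addedThisFrame = 0
end)

-- "Now is a bad time to be calculating the synced time."
local function isPressureHigh()
	return Pressure.Current > Pressure.Threshold
end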
For your question about running code on heartbeat: I just picked heartbeat because I like heartbeat. I could’ve picked stepped and had essentially the same functionality. Picking renderstepped, however, is bad, because then your code cannot execute in parallel with the rendering code.
I understood what replication pressure does in this specific module; I was wondering where else you use it and how you determine the Threshold constants, because it sounds like when many scripts use this pattern, Threshold becomes correlated with replication priority (and this leads to behavior similar to UDP with multiple priority levels).
Dang, I’d hoped you structured your code in some unique way by forcing non-traditional code into heartbeat.
Replication pressure, a term I just arbitrarily named, is only used to determine when to shut off changes to the timing offsets. Just saying “now is a bad time to be calculating the synced time”, basically.
I actually do shove everything, and I mean everything, into a heartbeat update loop. I do not make use of wait()s or resuming Lua threads at all, which is probably a stupid thing, but I am very stubborn like that. The biggest pain point is UI animation, where I have to keep track of a bunch of tick()s for when animations start and stop. I have a handy easing-functions module, so I can do stuff like animThingy = easing.Quad.Out(tick() - startTime).
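In practice that looks roughly like this (a simplified sketch; the easing module’s interface here is assumed, with the input normalized to [0, 1]):

local RunService = game:GetService("RunService")
local easing = require(script.Parent.Easing) -- hypothetical easing-functions module

local frame = script.Parent.Frame -- some GuiObject to animate (made-up path)
local DURATION = 0.5
local startTime = tick()

RunService.Heartbeat:Connect(function()
	-- Drive the animation purely off elapsed time; no wait()s, no resumed threads.
	local alpha = math.clamp((tick() - startTime) / DURATION, 0, 1)
	local animThingy = easing.Quad.Out(alpha)
	frame.Position = UDim2.new(0.5, 0, 0.1 + 0.4 * animThingy, 0)
end)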
Is this particular to you, or is there a pattern I can read more about? I basically do everything w/ coroutines (except for physics and spring animations; if you’re using Quad, why not Roblox’s TweenService for efficiency?), but I want to learn more about the benefits of your style.
Oh, I think we might have a misunderstanding. I actually don’t think there is any benefit to the way I structure my code. Not performance, not intuitiveness, not organization. I just do what I do because I do ¯\_(ツ)_/¯
Fwiw I spend more time refactoring than writing new code
Line 95 should subtract self.ReceiveTick, not currentTick. It’s also slightly more accurate to calculate tick() directly in the remote event callback. You can observe the former (and have some trouble observing the latter, because Luau is so fast) by checking on the client in Studio Play mode that math.abs(module:GetTime() - tick()) is smaller. The difference from the line 95 bug is especially noticeable if you artificially (but stably) lower the frame rate:
local RunService = game:GetService("RunService")
local targ = 10 -- target frame rate
local t0 = 0
-- Record the start of each frame as early as possible.
RunService:BindToRenderStep("FrameStart", Enum.RenderPriority.First.Value - 1, function()
	t0 = os.clock()
end)
-- Busy-wait on Stepped, not Heartbeat, so there is no race condition with module:Heartbeat.
RunService.Stepped:Connect(function()
	repeat until os.clock() - t0 > 1/targ
end)
Also, tick() is going to be deprecated, and the equivalent is os.clock(), so you should probably switch to that.
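For reference, the check I’m describing is just something like this on the client (the module path is made up):

local module = require(game.ReplicatedStorage.SyncedClock) -- hypothetical path
-- Print how far the synced clock sits from the local clock each frame.
game:GetService("RunService").Heartbeat:Connect(function()
	print(math.abs(module:GetTime() - tick()))
end)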
Have you ever run into issues with the time being able to decrease (due to resyncing)? It seems like this could cause sneaky bugs. Why did you decide not to eventually stop resyncing (which would avoid that problem)?
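For illustration, the kind of monotonic guard I have in mind (not something from the module):

local module = require(game.ReplicatedStorage.SyncedClock) -- hypothetical path
local lastTime = 0
-- Clamp so a resync can never make the reported time step backwards.
local function monotonicTime()
	lastTime = math.max(lastTime, module:GetTime())
	return lastTime
end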
Also, do you get <1ms in your games? Maybe my networking just sucks XD
This module is really awesome
For anyone interested in a practical comparison of where each should be used, I simulated two sine-driven platforms, one with TimeSync and one with this module: the clear and the opaque studded platform, respectively.
Edit 1: Clarification: the far platforms use default replication. Note that the clock-synced models “lead” because they lack the passive replication delay.
Video:
(Note that the glass platform actually often leads in this case; Quenty’s slightly overshoots here, which is counterintuitive.)
Measurements:
TimeSync:
Send: 0.7 kb/s
Receive: 0 kb/s
Fluffmiceter’s:
Send: 2.1 kb/s
Receive: 2.0 kb/s
(These figures are on top of the ~0.1 kb/s background receive traffic.)
A caveat:
Personally, I do not believe you should be concerned about this bandwidth. When a popular game like Natural Disaster Survival can average 70 kb/s send and 140 kb/s receive, the discussion earlier in this topic about large player counts making this inapplicable is, from my understanding, wrong. The old 50 kb/s recommendation was per player, and it has been far exceeded in just a few years. I take advantage of this in my own games, which run smoothly at 100 kb/s receive and nearly equal send, with support for all devices.
tl;dr: both are useful and support diverse use cases, but with one Fluff clock you can have slow platforms as well as superior visualization of exceptionally fast bullet replication, which is beyond the scope of TimeSync, AFAIK the runner-up.