How do you sync server tick and client tick?

I need to sync server tick and client tick for lag compensation. I’ve seen some posts telling people to use the Nevermore module, but it seems the GitHub “installer” for that is no longer working, so that’s out of the window.
How would I go about syncing them up? Send tick from server when a player joins then add that to regular ticks on the client? Can I just set the tick discrepancy directly?

Here’s what the server would see if the player were to use the same attack over and over again without lag compensation:

With lag compensation, IF the player and the server have the same tick (I got this example by regularly pressing Play in Studio, since that already more or less syncs server and client):


I thought about syncing local time for server and client, but couldn’t come up with a decent solution. Instead, to achieve lag compensation on the client, I model ping on the client

  1. Invoke a RemoteFunction on the client. The server just needs to return true to indicate the request was received. Because I don’t fully trust RemoteFunctions, I also implement a 30 second timeout.
  2. Record the delta time between invoking and receiving. This is the ping. Store the ping in a table, along with the time stamp that the RemoteFunction was first invoked.
  3. 0.5 seconds after the previous ping began, repeat the above 2 steps (this could be straight away if the ping is that high). You can modify this interval to suit your needs; it’s not really intensive for the network, so you could probably ping pretty much continuously, but I don’t bother. Continue repeating.
  4. Every so often, remove ping data older than 60 (or however many) seconds from the table, because it’s wasted memory.

I then have multiple areas of code, mainly for syncing abilities and animations, which need to know the ping. These call a function of the ping module, with the number of seconds of sampling required as the parameter—I usually sample the last 2 seconds of ping. The function returns the median ping (or whatever sort of statistical analysis you want to use, you might want the 80th percentile or something, but median works best for me) over the last x seconds of sample time, which is used for lag compensation.
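As a concrete illustration, here is a minimal client-side sketch of that sampling scheme. The remote name PingCheck, the use of os.clock(), and the retention window are my own placeholder choices (nothing from the original module), and the 30 second timeout wrapper is omitted for brevity:

```lua
-- Client-side ping sampler (sketch; assumes a RemoteFunction named "PingCheck"
-- in ReplicatedStorage whose server callback simply returns true).
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local pingRemote = ReplicatedStorage:WaitForChild("PingCheck")

local samples = {} -- array of {time = <when invoked>, ping = <round trip>}

task.spawn(function()
	while true do
		local started = os.clock()
		pingRemote:InvokeServer() -- server returns true on receipt
		table.insert(samples, {time = started, ping = os.clock() - started})
		-- Drop samples older than 60 seconds; they're wasted memory
		while #samples > 0 and os.clock() - samples[1].time > 60 do
			table.remove(samples, 1)
		end
		task.wait(0.5)
	end
end)

-- Median ping over the last `window` seconds of samples
local function getMedianPing(window)
	local recent = {}
	local now = os.clock()
	for _, sample in ipairs(samples) do
		if now - sample.time <= window then
			table.insert(recent, sample.ping)
		end
	end
	if #recent == 0 then
		return nil
	end
	table.sort(recent)
	return recent[math.ceil(#recent / 2)]
end
```

Swapping the median for an 80th percentile is just a matter of changing the index into the sorted table.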

It’s probably about 95% accurate for my use case, and it’s simple to implement. I hope I understood your use case properly.


I believe tick() on the server and client should be identical when calling it in Studio. Recently, I’ve been working on a similar clock synchronization algorithm; it should do what you want, but there might be room for some improvement.

-- Licensed under MIT

local INTERVAL = 60 -- seconds between re-synchronizations
local SAMPLES = 100 -- clock samples taken per synchronization
local SAMPLE_DELAY = 0 -- optional wait between samples

local DEBUG = false

local StarterPlayer = game:GetService("StarterPlayer")
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local RunService = game:GetService("RunService")
local Players = game:GetService("Players")

local stats = require(script.Stats)

local responder = script:WaitForChild("Responder")
local remote = Instance.new("RemoteFunction")

remote.Name = "Shift"
remote.Parent = ReplicatedStorage
responder.Parent = StarterPlayer.StarterPlayerScripts

local offsets = {}
local module = {}

-- Discard samples more than one standard deviation from the median
local function filterPackets(packets)
	if #packets < 2 then
		return packets
	end
	local median = stats.median(packets)
	local deviation = stats.deviation(packets)
	if DEBUG then
		print("Packet Stats:", median, deviation)
	end
	local filtered = {}
	for i = 1, #packets do
		local packet = packets[i]
		if math.abs(packet - median) <= deviation then
			table.insert(filtered, packet)
		end
	end
	return filtered
end

-- One round trip: assume half the round-trip time is the one-way latency
local function performSynchronization(player)
	local serverClock0 = tick()
	local clientClock = remote:InvokeClient(player)
	local serverClock1 = tick()
	local latency = (serverClock1 - serverClock0) / 2
	return serverClock1 - (clientClock + latency)
end

local function calculateOffset(offsets)
	local filteredOffsets = filterPackets(offsets)
	return stats.mean(filteredOffsets)
end

local function synchronize(player)
	local results = {}
	for i = 1, SAMPLES do
		local offset = performSynchronization(player)
		table.insert(results, offset)
		offsets[player] = calculateOffset(results)
		if SAMPLE_DELAY > 0 then
			wait(SAMPLE_DELAY)
		end
	end
	if DEBUG then
		print("Offset:", offsets[player])
	end
end

Players.PlayerAdded:Connect(function (player)
	local function NTP()
		synchronize(player)
		delay(INTERVAL, NTP)
	end
	NTP()
end)

function module:GetOffset(player)
	return offsets[player]
end

return module

stats is just a module with statistical functions such as standard deviation, mean, median, etc. Responder is a LocalScript that returns tick() to the RemoteFunction when it is invoked.

It works by doing the following:

  1. Client stamps current local time on a “time request” packet and sends to server
  2. Upon receipt by server, server stamps server-time and returns
  3. Upon receipt by client, client subtracts the sent time from the current time and divides by two to compute latency. It subtracts current time from server time to determine the client-server time delta and adds in the half-latency to get the correct clock delta. (So far this algorithm is very similar to SNTP)
  4. The first result should immediately be used to update the clock since it will get the local clock into at least the right ballpark (at least the right timezone!)
  5. The client repeats steps 1 through 3 five or more times, pausing a few seconds each time. Other traffic may be allowed in the interim, but should be minimized for best results
  6. The results of the packet receipts are accumulated and sorted in lowest-latency to highest-latency order. The median latency is determined by picking the mid-point sample from this ordered list.
  7. All samples above approximately 1 standard-deviation from the median are discarded and the remaining samples are averaged using an arithmetic mean.



How accurate is it? I decided not to implement this in my game, because I thought dividing by 2 leaves too much room for error and assumes too much. My download speed is good but my upload speed is horrendous, for example. (I am probably not doing networking justice, but that’s my thought process.)

I haven’t experienced any issues in accuracy using my modelling of ping, but synchronisation could be useful for other use cases. I haven’t measured it, but empirically it seems accurate to within <50ms most of the time (based on my crap internet). It would be great if there was a precise equivalent of tick() that was UTC time.

It’s worthwhile to note @OP that these sorts of lag compensations could be quite insecure, as they fundamentally rely on the client, so it would be good practice not to put too much faith in them for anything that needs to be secure. In my case, I only use them to sync a brief “charging up” animation that hides the latency between telling the server that the client wants to do an ability, and the server simulating the physics effect of the ability.

I haven’t benchmarked it extensively, but I get results that I would consider usable. You’re right that having different send and receive speeds can make the approximation less accurate, but it’ll still be within twice the latency, which isn’t too bad.

As for security, you can improve on this by logging responses from the client. Whenever the client sends over a timestamped packet, compare it to the previous packet and verify the timestamps are comparable to your own timestamps. This makes it impossible for the client to continuously change their clock and get meaningful results; all they’ll be doing is increasing their latency, and you’ll have a way to detect that.
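A rough server-side sketch of that check; the function name, bookkeeping tables, and tolerance value are all illustrative, not from any particular implementation:

```lua
-- Compare the gap between the client's consecutive timestamps with the gap
-- between our own receipt times. A client that rewinds or fast-forwards its
-- clock between packets produces a mismatch between the two deltas.
local lastClientStamp = {}
local lastServerStamp = {}
local TOLERANCE = 0.25 -- seconds of allowed disagreement (placeholder value)

local function validateTimestamp(player, clientStamp)
	local now = tick()
	local prevClient = lastClientStamp[player]
	local prevServer = lastServerStamp[player]
	lastClientStamp[player] = clientStamp
	lastServerStamp[player] = now
	if not prevClient then
		return true -- first packet, nothing to compare against yet
	end
	local clientDelta = clientStamp - prevClient
	local serverDelta = now - prevServer
	return math.abs(clientDelta - serverDelta) <= TOLERANCE
end
```

Latency jitter means the deltas will never match exactly, so the tolerance needs tuning against real traffic; repeated failures are the signal, not a single one.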

Should you not just use the result from the response with the lowest latency? I might be missing something, but this would require no sorting and be much simpler.

I’ve built a finished lag compensation system plus a simple FPS prototype, and what I can say is that you don’t need to sync server and client. Since you’re only rewinding time by the ping of the player that shot the projectile, all you really need to know is every player’s ping (and, obviously, to have at least a whole second of rewind frames recorded and stored). You’d invoke a RemoteFunction from the client to the server, wait for the response, and record the time passed; that’s your ping (1-5 ms difference max). Then send that to the server with a RemoteEvent (the server does all the compensation work, so it has to store every player’s ping). At the time of the shot, all the server has to do is look up the shooter’s current ping, rewind all other players’ characters back by that much time, and calculate the hit registration. Mine works perfectly. My system ignores interpolation (which can be added in very easily) and it’s not 100% accurate, so I just increased the detection hitbox sizes by 10% to compensate for the inaccuracy, and now it works 100% of the time.

You can test the accuracy of the system here

You can be confident that there’s no drift in the values that you get back from tick(), since the underlying operating system is accounting for that. That means that you only need to track the drift in network latency.

Cristian’s algorithm would be the easiest way to do this. TL;DR: You have the client ping the server and then the server pings back the client (or vice versa), and you assume that half of that total round trip time = the latency. This could even be done in a single call using a RemoteFunction. You can also build updates to the ping into remote functions / events that are being used anyways for other purposes.

Keep in mind that if you’re on WIFI the latency may fundamentally be very inconsistent though: Over the air there is always some relevant rate of lost data, and every time data is lost it adds a lot to the latency for that packet. You may want to try to detect and ignore outliers to fix that.


I’ve found the answer to this problem on my own. Not quite sure how efficient it is though.

-- server sends its ping, called serverping
local serverlatency = tick() - serverping
local ping = tick() - serverlatency
return ping

-- server's side
local clientping = tick() - ping
Seems to work just fine. I am probably doing a double ping check, but :woman_shrugging:, it does its job just fine.

Ok, here’s a way better way to do it:

  1. Get the current tick.
  2. Invoke the client. It doesn’t matter what function, as long as it returns without any delay (even an empty function would work).
  3. Subtract the tick from before invoking from the current tick.

Boom bam, there’s your ping.
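Those steps are only a few lines on the server. This sketch assumes a RemoteFunction (here called "Ping") whose client-side OnClientInvoke callback returns immediately; the names are placeholders:

```lua
-- Server-side round-trip measurement via an empty client callback.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local pingRemote = ReplicatedStorage:WaitForChild("Ping")

local function getPing(player)
	local before = tick()
	pingRemote:InvokeClient(player) -- client returns immediately
	return tick() - before -- full round trip; halve it for one-way latency
end
```

Note this yields until the client responds, so call it from its own thread and guard against a client that never answers.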


That’s pretty much exactly what my suggestion was for you in the first place.


Sorry, I didn’t understand what you meant because of the way it was worded, and because steps 3 and 4 seemed completely different from what I was trying to accomplish. I ended up thinking that steps 1 and 2 were purely to achieve this result I wasn’t looking for. My bad.

The point of Step 3 and 4 is to continuously poll the ping; usually, only getting it once will be inaccurate. This is because, for many players, ping can fluctuate (relatively) significantly over their playtime. By constantly getting the current ping, you can get an accurate idea of what the ping will (probably) be like for the next few seconds, by looking at the last few seconds of ping. This will allow your lag compensation to be more accurate and seamless.

Usually, you’d want to use the median ping from the last few seconds of polling, because the median is less influenced by extreme values compared to the mean.

Why do this when you can just get the ping when it matters, i.e. the moment somebody attacks?

Here’s what the server would see if the player were to use the same attack over and over again

Meaning there’s only 1 check with lag compensation, not a constant one that requires the ping to be checked often. You could just invoke a “hitbox check” to the client to make sure it thinks it hits something whilst also getting the ping with above method, then do the server-sided hitbox check with that ping as lag compensation. It would be far more convenient and possibly more accurate as long as the client replies straight away.

My logic is that polling only once makes the result a lot more prone to random bias, which makes it unrepresentative of the ping at the point where you need it: think of a lag spike at the exact moment the ping is polled. A median based on a sample of several seconds is more likely to reflect the “normal” / average ping at the moment you need it.

It might not make a difference for your individual use case; I’m not entirely clear on what you’re trying to use it for based on your original post.

In my own case, I use my method to sync the client to play a “prepare” animation, while the RemoteEvent telling the server to initiate the ability is sent off. The prepare animation only takes as long as the ping (minimum 0.2 seconds, the firing of the RemoteEvent is delayed proportionally if the ping is below that). If I only used the last ping, it could: (1) be unusually large, and (2) it could be quite some time ago, depending on the actual latency itself, which makes it less accurate than a rolling average.

Even with very shady WiFi (sometimes I get 1000 ping on Roblox games at a moment’s notice, lol) this has been incredibly accurate for my use case.

Your use case especially might be different because it looks like you’re doing this on the server—where I am focused on client-side prediction. It could be trivial for your game, but it’s also worth noting that, if you’re invoking the client to determine their ping, then the client can artificially increase their ping by yielding during the OnClientInvoke callback. Whether this is significant depends on your game, but for this reason I would prefer to only use ping for client-side prediction in reducing perceived latency.
