os.time() vs. tick() vs. os.clock() vs. ?

I need a way to get a unified time between server and client, with a resolution higher than 1 second, for the purposes of syncing. So far, from my research, the only time value unified between the two is os.time().

tick() will return different results depending on timezone, so that’s out for this purpose. os.clock() will also return different results on both server and client.

os.time() would be fine, except that it only provides results to the whole second, with no decimals. And while the other options do have decimals, they are not useful for syncing time between the server and client.

Am I missing something? Does anyone know a way to get a single, unified time between server and client that has decimals below the seconds interval?

In the spirit of avoiding the XY problem, for what purpose do you need a unified time between the client and server?

5 Likes

DateTime would probably help with this.

tick() will be deprecated soon, so that should be out of the question as well, and os.clock() returns the CPU time Lua has used, in seconds.

3 Likes

os.time() = Unix epoch time
tick() = os.time() with extra (sub-second) precision
os.clock() = CPU-allocated time

Forgot to mention: tick() is normalized to the local timezone. In other words, it is relative to where the client or server is located in the world.

2 Likes

This simply cannot be done reliably with only built-in functions, due to their reliance on each system's clock (which tends to be inconsistent between machines). Your best bet would be to use an implementation of NTP to sync the time between the client and server. For reference, here is a Lua implementation of NTP using sockets.

2 Likes

I think I mentioned this in the OP, but my use case is to sync the arrival time of a moving hitbox on the server with the effect parts on the client.

The server starts the tween on its hitbox and fires a remote to the client with the os.time() value of the tween's completion. The client receives the remote and makes its own tween complete at that same time. My issue is that os.time() is not fine-grained enough; half a second is very noticeable.

If only os.time() had a higher resolution, maybe it would work.

I will look into DateTime, it looks promising.

1 Like

You could probably fire a remote event to the server and see how long it takes for the server to respond back with an exit code (FireServer yields, iirc).

2 Likes

This is incorrect; FireServer does not yield. But you can use a RemoteFunction for this, since InvokeServer does yield until the server returns a value.
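A minimal sketch of the round-trip measurement idea with a RemoteFunction. The RemoteFunction named "Ping" in ReplicatedStorage is an illustrative assumption, not something from the thread:

```lua
-- Server (Script): answer immediately so the client can time the round trip.
-- Assumes a RemoteFunction named "Ping" has been placed in ReplicatedStorage.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local ping = ReplicatedStorage:WaitForChild("Ping")

ping.OnServerInvoke = function(_player)
	return true -- trivial "exit code"; the client only cares about timing
end

-- Client (LocalScript): InvokeServer yields until the server returns.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local ping = ReplicatedStorage:WaitForChild("Ping")

local start = os.clock()
ping:InvokeServer()
local roundTrip = os.clock() - start
-- One-way latency is roughly half the round trip,
-- assuming latency is symmetric (which it often isn't exactly).
print(string.format("RTT: %.3f s, est. one-way: %.3f s", roundTrip, roundTrip / 2))
```

This gives you a latency estimate you could use to offset a shared timestamp, though it needs to be re-measured periodically since latency varies.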

1 Like

There’s time(), though I’m not sure if it has decimals; it’s based on when the server was launched, I believe.

I am pretty sure that if you use time() on the client, it returns the number of seconds since you joined the server, and on the server I thought it returns how long the server has been running.
Correct me if I am wrong.

Use Workspace:GetServerTimeNow().
From the devhub:

GetServerTimeNow returns the epoch time on the server with microsecond precision. The value returned by this function is adjusted for drift and smoothed monotonically, i.e., it is guaranteed to be nondecreasing. This clock will progress no faster than 1.006× speed and no slower than 0.994× speed.

This function is useful for creating synchronized experiences, as it has three properties necessary for doing so: it is a real-world time clock, it is monotonic, and it has decent precision. Essentially, it is the client’s best guess of what os.clock would return on the server.

As this function relies on the server, calling it on a client before it has connected will throw an error.
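Applied to the tween use case above, a minimal sketch. Names like `remote`, `hitbox`, `effectPart`, `targetPosition`, and `TWEEN_DURATION` are illustrative assumptions, not from the thread:

```lua
-- Server (Script): start the hitbox tween and broadcast a shared
-- wall-clock arrival time instead of a whole-second os.time() value.
local TweenService = game:GetService("TweenService")
local TWEEN_DURATION = 2 -- seconds (example value)

local arriveAt = workspace:GetServerTimeNow() + TWEEN_DURATION
TweenService:Create(hitbox, TweenInfo.new(TWEEN_DURATION),
	{ Position = targetPosition }):Play()
remote:FireAllClients(arriveAt)

-- Client (LocalScript): start the effect tween so it finishes at
-- the same instant, using however much time remains on arrival.
remote.OnClientEvent:Connect(function(arriveAt)
	local remaining = arriveAt - workspace:GetServerTimeNow()
	if remaining > 0 then
		TweenService:Create(effectPart, TweenInfo.new(remaining),
			{ Position = targetPosition }):Play()
	end
end)
```

Because both sides read the same drift-adjusted clock, the client's tween compensates for network latency automatically: whatever time the remote spent in transit is subtracted from `remaining`.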

11 Likes