Is this all your code or part of your code?
It’s all the code you are required to see to answer the question…
I suggest not firing the client. You can instead simply loop over the players in a for loop. It may sound more complicated, but it’s faster than firing/invoking every client per tick.
What are you talking about?
A timer script… isn’t that what you need help with?
I don’t see how
for _, player in pairs(game.Players:GetPlayers()) do
Remote:FireClient(player)
end
Is faster than
Remote:FireAllClients()
Are you trying to update a ui?
To be honest, you’re better off using a value object to fix this problem. I know it sounds depressing, and I don’t like them either, but I’ve had to use the simpler way to fix these issues in the past. Put some values in ReplicatedStorage and use GetPropertyChangedSignal on the values for the UI. It will reduce the latency to nothing.
How I would do it is to have an IntValue and change its value: if the value is greater than -1, display it; when it reaches -1, hide it again. Just have it connected with the .Changed signal of the IntValue.
Simply have the server change the IntValue.Value, have the LocalScript watch for the timer to complete, and have the server restart the timer itself when necessary.
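Something like this on the server, as a rough sketch (the IntValue’s location and the round length here are assumptions, not from the posts above):

local Timer = game.ReplicatedStorage:WaitForChild("Timer") -- the IntValue

while true do
	Timer.Value = 60 -- start a 60-second round
	while Timer.Value > -1 do
		task.wait(1) -- drifts a little; see the timestamp discussion further down
		Timer.Value -= 1
	end
	-- Value hit -1, so the LocalScript hides the UI
	task.wait(5) -- intermission, then the loop restarts the timer
end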
Also, using .Changed will mean that whenever another (not intended) property is changed (such as Visible in GUIs or Transparency in a Part), the function tied to that .Changed will be triggered. :GetPropertyChangedSignal(), on the other hand, fires only when the specified property is changed, like below in both the Code Block and Example Place.
local Settings = game.ReplicatedStorage.Settings.Mutators
local Timer = Settings.Timer
local MaxTime = Settings.MaxTime
local Countdown = script.Parent.Countdown

Timer:GetPropertyChangedSignal('Value'):Connect(function()
	if Timer.Value >= MaxTime.Value then
		-- timer just (re)started, so show the frame
		script.Parent.Visible = true
	end
	Countdown.Text = tostring(Timer.Value)
	if Timer.Value <= 0 then
		script.Parent.Visible = false
	end
end)
Not ENTIRELY sure I got that right; I’m certain a better programmer will outright correct me, point you to a better method, or otherwise explain it better.
Example Place
I made this quickly as an example of a GetPropertyChangedSignal() used in a timer.
This is actually the method I use, and personally the best one. Mine is almost the same, except that I convert the time to 00:00 format before displaying it.
I haven’t figured that out yet. Please feel free to upload a better version, as resources like this aren’t always the easiest to find.
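For what it’s worth, a quick sketch of one way to do the 00:00 conversion (formatTime is just an illustrative name):

-- Converts a number of seconds into an MM:SS string
local function formatTime(totalSeconds)
	local minutes = math.floor(totalSeconds / 60)
	local seconds = totalSeconds % 60
	return string.format("%02d:%02d", minutes, seconds)
end

print(formatTime(90)) --> 01:30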
I typically just use a synced clock, then
Clock:GetTime() - StartTimeValueInWorkspace.Value
To give you the basic idea. That value would be set to Clock:GetTime() on the server.
This would of course give you the elapsed time since the countdown began which you can use to format however you want.
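A rough sketch of that idea, assuming a synced clock module that exposes a GetTime() method (the module path and value name here are assumptions, not the actual API of any particular clock):

local RunService = game:GetService("RunService")
local Clock = require(game.ReplicatedStorage.SyncedClock) -- hypothetical module
local StartTime = workspace:WaitForChild("StartTimeValue") -- NumberValue the server sets to Clock:GetTime()

RunService.Heartbeat:Connect(function()
	local elapsed = Clock:GetTime() - StartTime.Value
	-- format `elapsed` (e.g. into 00:00) before displaying it
end)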
We made our own synced clock, but Quenty has one that’s pretty good. Just search up Time Sync Manager. I’d link it, but I’m on mobile and it’s being a pain.
You’ll never get server and clients perfectly in sync, but they should really only be off by roughly the client’s ping time, not whole seconds. One problem you have is that your timer isn’t really a timer; it’s just a loop that relies on wait(1) being 1 second. But wait(1) doesn’t wait for exactly 1 second: it waits for at least one second, usually slightly longer. Having other Lua threads running, especially ones that do a lot of work before yielding, can make wait(1) last much longer than 1 second.
You should use timestamps to track the passage of time, from one of the functions available that uses the computers’ actual clocks: tick(), os.time(), or time(). Which one is best depends on the exact use case, particularly whether or not time zones and actual time matter, or just time since the server started.
You can meter out a specific amount of time by sending a message to the client to count down for, say, 10 seconds. The client can then set a variable, e.g. local endTime = tick() + 10, and check on Heartbeat whether tick() > endTime, at which point the timer is done. When you use the actual clock functions like this, your client and server will only disagree by about your round-trip ping time, which is as good as it’s going to get.
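A minimal sketch of that client-side pattern (the remote name and the UI updates are placeholders, not from the original code):

local RunService = game:GetService("RunService")
local Remote = game.ReplicatedStorage:WaitForChild("TimerRemote")

-- The server fires a duration; the client counts down against tick() on Heartbeat
Remote.OnClientEvent:Connect(function(duration)
	local endTime = tick() + duration
	local connection
	connection = RunService.Heartbeat:Connect(function()
		if tick() > endTime then
			connection:Disconnect()
			-- timer done: hide the countdown UI here
		else
			-- update the UI, e.g. math.ceil(endTime - tick()) seconds left
		end
	end)
end)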
Generally speaking, assuming that a server machine and client both have their real time clocks in sync is also bad. In other words, don’t generate a time stamp on the server with os.time(), and then send it to the client and compare it to the client’s os.time(). Nothing guarantees the machines’ clocks are set correctly and in sync. So only communicate relative times between client and server, and only directly compare or do arithmetic with timestamps from the same machines.
I wouldn’t necessarily agree with the last bit if you use or create your own synced clock to generate the timestamp. It won’t be in perfect sync all the time, but it’ll be pretty dang close, much better than you’ll get by doing it completely locally. I’ve used synced clocks like Quenty’s for a while to dictate all kinds of things (projectiles are another great use case, for instance). It’s a very valuable tool.
Found the link to his
Yes, it’s true that by attempting to synchronize the clocks, i.e. compensate for network latency, you will get better average-case performance. The more stable the user’s connection (steady ping), the better the sync will be. In certain circumstances, where you are trying to minimize the impact of latency on gameplay, like in a FPS, or in a skill-based game where the skills have cooldown times that are near the same order of magnitude as ping times, this can be significant. But it remains true that sync better than worst-case round-trip ping cannot be counted on.
For the example at hand, a game round timer that is presumably at least a couple of minutes, I personally wouldn’t bother with the added complexity of a sync’d clock. Normally you would display a round timer in whole seconds anyway, so rounding to the nearest second will cover typical internet latency, which is usually less than half a second (and if ping is high and spiky, no sync scheme is going to help anyway).
You’re using wait(1). This isn’t guaranteed to be 1.0 seconds; it’s only guaranteed to be >= 1.0 seconds. It could be 1 second, or 1.03, or 1.1 seconds, depending on how slow the machine is.
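One way around that drift, as a rough sketch: compute the remaining time from a timestamp each iteration instead of trusting every wait(1) to be exactly a second (the names here are illustrative):

local duration = 60
local startTime = tick()
local remaining = duration

while remaining > 0 do
	task.wait(1) -- may overshoot, but the timestamp keeps the total honest
	remaining = duration - (tick() - startTime)
	-- display math.max(0, math.ceil(remaining)) here
end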
The post underneath has the second part of the solution:
Use a .Changed event inside a LocalScript that shows the timer seconds?
.Changed fires when any property is changed. :GetPropertyChangedSignal('Value') will only fire when the specified property is changed. Use the latter in cases where you’re dealing with value objects.
But that would be a good way to synchronize it though ^^