Things to look out for when using RunService Events (RenderStepped, Stepped, Heartbeat)

wait hasn't hurt anyone but people who think wait is bad

3 Likes

It's hurt me several times before. I've had a system where I used wait to do something, but the wait was offset, because that's just how wait works, and that resulted in timings being off, etc.

a 0.003 ms offset hasn't hurt anyone

2 Likes

0.003? No, it was nearly 0.1s off.

still doesn’t hurt anyone
at least name 1 case where it was a game-breaking issue

1 Like

I never said game-breaking in any way possible? I just said it hurt me while doing something that time. task.wait() is just better in general, but wait() isn't game-breaking.

Thank you so much for this. I've always had a vague idea that I'm an idiot whenever I work with RunService or coroutines, but seeing this article and looking up all the words I didn't know helped me a bit, I think. I'm still confused about a majority of it, but I can just keep re-reading till I get it.

It depends. I think task.wait's behaviour is good for certain use cases, like waiting for a rate limit on some service to end, but in that case the deltaTime it returns isn't useful for much; what would you do with that deltaTime? You're waiting for a rate limit, so I don't know why you would want to use it in that case, especially since for rate limits you would be using os.clock.
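To make that concrete, here's a rough sketch of the rate-limit case (the window length, budget, and function name are all made-up values):

local WINDOW = 60 -- seconds per rate-limit window (made-up value)
local BUDGET = 60 -- requests allowed per window (made-up value)

local windowStart = os.clock()
local used = 0

local function throttledCall()
    if os.clock() - windowStart >= WINDOW then
        windowStart = os.clock() -- the window rolled over, reset the budget
        used = 0
    end
    if used >= BUDGET then
        -- Out of budget: sleep until the window ends. The deltaTime that
        -- task.wait returns is discarded, because it tells us nothing useful here.
        task.wait(WINDOW - (os.clock() - windowStart))
        windowStart = os.clock()
        used = 0
    end
    used += 1
    -- make the actual request here...
end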

It really does depend on your usage: what you're using wait for.

And to be honest, I haven't even used wait for anything recently. For instance, this entire Fusion example has quite a few scripts, and I still didn't need it; a lot of it was event-based.

I’m here! :flushed: Had some comments and questions!


Wrt your TimePassed example, I think there would be a lot of preference for just updating a variable with the last run time and comparing whether the current time is greater than the time you want to wait for, rather than incrementing a variable. One comparison; set only if the time has elapsed.

local RunService = game:GetService("RunService")

local TIME_BETWEEN_RUNS = 5
local lastRunTime = 0

RunService.Heartbeat:Connect(function ()
	local timeNow = os.clock()
	if (timeNow - lastRunTime) < TIME_BETWEEN_RUNS then return end

	lastRunTime = timeNow

	-- Code that should run after 5 seconds have passed, repeat continually
end)

For the third point under “Do I need to use a wait function?”, I see that you chose to spawn a yielded thread instead of using coroutine.resume. Since I’m not all too familiar with task and coroutines right now but am slowly picking things up, any reason why you specifically chose to task.spawn it? What would differ if you used coroutine.resume instead to pick the thread back up?


If this is talking about the now-deprecated wait, there are a few problems with it, and calculating numbers isn't the reason it's problematic. Paraphrasing, if I recall correctly, three off the top of my head: wait runs on a legacy 30 Hz pipeline, wait spends additional time waiting for an open slot in the task scheduler to resume the thread, and there's a certain resume budget (this may apply to spawn and not wait; apologies if I have this wrong!).

Thought it might be good to explain the specifics of why wait is bad and what causes the "lag" to occur, because otherwise it's a little confusing trying to grasp why things happen the way they do. wait isn't really calculating any numbers, and even then, calculations aren't too computationally heavy.


Disagreed as per the above sample I provided which compares calling times instead of incrementing a variable every frame. In practice the code you initially proposed probably has no weight at all because it’s just a straight calculation on a piece of data on the stack versus calling a function. I dunno, os.clock seems more convenient here. Any difference you notice between use of os.clock and deltaTime?


I smell a fallacy in here. “It doesn’t hurt anyone” is a poor take on what the thread is trying to teach and it’s in general a non-talking point. Please come up with something substantial when replying.

The answer to using wait in your experiences is that you should generally never have to unless your specific case calls for it. Don’t set yourself up for failure. Create event-driven systems instead so you can predictably know when to handle experience events as they are fired.

The problem isn't necessarily wait itself (unless it's legacy wait, in which case it is the problem) but rather the practice implications that arise out of its use and how developers deploy it in their experiences, many of which are covered in the OP. As another example, a particularly egregious use of wait is as the conditional of a while loop.
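For illustration, here's a rough sketch of that anti-pattern next to an event-driven alternative (the part name is made up):

-- The anti-pattern: polling with wait as the while-loop conditional
-- (this spins forever, waking up over and over just to check a condition):
--
--     while wait(0.5) do
--         if workspace:FindFirstChild("Part") then
--             print("a Part appeared")
--         end
--     end

-- An event-driven version: the engine tells you exactly when it happens.
workspace.ChildAdded:Connect(function(child)
    if child.Name == "Part" then
        print("a Part appeared")
    end
end)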

5 Likes

The following conversation is hidden as it covers a hard-to-explain topic that has since been removed from the topic, and it doesn't provide much value or fix many issues.

For some reason it took me too long to get what was going on here. Anyway, this works, but I just don't like os.clock because it gets the time from the moment you call it, which can differ every time. I prefer using DeltaTime because it shouldn't have that issue; it's just the time since the last frame.

Also, os.clock is just a little bit slower to call, so personally I'm not a big fan, given there's already a value RunService gives me that I can use.

Yeah actually,
so…
coroutine.resume doesn't propagate errors to the console, so it's kind of a pain to debug code when using it. I had a bunch of errors I wasn't getting for some library I was making because I was using coroutine.resume, and it was a pain to figure out what was going on.

Also, coroutine.resume apparently had some problems with resuming user-created threads (or Roblox-created threads?), where in the past you had to yield the code with Heartbeat:Wait so that it would give you the parameters.
I'm not entirely sure how that worked, but I know there were some issues with it.
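A quick sketch of the error-propagation difference (the error message is made up for illustration):

local thread = coroutine.create(function()
    coroutine.yield()
    error("this error would normally be invisible") -- made-up error, for illustration
end)

coroutine.resume(thread) -- runs the thread up to its yield

-- Resuming with coroutine.resume swallows the error into its return values,
-- so nothing appears in the output unless you inspect them yourself:
--     local ok, err = coroutine.resume(thread)

-- task.spawn can also resume an existing thread, and an error raised inside
-- it surfaces in the console like normal:
task.spawn(thread)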

task.spawn/defer do have their own fair share of issues; however, they seem to be things that only annoy people whose code has weird behaviour.

For instance, it seems as if task.spawn/defer make it so that any thread resumed (or created) by them will not stop if the script is destroyed or its .Disabled property becomes true.
But I would expect people who disable their scripts at runtime to be doing some weird stuff anyway.

Well yeah, I was talking about wait functions in general,
for instance if you’re using something like this.

local Heartbeat = game:GetService("RunService").Heartbeat

local function Wait(n)
    local spent = 0
    repeat
        spent += Heartbeat:Wait()
    until spent >= n
    return spent
end

I'm not actually talking about, you know, just calculating that number, but about the fact that Heartbeat:Wait() yields and then resumes the thread once Heartbeat fires.

Which, when you have a bunch of waits running at once, means a bunch of threads being resumed and yielded again every frame just to calculate one number for each wait, which could potentially have a performance penalty.

I can't confirm how bad this is, but I guess if you're using something like BetterWait, in which there is only ever one thread handling these, I would expect that to be at least better.
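Roughly this idea (my own sketch of a single-scheduler wait, not BetterWait's actual implementation):

local RunService = game:GetService("RunService")

local waiting = {} -- [thread] = seconds still left to wait

local function SchedulerWait(seconds)
    waiting[coroutine.running()] = seconds
    return coroutine.yield() -- resumed by the single connection below
end

-- One Heartbeat connection services every waiting thread, instead of each
-- wait call resuming and re-yielding its own thread every single frame.
RunService.Heartbeat:Connect(function(deltaTime)
    local finished = {} -- resumed after the scan so the table isn't mutated mid-iteration
    for thread, remaining in pairs(waiting) do
        remaining -= deltaTime
        if remaining <= 0 then
            finished[thread] = -remaining
        else
            waiting[thread] = remaining
        end
    end
    for thread, overshoot in pairs(finished) do
        waiting[thread] = nil
        task.spawn(thread, overshoot) -- errors propagate, unlike coroutine.resume
    end
end)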

But yes, everything you talked about above does apply to the global wait function, so thanks for the extra info added to the post.

Spawn does run one wait() before running what you asked it to, so it would apply I guess.

Well, my biggest worry, like I said, is that there could be a slight offset from the time it's actually taking. I can't confirm how bad this could be, but I still believe you should just use DeltaTime to calculate these things.

Calling os.clock takes longer than just using the DeltaTime that RunService gives you anyway, so I don't see why not.

Let me show you an example of where I don't like how using os.clock changes results.

local Heartbeat = game:GetService("RunService").Heartbeat

local function Wait(n)
    --\\ Wait function that uses os.clock instead of deltatime to calculate time spent

    local timeNeededToReach = os.clock() + n
    repeat
        Heartbeat:Wait()
    until os.clock() >= timeNeededToReach
end

for _ = 1, 2 do
    task.spawn(function()
        Wait(2)
        print("Wait finished!")
    end)
end

In this example, in certain situations, these two Wait calls can be resumed on completely different frames.
And that's because os.clock changes between these two instances, so the time at which each of them agrees to be resumed can differ.

Sure, most of the time this wouldn't be that big of a difference, but over time these could become desynced.

This isn't an issue if you're using DeltaTime, so if you want consistency in when different things are resumed, then just go with DeltaTime; if not, I guess you can use os.clock. It might be just a little bit more expensive to run each frame, but that's fine.
It's mostly personal preference, but I believe most people should just go with DeltaTime anyhow.

The only good thing with os.clock, I guess, is that it has some extra decimals, but I don't think those are important for most use cases.

I personally prefer the behaviour of using DeltaTime over os.clock for those reasons.

@colbert2677

After testing out this module using os.clock and using DeltaTime, I can confirm that using os.clock gives you pretty unstable timing, or at least it's not like you can get pretty stable timing with it.

[screenshots of the os.clock test output]

As you can see, it starts to get pretty inaccurate pretty fast.
After some time, it was already offset by 0.2 seconds.
In my case, it seemed to drift by about 0.1 seconds every minute.

Now with DeltaTime, but not properly handling "excess" delta time in this case.

[screenshots of the DeltaTime test output, excess time not handled]

In this case, it's close to os.clock, but it feels like it doesn't get as bad as quickly.

Now with DeltaTime, but properly handling “excess” delta time.

[screenshots of the DeltaTime test output, excess time handled]

It could run for 10 minutes and it would still be consistent, always printing within those margins and not drifting at all.
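For reference, a minimal sketch of what I mean by "properly handling excess delta time" (the interval here is arbitrary):

local RunService = game:GetService("RunService")

local INTERVAL = 1 -- arbitrary tick length in seconds
local accumulated = 0

RunService.Heartbeat:Connect(function(deltaTime)
    accumulated += deltaTime
    if accumulated >= INTERVAL then
        -- Subtract the interval instead of resetting to 0, so a frame's
        -- overshoot carries into the next cycle and the timer never drifts.
        accumulated -= INTERVAL
        print("tick")
    end
end)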

Of course, it's not like I'm handling excess time from os.clock in this example; I don't think you can do that anyways, so :woman_shrugging:
Even if there were a better way to handle it using os.clock, there is still no reason why I would think using it instead of the DeltaTime that RunService gives you would be a good idea.

The only good thing is that it might be more understandable to read if someone doesn’t understand how DeltaTime works, but that’s pretty much it, and I don’t think that’s a good reason, especially when it can get expensive with multiple calls.

I’m not particularly sure what you meant when you were explaining the difference in the time fetches between os.clock and deltaTime or why it’s particularly relevant - mind elaborating? The wording trips over itself and doesn’t explain anything except that you dislike calling a function over accessing an argument. It also mentions an issue but doesn’t explain what said issue is or why it is one.

os.clock is inexpensive to call, so talking about speed is only really relevant in the context of microoptimisation. For most production uses the call time on os.clock should not make a noticeable indent, or one at all, while using it. My personal reason for using clock is because it’s convenient and it looks nice as well. I find that code designed strictly for performance doesn’t always look great.


Regarding the test you replied with at the bottom, I don't completely understand what you're doing here, but the output doesn't look too bad even though you mention that it is…?

I might not be looking properly, but I can't see where you're getting 0.1-0.2 second offsets. It'd be important to know what repro you're working with, because a delay that large is particularly egregious for time-sensitive code. If it was less time I'd understand, but this is a lot of time you're specifying, and if it's genuinely that high it can't be good.

If the 0.1-0.2 second offset claim is based on the timestamps in the console, then I think you have the incorrect numbers, because none of the tenths-place values are jumping out of range that high; it's only the hundredths and thousandths that are largely moving. I would not use the console's time as a way to benchmark; instead, rely on a real benchmark like printing the time differences.
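For example, something as simple as this trivial sketch:

local RunService = game:GetService("RunService")

local last = os.clock()
RunService.Heartbeat:Connect(function()
    local now = os.clock()
    -- print the real elapsed time instead of trusting console timestamps
    print(string.format("%.4f seconds since last resume", now - last))
    last = now
end)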

I can’t offer an informed response because I don’t fully understand what you’re doing here and the explanations felt a little esoteric and confusing. Sorry. Mind elaborating more on your test procedure?

2 Likes

So, actually you’re right.

Uh, I need to add some new info about this in that section.

Anyhow, the problem I was mentioning with os.clock comes up when you need to reset the os.clock baseline: you end up counting from the time you called it, and not from the moment the wait threshold was actually hit.

(That sounded really weird, it’s hard to explain, here’s a paint example)

This is the same issue with a normal Heartbeat connection too, if you just set TimeSinceLastUpdate to 0 and don't handle excess deltatime.
I just couldn't tell that you could fix this issue while using os.clock. You can; it's just not obvious right away, especially since the terminology here for everything is really confusing.

Instead of setting lastRunTime to os.clock, you just have to increase it by TIME_BETWEEN_RUNS.
This would be the only thing I think you didn't do the best way; that's pretty much it.

local RunService = game:GetService("RunService")

local TIME_BETWEEN_RUNS = 5
local lastRunTime = 0

RunService.Heartbeat:Connect(function()
	local timeNow = os.clock()
	if (timeNow - lastRunTime) < TIME_BETWEEN_RUNS then return end

	--lastRunTime = timeNow
	lastRunTime += TIME_BETWEEN_RUNS

	-- Code that should run after 5 seconds have passed, repeat continually
end)

This fixes that issue!

I agree with that, though I will say that the examples that I have at the moment don’t look their best. I am in the process of making them look better and more readable, and I did find some things that I can re-do and make easier.

I will continue to use DeltaTime as it's my preference: it looks better for my coding style, and I'm also just used to it.
However, as I said, using os.clock isn't actually as bad as I thought.

Anyhow, with that said, good night, it’s good to have some new info about things like these. I wouldn’t have looked at these things if you didn’t point them out again, so thanks.

Why exactly is that a problem? The point is that you want a reasonable reference point so you can check whether the clock in the current scope is 5 or more seconds past the last time the reference point was set. I don't see a difference between what you call a "problem" and what you call "not a problem". I don't understand what you mean by all this "excess" stuff either, nor why it's a problem.

The code sample you proposed does not fix the "issue" (again, what issue?). Run this code in Studio and check what it does. It will arbitrarily raise the threshold required before the code runs, starting with little to no wait at all until lastRunTime is greater than whatever os.clock returns. Only once lastRunTime catches up with os.clock will it start waiting 5 seconds before continuing.

How it looks initially:

[screenshot of the initial output]

And how it starts ending up:

[screenshot of the later output]

The reason why we set lastRunTime to os.clock is because we need to set a baseline for os.clock to compare with. The best way to set that baseline is to have os.clock compare what it returns at one point with what it returns at another point, being able to accurately see how much time has elapsed between calls. I’m not sure what sense there is in removing that baseline.

I’m not fully convinced you understand what you’re talking about and you’re pointing out a non-issue with confusing terminology to make it appear like there is an existing problem. You’re free to code as you like but don’t apply incorrect labels to practices.

Do keep in mind that you can’t get pinpoint accuracy, that’s just how the engine works. Whether you calculate time between os.clock calls or subtract deltaTime from a variable, your system will end up waiting a little bit more time than exactly n seconds. Clock is nice if you need high resolution as it goes up to one nanosecond. So in reality, there’s no inherent difference, rather there’s a negligible one, between both options. Therefore, my recommendation is to go with what’s more readable.

  • Removed anything about ‘excess delta time’, it’s a hard topic to explain and doesn’t provide much.
  • Removed section about using os.clock instead of DeltaTime.

I have some new data to add here soon to give this topic more depth.

I'd like to ask: what if you needed to time something such as a debounce? In what scenario would you use one of these things, or is there instead a better way to do something such as a debounce?

1 Like

With debounces, you’ve probably seen or even used

local debounce = true
repeat task.wait() until not debounce -- polls every frame until something else clears the debounce

Obviously, this is unnecessary polling and performs badly.

What I usually do is check a boolean and then use task.delay to schedule setting it back to false (its "falsification", or whatever you call it):

local debounce = false
function foo()
   debounce = true
   task.delay(3, function() debounce = false end)

   ...
end

game:GetService("UserInputService").InputBegan:Connect(function(Input)
   if Input.KeyCode == Enum.KeyCode.E and not debounce then
   	foo()
   end
end)
2 Likes

@ZensStarz

A way I found that's cool, I guess, is using os.clock and checking whether the time difference is big enough or not. It means LITERALLY no extra background processing. (Not that it's like super important performance-wise; task.delay is pretty performant, but I don't know, I just prefer this.)

local DebounceTime = 3 -- Bad naming, I think, but basically the minimum time that must pass before the next action can happen
local LastTimeClicked = 0

local function Click()
    if os.clock() - LastTimeClicked < DebounceTime then
        return
    end

    -- Click
    LastTimeClicked = os.clock()
end

Also, something extra to add: usually debounces are on the server, so make sure to see whether it makes sense to add a per-player "debounce".

1 Like

In which scenario would per-player debounces make sense to use? Something such as combat or skills, or should you instead use things like server-sided debounces?

I didn't get notified for this.

Well, I think you'll know when you see it. I don't have an example off the top of my head. But think rate limiting: if someone is repeatedly asking the server for something even though the client shouldn't allow that, that would be per-player.
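Something like this rough sketch, for instance (the remote name and cooldown value are hypothetical):

local Players = game:GetService("Players")
local ReplicatedStorage = game:GetService("ReplicatedStorage")

local someRemote = ReplicatedStorage:WaitForChild("SomeRemote") -- hypothetical RemoteEvent
local COOLDOWN = 1 -- hypothetical per-player cooldown, in seconds

local lastRequest = {} -- [player] = os.clock() of their last accepted request

someRemote.OnServerEvent:Connect(function(player)
    local now = os.clock()
    if lastRequest[player] and now - lastRequest[player] < COOLDOWN then
        return -- this player is rate limited; other players are unaffected
    end
    lastRequest[player] = now
    -- handle the request here...
end)

Players.PlayerRemoving:Connect(function(player)
    lastRequest[player] = nil -- clean up so entries don't leak for players who left
end)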