Things to look out for when using RunService Events (RenderStepped, Stepped, Heartbeat)

I’m not sure what you meant when you were explaining the difference in how os.clock and deltaTime fetch time, or why it’s particularly relevant - mind elaborating? The wording trips over itself and doesn’t explain anything except that you dislike calling a function over accessing an argument. It also mentions an issue but doesn’t explain what that issue is or why it is one.

os.clock is inexpensive to call, so talking about speed is only really relevant in the context of micro-optimisation. For most production uses, the call time of os.clock should not make a noticeable dent, or any at all. My personal reason for using clock is that it’s convenient and it looks nice as well. I find that code designed strictly for performance doesn’t always look great.
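If you want to sanity-check that yourself, here’s a rough sketch (not a rigorous benchmark) that times a large batch of calls:

local start = os.clock()
for _ = 1, 1e6 do
	local _ = os.clock()
end
print(("1e6 os.clock calls took %.4f seconds in total"):format(os.clock() - start))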


Regarding the test you replied with at the bottom, I don’t completely understand what you’re doing here, but the output doesn’t look too bad even though you mention that it is…?

I might not be looking properly, but I can’t see where you’re getting 0.1-0.2 second offsets. It’d be important to know what repro you’re working with, because a delay that large is particularly egregious for time-sensitive code. If it were less time I’d understand, but this is a lot of time you’re specifying, and if it’s genuinely that high it can’t be good.

If the 0.1-0.2 second offset claim comes from the timestamps in the console, then I think you have the wrong numbers, because none of the tenths-place values are jumping that far out of range; it’s only the hundredths and thousandths that are moving much. I would not use the console’s timestamps as a way to benchmark and would instead rely on a real benchmark, like printing the time differences yourself.
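For example, this is the kind of real benchmark I mean, a minimal sketch assuming you want the gap between Heartbeat runs:

local RunService = game:GetService("RunService")

local last = os.clock()
RunService.Heartbeat:Connect(function()
	local now = os.clock()
	print(("gap since last Heartbeat: %.4f seconds"):format(now - last))
	last = now
end)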

I can’t offer an informed response because I don’t fully understand what you’re doing here and the explanations felt a little esoteric and confusing. Sorry. Mind elaborating more on your test procedure?


So, actually you’re right.

Uh, I need to add some new info about this in that section.

Anyhow, the problem I was mentioning with os.clock comes up when you need to reset the stored os.clock value: you end up counting from the time you called os.clock, not from the time the wait threshold was actually hit.

(That sounded really weird, it’s hard to explain, here’s a paint example)

This is the same issue with normal Heartbeat too, if you just set TimeSinceLastUpdate to 0 and don’t handle the excess delta time.
I just couldn’t tell whether you could fix this issue while using os.clock. You can, it’s just not obvious right away, especially since the terminology here is really confusing.
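Here’s a minimal sketch of the delta time version of what I mean (the names are just placeholders):

local RunService = game:GetService("RunService")

local TIME_BETWEEN_RUNS = 5
local timeSinceLastUpdate = 0

RunService.Heartbeat:Connect(function(deltaTime)
	timeSinceLastUpdate += deltaTime
	if timeSinceLastUpdate < TIME_BETWEEN_RUNS then return end

	-- timeSinceLastUpdate = 0 -- resetting to 0 throws away the time that overshot the threshold
	timeSinceLastUpdate -= TIME_BETWEEN_RUNS -- subtracting instead carries the excess into the next cycle

	-- Code that should run every 5 seconds
end)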

Instead of setting LastRunTime to os.clock, you just have to increase it by TimeBetweenRuns.
This is the only thing I think you didn’t do the best way; that’s pretty much it.

local RunService = game:GetService("RunService")

local TIME_BETWEEN_RUNS = 5
local lastRunTime = 0

RunService.Heartbeat:Connect(function()
	local timeNow = os.clock()
	if (timeNow - lastRunTime) < TIME_BETWEEN_RUNS then return end

	--lastRunTime = timeNow -- the original approach: reset the baseline to the current time
	lastRunTime += TIME_BETWEEN_RUNS -- instead, advance the baseline by one interval

	-- Code that should run once every 5 seconds, repeating continually
end)

This fixes that issue!

I agree with that, though I will say that the examples that I have at the moment don’t look their best. I am in the process of making them look better and more readable, and I did find some things that I can re-do and make easier.

I will continue to use DeltaTime as it is my preference: it fits my coding style better, and I’m just used to it.
However, as I said, using os.clock isn’t actually as bad as I thought.

Anyhow, with that said, good night. It’s good to have some new info about things like these; I wouldn’t have looked at them if you hadn’t pointed them out again, so thanks.

Why exactly is that a problem? The point is that you want a reasonable reference point so you can check whether 5 seconds or more have passed since the last time that reference point was set. I don’t see a difference between what you call a “problem” and what you call “not a problem”. I don’t understand what you mean by all this “excess” stuff either, nor why it’s a problem.

The code sample you proposed does not fix the “issue” (again, what issue?). Run this code in Studio and check what it does. It starts with little to no wait at all, raising lastRunTime by 5 seconds on every run until it exceeds whatever os.clock returns. Only once lastRunTime catches up with os.clock does it start waiting 5 seconds before continuing.

How it looks initially:

[screenshot of the initial output]

And how it starts ending up:

[screenshot of the later output]
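If drift-free scheduling were actually the goal, you’d at least need to seed lastRunTime with os.clock at startup, something like this sketch:

local RunService = game:GetService("RunService")

local TIME_BETWEEN_RUNS = 5
local lastRunTime = os.clock() -- seed with the current time so the first wait is a full interval

RunService.Heartbeat:Connect(function()
	if (os.clock() - lastRunTime) < TIME_BETWEEN_RUNS then return end
	lastRunTime += TIME_BETWEEN_RUNS -- advancing by the interval carries the overshoot, without the catch-up phase

	-- Code that should run every 5 seconds
end)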

The reason why we set lastRunTime to os.clock is that we need a baseline for os.clock to compare against. The best way to set that baseline is to compare what os.clock returns at one point with what it returns at another, which lets you accurately see how much time elapsed between the calls. I’m not sure what sense there is in removing that baseline.
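In its simplest form, that baseline idea looks like this:

local before = os.clock()
-- ... do some work ...
local elapsed = os.clock() - before
print(("elapsed: %.6f seconds"):format(elapsed))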

I’m not fully convinced you understand what you’re talking about and you’re pointing out a non-issue with confusing terminology to make it appear like there is an existing problem. You’re free to code as you like but don’t apply incorrect labels to practices.

Do keep in mind that you can’t get pinpoint accuracy; that’s just how the engine works. Whether you calculate time between os.clock calls or subtract deltaTime from a variable, your system will end up waiting a little more than exactly n seconds. Clock is nice if you need high resolution, as it goes up to one nanosecond. So in reality there’s no inherent difference between both options, rather a negligible one. Therefore, my recommendation is to go with what’s more readable.
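To make that concrete, here’s the deltaTime counterpart of the os.clock pattern above, a sketch with assumed names:

local RunService = game:GetService("RunService")

local INTERVAL = 5
local remaining = INTERVAL

RunService.Heartbeat:Connect(function(deltaTime)
	remaining -= deltaTime
	if remaining > 0 then return end
	remaining += INTERVAL -- adding rather than resetting keeps the overshoot, like the os.clock version

	-- Code that should run every 5 seconds
end)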

  • Removed anything about ‘excess delta time’; it’s a hard topic to explain and doesn’t add much.
  • Removed section about using os.clock instead of DeltaTime.

I have some new data to add here soon to give the topic more depth.

I’d like to ask: what if you needed to time something such as a debounce? In what scenario would you use one of these approaches, or is there a better way to do something like a debounce?


With debounces, you’ve probably seen or even used

local debounce = true
repeat task.wait() until not debounce -- polls every frame until the debounce clears

Obviously, this is unnecessary polling and performs badly.

What I usually do is check a boolean and then use task.delay to schedule its falsification, or whatever you call it:

local debounce = false

local function foo()
	debounce = true
	task.delay(3, function() debounce = false end) -- schedule the reset; no polling needed

	...
end

game:GetService("UserInputService").InputBegan:Connect(function(Input)
	if Input.KeyCode == Enum.KeyCode.E and not debounce then
		foo()
	end
end)

@ZensStarz

A way I found that’s cool, I guess, is using os.clock and checking if the difference in time is enough or not. It means LITERALLY no extra background processing. (Not that it’s super important performance-wise, task.delay is pretty performant, but I don’t know, I just prefer this.)

local DebounceTime = 3 -- Bad naming, I think, but basically the time that needs to pass before the next action can happen
local LastTimeClicked = 0

local function Click()
    if os.clock() - LastTimeClicked < DebounceTime then
        return
    end

    -- Click
    LastTimeClicked = os.clock()
end

Also something extra to add, usually debounces are on the server, so make sure to try and see if it makes sense to add a per-player ‘debounce’.
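Here’s a minimal sketch of what I mean by a per-player debounce (the names are just examples): instead of one shared flag, you keep one timestamp per player.

local Players = game:GetService("Players")

local DEBOUNCE_TIME = 3
local lastUsed = {} -- [Player] = os.clock() of their last accepted action

local function tryUse(player)
	local now = os.clock()
	local last = lastUsed[player]
	if last and now - last < DEBOUNCE_TIME then
		return false -- this player is still on cooldown
	end
	lastUsed[player] = now
	return true
end

Players.PlayerRemoving:Connect(function(player)
	lastUsed[player] = nil -- clean up so the table doesn’t keep players who left
end)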


In which scenario would per-player debounces make sense to use? Something such as combat or skills, or should you instead use things like server-sided debounces?

I didn’t get notified for this.

Well, I think you’ll know when you see it. I don’t have an example off the top of my head, but think rate limiting: if someone is asking for something on the server repeatedly, even though the client shouldn’t allow that, that would be per-player.
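As a hypothetical sketch (the RemoteEvent name and the limit are assumptions), the server-side check would look something like:

local ReplicatedStorage = game:GetService("ReplicatedStorage")
local remote = ReplicatedStorage:WaitForChild("DoSomething") -- hypothetical RemoteEvent

local RATE_LIMIT = 1 -- assumed: at most one accepted request per second per player
local lastRequest = {} -- [Player] = os.clock() of their last accepted request

remote.OnServerEvent:Connect(function(player)
	local now = os.clock()
	local last = lastRequest[player]
	if last and now - last < RATE_LIMIT then
		return -- this player is asking too often; drop the request
	end
	lastRequest[player] = now
	-- handle the legitimate request here
end)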