I’m not sure what you meant when explaining the difference between fetching the time via os.clock versus deltaTime, or why it’s relevant here - mind elaborating? As written, the explanation only conveys that you dislike calling a function versus reading an argument. It also alludes to an issue without saying what that issue actually is or why it’s a problem. For reference, here’s a minimal sketch of the two approaches as I understand them (see below).
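To be clear about what I think we’re comparing - this is my own sketch, not your code, and the Heartbeat connection is my assumption about the setup:

```lua
local RunService = game:GetService("RunService")

local last = os.clock()
RunService.Heartbeat:Connect(function(deltaTime)
	-- Approach A: read the frame time the engine already passes in.
	local dtFromArgument = deltaTime

	-- Approach B: call os.clock() each frame and difference it yourself.
	local now = os.clock()
	local dtFromClock = now - last
	last = now

	print(dtFromArgument, dtFromClock)
end)
```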
os.clock is inexpensive to call, so its speed only really matters in the context of micro-optimisation. For most production uses the call overhead of os.clock shouldn’t make a noticeable dent, if any. My personal reason for using os.clock is that it’s convenient and reads nicely; I find code written strictly for performance doesn’t always look great.
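To put a rough number on “inexpensive”, here’s a quick micro-benchmark sketch - the iteration count is arbitrary, and it assumes you’re running in a Roblox Luau environment:

```lua
-- Time a large batch of os.clock calls to estimate the per-call overhead.
local iterations = 1000000
local start = os.clock()
for _ = 1, iterations do
	local _ = os.clock()
end
local elapsed = os.clock() - start
print(string.format("%d calls: %.4f s total, ~%.0f ns per call",
	iterations, elapsed, elapsed / iterations * 1e9))
```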
Regarding the test at the bottom of your reply: I don’t completely understand what you’re doing there, but the output doesn’t look too bad, even though you describe it as if it were…?
I might not be looking in the right place, but I can’t see where you’re getting 0.1-0.2 second offsets. It would help to know exactly what repro you’re working with, because a delay that large is egregious for time-sensitive code. A smaller offset I could understand, but 0.1-0.2 seconds is a lot, and if it’s genuinely that high something is wrong.
If the 0.1-0.2 second offset claim comes from comparing timestamps in the console, then I think the numbers are off: none of the tenths-place digits are jumping that far; only the hundredths and thousandths are moving much. I wouldn’t use the console’s timestamps as a benchmark at all - rely on a real benchmark instead, like printing the time differences yourself, as in the sketch below.
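Something like this is what I mean by printing the time differences - log the gap between the engine-supplied deltaTime and your own os.clock delta each frame (again, the Heartbeat setup is my assumption about your repro):

```lua
local RunService = game:GetService("RunService")

local last = os.clock()
RunService.Heartbeat:Connect(function(deltaTime)
	local now = os.clock()
	local measured = now - last
	last = now
	-- A genuine 0.1-0.2 s offset would show up here as a large diff.
	-- (The very first frame includes time since the script started, so
	-- ignore that one reading.)
	print(string.format("deltaTime=%.5f  clock delta=%.5f  diff=%.5f",
		deltaTime, measured, measured - deltaTime))
end)
```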
I can’t offer an informed response because I don’t fully understand what you’re doing here; the explanations felt a little esoteric and confusing to me. Sorry. Could you elaborate on your test procedure?